View the current address with: ip addr

1. Set a static IP on each VM: vim /etc/sysconfig/network-scripts/ifcfg-ens33
   master: 192.168.10.130   slave1: 192.168.10.131   slave2: 192.168.10.132
   ifcfg-ens33 contents (master shown; change IPADDR per node):

   TYPE="Ethernet"
   PROXY_METHOD="none"
   BROWSER_ONLY="no"
   BOOTPROTO="static"
   DEFROUTE="yes"
   IPV4_FAILURE_FATAL="no"
   IPV6INIT="yes"
   IPV6_AUTOCONF="yes"
   IPV6_DEFROUTE="yes"
   IPV6_FAILURE_FATAL="no"
   IPV6_ADDR_GEN_MODE="stable-privacy"
   NAME="ens33"
   UUID="b2a62afc-d6ef-4a91-b17f-c094abadf746"
   DEVICE="ens33"
   ONBOOT="yes"
   # IPADDR is this node's own address
   IPADDR=192.168.10.130
   GATEWAY=192.168.10.2
   NETMASK=255.255.255.0
   DNS1=192.168.10.2

2. Install the vim editor: yum -y install vim

3. Set the hostname: vim /etc/hostname
   master / slave1 / slave2 (one name per node)

4. Add the IP-to-hostname mappings on every node: vim /etc/hosts
   192.168.10.130 master
   192.168.10.131 slave1
   192.168.10.132 slave2
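After editing these files, the changes can be applied and verified with standard CentOS 7 commands (a minimal check, assuming the addresses and hostnames above):

   systemctl restart network          # reload the ifcfg-ens33 settings
   hostnamectl set-hostname master    # or edit /etc/hostname and reboot; use slave1/slave2 on the other nodes
   ip addr show ens33                 # confirm the static IP took effect
   ping -c 3 slave1                   # confirm /etc/hosts name resolution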
5. Install the JDK. Upload the JDK package to /opt on every node.
On each node, install it:
   rpm -ivh jdk-8u281-linux-x64.rpm
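To confirm the installation, check the version and the install path; the RPM unpacks under /usr/java, which is the JAVA_HOME used in the later steps:

   java -version    # should report java version "1.8.0_281"
   ls /usr/java/    # should list jdk1.8.0_281-amd64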
6. Upload the Hadoop package to /opt/soft and unpack it to /usr/local:
   tar -xvf hadoop-3.1.4.tar.gz -C /usr/local

7. Enter the Hadoop configuration directory and edit the configuration files:
   cd /usr/local/hadoop-3.1.4/etc/hadoop

(1) core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/log/hadoop/tmp</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
</configuration>
(2) hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_281-amd64
(3) hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///data/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///data/hadoop/hdfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
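Hadoop normally creates these directories itself, but pre-creating them on the appropriate nodes surfaces permission problems early (paths taken from the values above):

   mkdir -p /data/hadoop/hdfs/name    # on master (NameNode)
   mkdir -p /data/hadoop/hdfs/data    # on slave1 and slave2 (DataNodes)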
(4) mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- jobhistory properties -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
(5) yarn-site.xml (the <configuration> wrapper was missing and is required)
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${yarn.resourcemanager.hostname}:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>${yarn.resourcemanager.hostname}:8090</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>${yarn.resourcemanager.hostname}:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${yarn.resourcemanager.hostname}:8033</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/data/hadoop/yarn/local</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/data/tmp/logs</value>
  </property>
  <property>
    <name>yarn.log.server.url</name>
    <value>http://master:19888/jobhistory/logs/</value>
    <description>URL for job history server</description>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>1</value>
  </property>
  <property>
    <name>yarn.application.classpath</name>
    <value>/usr/local/hadoop-3.1.4/etc/hadoop:/usr/local/hadoop-3.1.4/share/hadoop/common/lib/*:/usr/local/hadoop-3.1.4/share/hadoop/common/*:/usr/local/hadoop-3.1.4/share/hadoop/hdfs:/usr/local/hadoop-3.1.4/share/hadoop/hdfs/lib/*:/usr/local/hadoop-3.1.4/share/hadoop/hdfs/*:/usr/local/hadoop-3.1.4/share/hadoop/mapreduce/lib/*:/usr/local/hadoop-3.1.4/share/hadoop/mapreduce/*:/usr/local/hadoop-3.1.4/share/hadoop/yarn:/usr/local/hadoop-3.1.4/share/hadoop/yarn/lib/*:/usr/local/hadoop-3.1.4/share/hadoop/yarn/*</value>
  </property>
</configuration>
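Rather than maintaining this long path list by hand, the value can be generated: the hadoop command prints the classpath it computes itself, and the output can be pasted into the <value> element above:

   /usr/local/hadoop-3.1.4/bin/hadoop classpath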
(6) yarn-env.sh — append at the end of the file:
export JAVA_HOME=/usr/java/jdk1.8.0_281-amd64
(7) workers (Hadoop 3.x names this file "workers"; it lists the hosts that run DataNode and NodeManager)
slave1
slave2
8. Go to the sbin directory under the Hadoop installation path and edit start-dfs.sh and stop-dfs.sh, adding at the top of each:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
Edit start-yarn.sh and stop-yarn.sh the same way, adding at the top of each:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
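An alternative to patching the sbin scripts, if preferred, is to export the same variables once in etc/hadoop/hadoop-env.sh, which the start/stop scripts read (a sketch; the HADOOP_SECURE_DN_USER lines matter only for secure-mode DataNodes and are omitted here):

   # appended to /usr/local/hadoop-3.1.4/etc/hadoop/hadoop-env.sh
   export HDFS_NAMENODE_USER=root
   export HDFS_DATANODE_USER=root
   export HDFS_SECONDARYNAMENODE_USER=root
   export YARN_RESOURCEMANAGER_USER=root
   export YARN_NODEMANAGER_USER=root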
9. Copy the configured Hadoop directory from master to /usr/local/ on the other nodes:
scp -qr /usr/local/hadoop-3.1.4 slave1:/usr/local
scp -qr /usr/local/hadoop-3.1.4 slave2:/usr/local
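With more nodes this is easier as a loop (hostnames as mapped in /etc/hosts; scp prompts for each root password until key-based login is set up in step 11):

   for host in slave1 slave2; do
       scp -qr /usr/local/hadoop-3.1.4 ${host}:/usr/local
   done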
10. On every node, add JAVA_HOME and the Hadoop path to /etc/profile (environment variables):
export JAVA_HOME=/usr/java/jdk1.8.0_281-amd64
export HADOOP_HOME=/usr/local/hadoop-3.1.4
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
Run source /etc/profile to make the changes take effect.
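A quick check that both variables resolve, on each node (hadoop is found through the PATH entry just added):

   echo $HADOOP_HOME    # /usr/local/hadoop-3.1.4
   hadoop version       # should report Hadoop 3.1.4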
11. Configure passwordless SSH login.
(1) Generate a public/private key pair with ssh-keygen: enter the command "ssh-keygen -t rsa", then press Enter three times.
[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a6:13:5a:7b:54:eb:77:58:bd:56:ef:d0:64:90:66:d4 root@master.centos.com
The key's randomart image is:
+--[ RSA 2048]----+
|              .. |
|             . .E|
|          .   =  |
|         . . o o |
|      o S .   . =|
|     o * .   o ++|
|    . + . . o ooo|
|       o   . ..o |
|                .|
+-----------------+
This produces two files: the private key id_rsa and the public key id_rsa.pub. ssh-keygen generates and manages RSA-type keys; the "-t" option specifies RSA as the type of SSH key to create.
(2) Copy the public key to the remote machines with ssh-copy-id:
ssh-copy-id -i /root/.ssh/id_rsa.pub master    # answer yes, then enter the root password (123456)
ssh-copy-id -i /root/.ssh/id_rsa.pub slave1
ssh-copy-id -i /root/.ssh/id_rsa.pub slave2
Every node must send its key to itself and to every other node, so that all nodes can log in to one another without a password.
(3) Verify that passwordless login works. Run each command in turn; after a successful login, type exit to return to master:
ssh slave1
ssh slave2
12. Configure the time synchronization service.
(1) Install the NTP service on every node:
yum -y install ntp
(2) Make master the NTP server. Open the configuration with "vim /etc/ntp.conf", comment out the lines beginning with "restrict default" and with "server", and add:
restrict 192.168.10.0 mask 255.255.255.0 nomodify notrap
server 127.127.1.0
fudge 127.127.1.0 stratum 10
(The restrict line must match the cluster subnet, 192.168.10.0/24 here.)
(3) Configure NTP on the slaves: edit /etc/ntp.conf the same way, comment out the lines beginning with "server", and add:
server master
(4) Permanently disable the firewall on the master and on all slaves:
systemctl stop firewalld.service && systemctl disable firewalld.service
(5) Start the NTP service.
① On master, run "service ntpd start && chkconfig ntpd on".
② On each slave, run "ntpdate master" to set the clock to an initial synchronized value.
③ On each slave, run "service ntpd start && chkconfig ntpd on" to start the NTP service now and enable it at boot.
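Once ntpd has had a minute or two to lock on, synchronization can be verified on a slave; ntpq lists the peers and marks the selected source with an asterisk:

   ntpq -p      # master should appear, eventually prefixed with '*'
   ntpstat      # reports something like "synchronised to NTP server (192.168.10.130)" when locked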
13. Format the NameNode (required only on first deployment). On master, enter the bin directory:
cd /usr/local/hadoop-3.1.4/bin
Run the format:
./hdfs namenode -format
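The format output should include a line saying the storage directory was successfully formatted, and it writes a new clusterID into the name directory. If the NameNode is ever re-formatted, the DataNode data directories on the slaves keep the old clusterID and must be cleared, or the DataNodes will refuse to register. The ID can be inspected with:

   cat /data/hadoop/hdfs/name/current/VERSION    # shows the clusterID written by the format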
14. Start the cluster. Enter the sbin directory:
cd /usr/local/hadoop-3.1.4/sbin
Start the daemons:
./start-dfs.sh
./start-yarn.sh
./mr-jobhistory-daemon.sh start historyserver
Use jps to check the running processes. On the master node:
ResourceManager
JobHistoryServer
NameNode
SecondaryNameNode
On the slave nodes:
DataNode
NodeManager
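As an end-to-end smoke test, the bundled MapReduce examples jar (path assumed from the standard 3.1.4 layout) can run a small pi-estimation job through YARN:

   yarn jar /usr/local/hadoop-3.1.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.4.jar pi 2 10
   # should finish with a line like "Estimated value of Pi is ..."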
15. Check the web UIs in a browser:
http://master:9870   (HDFS NameNode UI)
http://master:8088   (YARN ResourceManager UI)
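If no browser is available, both UIs can be probed from the command line; -L follows the redirect each root URL issues:

   curl -sL -o /dev/null -w "%{http_code}\n" http://master:9870    # expect 200
   curl -sL -o /dev/null -w "%{http_code}\n" http://master:8088    # expect 200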
16. Stop the cluster (from the same sbin directory):
./stop-dfs.sh
./stop-yarn.sh
./mr-jobhistory-daemon.sh stop historyserver