Preparation
Three Linux machines with yum repositories and networking already configured (yum and network setup are covered in a separate post). Upload the JDK and Hadoop installation packages (jdk-8u152-linux-x64.tar.gz and hadoop-2.7.1.tar.gz) to master.
1. Set the hostnames
On master, run:
hostnamectl set-hostname master
bash
On slave1, run:
hostnamectl set-hostname slave1
bash
On slave2, run:
hostnamectl set-hostname slave2
bash
2. Host mapping
Run on all three machines:
vi /etc/hosts
Append at the end:
192.168.26.148 master
192.168.26.149 slave1
192.168.26.150 slave2
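The append can be made idempotent, so re-running the step never duplicates entries. A minimal sketch — it writes to a temporary file purely for illustration; on the real nodes the target would be /etc/hosts:

```shell
# Append each mapping only if that exact line is not already present.
# A temporary file stands in for /etc/hosts in this illustration.
HOSTS_FILE=$(mktemp)
for entry in "192.168.26.148 master" \
             "192.168.26.149 slave1" \
             "192.168.26.150 slave2"; do
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```

Running the loop a second time leaves the file unchanged, because grep -qxF matches the whole line exactly.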
3. Passwordless SSH
On master only, generate a key pair:
ssh-keygen -t rsa
Press Enter three times; a randomart image like the following appears (yours will differ):
+--[ RSA 2048]----+
| .o.+. .. |
|. . o.. .E. |
|.o.++ . * . |
|..+o.. + = |
| +. S+ |
| .. . . |
| . |
| |
| |
+-----------------+
ssh-copy-id -i /root/.ssh/id_rsa.pub master
ssh-copy-id -i /root/.ssh/id_rsa.pub slave1
ssh-copy-id -i /root/.ssh/id_rsa.pub slave2
Answer yes and enter the root password as prompted. Then verify each login:
ssh master
ssh slave1
ssh slave2
Exit each test login promptly (exit) before starting the next.
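The verification logins can also be checked non-interactively: with ssh's BatchMode option, a login fails immediately instead of prompting for a password, so any prompt means the key was not copied to that node. A dry-run sketch — the leading echo just prints each command; remove it on the real cluster:

```shell
# Dry run: print the verification command for each node.
# Drop the leading echo to actually attempt the logins.
for host in master slave1 slave2; do
    echo ssh -o BatchMode=yes "$host" hostname
done
```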
4. Disable the firewall
Run on all three machines:
systemctl stop firewalld
systemctl disable firewalld
Check the firewall status:
systemctl status firewalld
5. Install the JDK
On master only, extract the archive:
tar -zxvf jdk-8u152-linux-x64.tar.gz -C /usr/
Rename the extracted directory to something simpler (this tarball unpacks to jdk1.8.0_152):
mv /usr/jdk1.8.0_152 /usr/jdk
6. Install Hadoop
On master only, extract the archive:
tar -zxvf hadoop-2.7.1.tar.gz -C /usr
Rename the extracted directory:
mv /usr/hadoop-2.7.1 /usr/hadoop
7. Configure Hadoop
On master only. ① Enter the configuration directory:
cd /usr/hadoop/etc/hadoop
② core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/log/hadoop/tmp</value>
</property>
</configuration>
③ hadoop-env.sh
export JAVA_HOME=/usr/jdk
④ hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///data/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///data/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
⑤ mapred-site.xml — this file does not exist by default; copy the template and rename it first:
cp mapred-site.xml.template mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- jobhistory properties -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>
⑥ yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.https.address</name>
<value>${yarn.resourcemanager.hostname}:8090</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>${yarn.resourcemanager.hostname}:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>${yarn.resourcemanager.hostname}:8033</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/data/hadoop/yarn/local</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/data/tmp/logs</value>
</property>
<property>
<name>yarn.log.server.url</name>
<value>http://master:19888/jobhistory/logs/</value>
<description>URL for job history server</description>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>512</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>4096</value>
</property>
<!-- The two mapreduce.* properties below are MapReduce job settings,
     normally placed in mapred-site.xml rather than here. -->
<property>
<name>mapreduce.map.memory.mb</name>
<value>2048</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>2048</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>1</value>
</property>
</configuration>
⑦ yarn-env.sh
export JAVA_HOME=/usr/jdk
⑧ slaves — delete the existing content and add:
slave1
slave2
⑨ Configure environment variables
vi /etc/profile
Append:
export JAVA_HOME=/usr/jdk
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/usr/hadoop
export PATH=$HADOOP_HOME/bin:$PATH:$HADOOP_HOME/sbin
Reload the environment variables:
source /etc/profile
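Note that $HADOOP_HOME/bin is prepended to PATH rather than appended, so its binaries shadow any same-named command later in the path. A self-contained illustration of this precedence rule, using a scratch directory with two dummy `hadoop` scripts (the paths here are made up for the demo):

```shell
# Directories earlier in PATH win the command lookup.
demo=$(mktemp -d)
mkdir -p "$demo/hadoop/bin" "$demo/usr/bin"
printf '#!/bin/sh\necho hadoop-from-HADOOP_HOME\n' > "$demo/hadoop/bin/hadoop"
printf '#!/bin/sh\necho hadoop-from-usr-bin\n'     > "$demo/usr/bin/hadoop"
chmod +x "$demo/hadoop/bin/hadoop" "$demo/usr/bin/hadoop"
# With hadoop/bin first, its script is the one that runs:
env PATH="$demo/hadoop/bin:$demo/usr/bin" hadoop   # prints hadoop-from-HADOOP_HOME
```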
8. Distribute files to the slaves
Copy the JDK:
scp -r /usr/jdk slave1:/usr/
scp -r /usr/jdk slave2:/usr/
Copy Hadoop:
scp -r /usr/hadoop slave1:/usr/
scp -r /usr/hadoop slave2:/usr/
Copy the environment variables:
scp /etc/profile slave1:/etc/profile
scp /etc/profile slave2:/etc/profile
Then log in to slave1 and slave2 and reload the environment variables on each:
source /etc/profile
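The scp calls above can be collapsed into one loop over the worker nodes. A dry-run sketch — echo prints each command instead of executing it; remove the echoes once the hostnames resolve and SSH keys are in place:

```shell
# Print the distribution commands for each worker node (dry run).
for node in slave1 slave2; do
    echo scp -r /usr/jdk "$node":/usr/
    echo scp -r /usr/hadoop "$node":/usr/
    echo scp /etc/profile "$node":/etc/profile
done
```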
9. Format the NameNode
On master:
hdfs namenode -format
10. Start the cluster
On master:
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
11. Check the cluster status
① Run jps. Expected processes on master:
NameNode
SecondaryNameNode
JobHistoryServer
Jps
ResourceManager
Expected processes on each slave:
Jps
DataNode
NodeManager
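The jps check can be scripted against the daemon lists above. A sketch that scans a listing for the daemons expected on master — the here-string with made-up PIDs stands in for real `jps` output:

```shell
# Compare a jps listing against the daemons expected on master.
# The sample below stands in for live `jps` output.
jps_output='1234 NameNode
2345 SecondaryNameNode
3456 JobHistoryServer
4567 ResourceManager
5678 Jps'
for daemon in NameNode SecondaryNameNode JobHistoryServer ResourceManager; do
    if echo "$jps_output" | grep -qw "$daemon"; then
        echo "$daemon: running"
    else
        echo "$daemon: MISSING"
    fi
done
```

grep -w matches whole words only, so NameNode is not double-counted inside SecondaryNameNode.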
② Web UIs (NameNode and ResourceManager):
http://192.168.26.148:50070
http://192.168.26.148:8088
12. Run the MapReduce word-count example
Create a local file:
cd /root
vi 1.txt
Write the following text:
Give me the strength lightly to bear my joys and sorrows.Give me the strength to make my love fruitful in service.Give me the strength never to disown the poor or bend my knees before insolent might.Give me the strength to raise my mind high above daily trifles.And give me the strength to surrender my strength to thy will with love.
Create an HDFS directory and upload the file:
hadoop fs -mkdir /input
hadoop fs -put /root/1.txt /input
Run the job:
cd /usr/hadoop/share/hadoop/mapreduce
hadoop jar hadoop-mapreduce-examples-2.7.1.jar wordcount /input/1.txt /output
Use the hadoop-mapreduce-examples jar name that matches your installed version. Check the results:
hadoop fs -ls /output
hadoop fs -cat /output/part-r-00000
Expected output:
Give 1
above 1
and 1
bear 1
before 1
bend 1
daily 1
disown 1
fruitful 1
give 1
high 1
in 1
insolent 1
joys 1
knees 1
lightly 1
love 1
love. 1
make 1
me 5
might.Give 1
mind 1
my 5
never 1
or 1
poor 1
raise 1
service.Give 1
sorrows.Give 1
strength 6
surrender 1
the 6
thy 1
to 6
trifles.And 1
will 1
with 1
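The example's wordcount splits on whitespace only, which is why tokens like sorrows.Give keep their punctuation. As a sanity check (not a substitute for the job), the same counts can be reproduced locally with standard shell tools:

```shell
# Reproduce the word count locally: split on whitespace, sort, count.
cat > /tmp/wc-demo.txt <<'EOF'
Give me the strength lightly to bear my joys and sorrows.Give me the strength to make my love fruitful in service.Give me the strength never to disown the poor or bend my knees before insolent might.Give me the strength to raise my mind high above daily trifles.And give me the strength to surrender my strength to thy will with love.
EOF
tr -s ' \t' '\n' < /tmp/wc-demo.txt | sort | uniq -c
```

The counts (strength 6, me 5, the 6, to 6, ...) should match the part-r-00000 listing above, though uniq -c prints the count before the word.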