1. Requirements
- At least three servers; in this example their hostnames are clickhouse1, clickhouse2, and clickhouse3
- Java 8 installed on every server
- Passwordless SSH configured between all servers; note that authorized_keys must have permission 600
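The passwordless-SSH step can be sketched as follows. This is a hypothetical outline, not part of the original guide: it shows the local node only, and the hostnames are the ones assumed throughout this document.

```shell
# Sketch: passwordless SSH for the local node (hostnames assumed from this guide).
mkdir -p "$HOME/.ssh"
# Generate a key pair without a passphrase if none exists yet.
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$HOME/.ssh/id_rsa"
# Authorize the key locally; for the remote nodes, run
#   ssh-copy-id root@clickhouse1   (likewise clickhouse2 and clickhouse3)
# from every server instead of this cat.
cat "$HOME/.ssh/id_rsa.pub" >> "$HOME/.ssh/authorized_keys"
# sshd rejects authorized_keys files with loose permissions, hence 600.
chmod 600 "$HOME/.ssh/authorized_keys"
```

Repeat on each server so that every node can reach every other node without a password.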
2. Download (on clickhouse1)
Run the following commands to download and extract the release:
curl -O https://ftp.nluug.nl/internet/apache/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz
tar -zxvf hadoop-3.3.1.tar.gz
Change into the hadoop directory:
[root@clickhouse1 ~]#
[root@clickhouse1 ~]# cd hadoop-3.3.1
[root@clickhouse1 hadoop-3.3.1]#
[root@clickhouse1 hadoop-3.3.1]# pwd
/root/hadoop-3.3.1
[root@clickhouse1 hadoop-3.3.1]#
3. Configuration changes (on clickhouse1)
3.1 hadoop-env.sh
Create the pids and logs directories:
[root@clickhouse1 hadoop-3.3.1]#
[root@clickhouse1 hadoop-3.3.1]# mkdir pids
[root@clickhouse1 hadoop-3.3.1]#
[root@clickhouse1 hadoop-3.3.1]# mkdir logs
[root@clickhouse1 hadoop-3.3.1]#
[root@clickhouse1 hadoop-3.3.1]# ls
bin etc include lib libexec LICENSE-binary licenses-binary LICENSE.txt logs NOTICE-binary NOTICE.txt pids README.txt sbin share
[root@clickhouse1 hadoop-3.3.1]#
[root@clickhouse1 hadoop-3.3.1]# pwd
/root/hadoop-3.3.1
[root@clickhouse1 hadoop-3.3.1]#
Edit etc/hadoop/hadoop-env.sh.
Entries to modify:
export JAVA_HOME=/root/jdk1.8.0_291
export HADOOP_PID_DIR=/root/hadoop-3.3.1/pids
export HADOOP_LOG_DIR=/root/hadoop-3.3.1/logs
export HDFS_NAMENODE_USER=root
Entries to add:
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
3.2 core-site.xml
Edit etc/hadoop/core-site.xml.
Entries to add:
<property>
<name>fs.defaultFS</name>
<value>hdfs://clickhouse1:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
3.3 hdfs-site.xml
Edit etc/hadoop/hdfs-site.xml.
First create the namenode and datanode directories:
[root@clickhouse1 hadoop-3.3.1]#
[root@clickhouse1 hadoop-3.3.1]# mkdir namenode
[root@clickhouse1 hadoop-3.3.1]#
[root@clickhouse1 hadoop-3.3.1]# mkdir datanode
[root@clickhouse1 hadoop-3.3.1]#
[root@clickhouse1 hadoop-3.3.1]# pwd
/root/hadoop-3.3.1
[root@clickhouse1 hadoop-3.3.1]#
Entries to add:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/root/hadoop-3.3.1/namenode</value>
</property>
<property>
<name>dfs.blocksize</name>
<value>268435456</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/root/hadoop-3.3.1/datanode</value>
</property>
3.4 mapred-site.xml
Edit etc/hadoop/mapred-site.xml.
Entries to add:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
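If MapReduce jobs later fail with class-not-found errors, note that the official Hadoop 3 single-cluster guide also sets the MapReduce classpath in mapred-site.xml. This entry is an optional addition beyond what the original steps list:

```xml
<property>
<name>mapreduce.application.classpath</name>
<value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
</property>
```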
3.5 yarn-site.xml
Edit etc/hadoop/yarn-site.xml.
First create the NodeManager local, log, and remote-app-log directories:
[root@clickhouse1 hadoop-3.3.1]#
[root@clickhouse1 hadoop-3.3.1]# pwd
/root/hadoop-3.3.1
[root@clickhouse1 hadoop-3.3.1]#
[root@clickhouse1 hadoop-3.3.1]# mkdir nm-local-dir
[root@clickhouse1 hadoop-3.3.1]# mkdir nm-log-dir
[root@clickhouse1 hadoop-3.3.1]# mkdir nm-remote-app-log-dir
[root@clickhouse1 hadoop-3.3.1]#
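As an aside, every local directory this guide creates (pids, logs, namenode, datanode, and the three NodeManager directories above) can also be made in a single pass, assuming you are in /root/hadoop-3.3.1:

```shell
# One-pass creation of all local directories used in this guide;
# -p makes the command safe to re-run on directories that already exist.
mkdir -p pids logs namenode datanode nm-local-dir nm-log-dir nm-remote-app-log-dir
```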
Entries to add:
<property>
<name>yarn.acl.enable</name>
<value>false</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>false</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>${yarn.resourcemanager.hostname}:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>${yarn.resourcemanager.hostname}:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>${yarn.resourcemanager.hostname}:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>${yarn.resourcemanager.hostname}:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>${yarn.resourcemanager.hostname}:8088</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>clickhouse1</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1024</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>8192</value>
</property>
<property>
<name>yarn.resourcemanager.nodes.include-path</name>
<value></value>
</property>
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>8192</value>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>2.1</value>
</property>
<property>
<name>yarn.nodemanager.local-dirs</name>
<value>/root/hadoop-3.3.1/nm-local-dir</value>
</property>
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>/root/hadoop-3.3.1/nm-log-dir</value>
</property>
<property>
<name>yarn.nodemanager.log.retain-seconds</name>
<value>10800</value>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/root/hadoop-3.3.1/nm-remote-app-log-dir</value>
</property>
<property>
<name>yarn.nodemanager.remote-app-log-dir-suffix</name>
<value>logs</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ,HADOOP_MAPRED_HOME</value>
</property>
3.6 Edit the workers file
Replace the contents of etc/hadoop/workers with the worker hostnames:
clickhouse2
clickhouse3
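The same file can be written non-interactively; a minimal sketch, assuming you are inside the hadoop-3.3.1 directory:

```shell
# Overwrite etc/hadoop/workers with the two worker hostnames.
mkdir -p etc/hadoop
cat > etc/hadoop/workers <<'EOF'
clickhouse2
clickhouse3
EOF
```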
4. Distribute the hadoop directory (on clickhouse1)
Copy the configured hadoop directory from clickhouse1 to the other two servers:
[root@clickhouse1 ~]# scp -r /root/hadoop-3.3.1 root@clickhouse2:/root
[root@clickhouse1 ~]# scp -r /root/hadoop-3.3.1 root@clickhouse3:/root
5. Initialization and startup (on clickhouse1)
5.1 Environment variables
- Append the following to /etc/profile:
export HADOOP_HOME=/root/hadoop-3.3.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
- Apply the changes:
[root@clickhouse1 ~]#
[root@clickhouse1 ~]# source /etc/profile
[root@clickhouse1 ~]#
5.2 HDFS
Format the NameNode (first run only):
bin/hdfs namenode -format
Start HDFS:
sbin/start-dfs.sh
The NameNode web UI is then available at http://clickhouse1:9870/
Stop HDFS:
sbin/stop-dfs.sh
5.3 YARN
Start YARN:
sbin/start-yarn.sh
The ResourceManager web UI is available at http://clickhouse1:8088/
Stop YARN:
sbin/stop-yarn.sh