Environment setup
- Flink supports Hadoop 3 starting from Flink 1.11.
- Package: flink-1.13.2-bin-scala_2.12.tgz
Extract
tar xvf flink-1.13.2-bin-scala_2.12.tgz -C /opt/software/...
Configure environment variables
#flink
export FLINK_HOME=/opt/software/flink-1.13.2
export PATH=$FLINK_HOME/bin:$PATH
#flink on yarn
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
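The variables above are usually appended to a shell profile (e.g. /etc/profile or ~/.bashrc) and re-sourced. A side-effect-free sketch, using a temporary file in place of the real profile:

```shell
# Sketch: append the Flink variables to a profile file and re-source it.
# A temp file stands in for /etc/profile so the sketch has no side effects.
PROFILE=$(mktemp)
cat >> "$PROFILE" <<'EOF'
export FLINK_HOME=/opt/software/flink-1.13.2
export PATH=$FLINK_HOME/bin:$PATH
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
EOF
. "$PROFILE"
echo "$FLINK_HOME"
rm -f "$PROFILE"
```

After sourcing, `flink` and the cluster scripts resolve from $FLINK_HOME/bin without a cd.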
Notes
In yarn-site.xml, set:
yarn.scheduler.maximum-allocation-mb >= 1600
yarn.nodemanager.resource.memory-mb >= 1600
Manually create a directory for submitted jobs: myjobs
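The two memory settings go into $HADOOP_HOME/etc/hadoop/yarn-site.xml as properties. A sketch using the minimum values stated above (tune them for your cluster):

```xml
<!-- Minimum container sizes so the Flink JobManager/TaskManager containers fit -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>1600</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>1600</value>
</property>
```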
Start / stop
cd /opt/software/flink-1.13.2/bin
start-cluster.sh
stop-cluster.sh
Service check
jps should show two processes:
TaskManagerRunner
StandaloneSessionClusterEntrypoint
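The check above can be scripted. `check_flink_procs` is a hypothetical helper that reads the `jps` output from stdin, so it can be tried without a live cluster:

```shell
# Hypothetical helper: verify both standalone-cluster daemons are present.
# Reads `jps` output from stdin so it can be exercised without a cluster.
check_flink_procs() {
  procs=$(cat)
  echo "$procs" | grep -q TaskManagerRunner \
    || { echo "missing TaskManagerRunner"; return 1; }
  echo "$procs" | grep -q StandaloneSessionClusterEntrypoint \
    || { echo "missing StandaloneSessionClusterEntrypoint"; return 1; }
  echo "standalone cluster looks healthy"
}
# On a live node: jps | check_flink_procs
```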
Test
Web UI: http://192.168.6.130:8081/
yarn-session.sh -nm wordCount -n 2    # note: -n (number of TaskManagers) is no longer honored in recent Flink versions; containers are allocated on demand
yarn application -list
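To capture the session's ApplicationId for follow-up commands (e.g. `yarn application -kill <id>`), the listing can be filtered. `app_id_by_name` is a hypothetical helper; the row layout is assumed to match stock Hadoop output:

```shell
# Hypothetical helper: print the ApplicationId of the row mentioning the
# given session name. Reads `yarn application -list` output from stdin;
# assumes rows start with application_<clusterTimestamp>_<sequence>.
app_id_by_name() {
  awk -v name="$1" '$1 ~ /^application_/ && index($0, name) { print $1 }'
}
# On a live cluster: yarn application -list | app_id_by_name wordCount
```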
# If submission fails with: Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/yarn/exception/YarnException
# Fix:
Download the flink-shaded-hadoop-2-uber-2.7.5-7.0 jar (available from Maven Central) and put it in the flink/lib directory.