Flume collecting data to HDFS after enabling Kerberos on CDH

I previously wrote a complete walkthrough of collecting file data across servers into HDFS with Flume (Flume跨服务器采集文件数据到HDFS完整案例), which is worth reading first. After Kerberos was enabled on the CDH cluster, the Flume configuration on server B needed a few changes. Edit bserver.conf under the conf directory of server B's Flume installation:
b1.sources = r2
b1.sinks = k2
b1.channels = c2

# Avro source that receives events forwarded from server A
b1.sources.r2.type = avro
b1.sources.r2.bind = 192.168.xxx.xx
b1.sources.r2.port = 44444

# HDFS sink; the two kerberos* properties are the additions
# needed after the CDH cluster was Kerberized
b1.sinks.k2.type = hdfs
b1.sinks.k2.hdfs.kerberosKeytab = /home/kerberos/hdfs.keytab
b1.sinks.k2.hdfs.kerberosPrincipal = hdfs/data-master@HADOOP.COM

# %{type} is resolved from the event header; %Y%m%d needs a timestamp,
# supplied here by useLocalTimeStamp below
b1.sinks.k2.hdfs.path = hdfs://192.168.xxx.xx/user/hive/warehouse/ods.db/%{type}/dt=%Y%m%d/
b1.sinks.k2.hdfs.filePrefix = %{type}
b1.sinks.k2.hdfs.round = true
b1.sinks.k2.hdfs.roundValue = 10
b1.sinks.k2.hdfs.roundUnit = minute

# Roll files every 60 s or at 128 MB (134217728 bytes), never by event count
b1.sinks.k2.hdfs.rollInterval = 60
b1.sinks.k2.hdfs.rollSize = 134217728
b1.sinks.k2.hdfs.rollCount = 0
b1.sinks.k2.hdfs.batchSize = 100
b1.sinks.k2.hdfs.useLocalTimeStamp = true
b1.sinks.k2.hdfs.fileType = DataStream

# Memory channel connecting the source to the sink
b1.channels.c2.type = memory
b1.channels.c2.capacity = 10000
b1.channels.c2.transactionCapacity = 100

b1.sources.r2.channels = c2
b1.sinks.k2.channel = c2
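Before restarting, it can save debugging time to confirm that the keytab actually works for the principal in the config. A quick check from a shell on server B, assuming the MIT Kerberos client tools (kinit/klist) are installed:

# Show the principals stored in the keytab
klist -kt /home/kerberos/hdfs.keytab

# Acquire a ticket using the keytab, then verify the ticket cache
kinit -kt /home/kerberos/hdfs.keytab hdfs/data-master@HADOOP.COM
klist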
Then restart Flume on server B so it picks up the new configuration.
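If the agent is started with the flume-ng script, the restart looks roughly like this (the agent name b1 matches the config above; the conf paths are illustrative and depend on your install directory):

flume-ng agent --conf conf --conf-file conf/bserver.conf --name b1 -Dflume.root.logger=INFO,console

Once events arrive, files should show up under /user/hive/warehouse/ods.db/<type>/dt=<yyyyMMdd>/ on HDFS.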
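For reference, the %{type} token in hdfs.path and filePrefix is resolved from an event header named type, which must be set upstream on server A (as in the earlier cross-server article). A minimal sketch using Flume's static interceptor, assuming server A's agent and source are named a1 and r1, with a hypothetical value "order":

a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = static
a1.sources.r1.interceptors.i1.key = type
a1.sources.r1.interceptors.i1.value = order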