References: https://www.jianshu.com/p/f6f135855e42 https://www.jianshu.com/p/40c592186502
FlinkKafkaConsumer<T> extends FlinkKafkaConsumerBase<T>
1. initializeState
Initialize unionOffsetStates, which stores the offsets; its data structure is ListState<Tuple2<KafkaTopicPartition, Long>>. A single subtask may consume multiple partitions, hence a list.
Check whether this is a restore. If so, copy unionOffsetStates into the in-memory restoredState, whose data structure is TreeMap<KafkaTopicPartition, Long>.
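A condensed sketch of this logic, following the field names above (simplified, not the verbatim Flink source):

    public final void initializeState(FunctionInitializationContext context) throws Exception {
        OperatorStateStore stateStore = context.getOperatorStateStore();
        // Union list state: on restore every subtask receives ALL partitions' offsets
        // and later filters out the ones it is responsible for.
        this.unionOffsetStates = stateStore.getUnionListState(
                new ListStateDescriptor<>(
                        "topic-partition-offset-states",
                        TypeInformation.of(new TypeHint<Tuple2<KafkaTopicPartition, Long>>() {})));
        if (context.isRestored()) {
            // Collect the restored offsets into a sorted map; open() reads it later.
            restoredState = new TreeMap<>(new KafkaTopicPartition.Comparator());
            for (Tuple2<KafkaTopicPartition, Long> kafkaOffset : unionOffsetStates.get()) {
                restoredState.put(kafkaOffset.f0, kafkaOffset.f1);
            }
        }
    }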
2. open
Determine the offset-commit mode: ON_CHECKPOINTS, KAFKA_PERIODIC, or DISABLED.
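The mode is derived from the checkpointing and auto-commit settings; a sketch mirroring OffsetCommitModes.fromConfiguration (simplified):

    static OffsetCommitMode fromConfiguration(
            boolean enableAutoCommit, boolean enableCommitOnCheckpoint, boolean enableCheckpointing) {
        if (enableCheckpointing) {
            // Checkpointing enabled: commit on completed checkpoints if allowed, otherwise never.
            return enableCommitOnCheckpoint ? OffsetCommitMode.ON_CHECKPOINTS : OffsetCommitMode.DISABLED;
        }
        // No checkpointing: fall back to Kafka's periodic auto-commit, if enabled.
        return enableAutoCommit ? OffsetCommitMode.KAFKA_PERIODIC : OffsetCommitMode.DISABLED;
    }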
Create the partition discoverer: createPartitionDiscoverer.
This creates a KafkaConsumer: this.partitionDiscoverer.open() -> initializeConnections -> this.kafkaConsumer = new KafkaConsumer<>(kafkaProperties);
Fetch the partitions of all fixedTopics plus every topic matching topicPattern:
partitionDiscoverer.discoverPartitions() -> getAllPartitionsForTopics(fixedTopics) -> List<PartitionInfo> kafkaPartitions = kafkaConsumer.partitionsFor(topic)
getAllTopics (pattern path) -> kafkaConsumer.listTopics() -> isMatchingTopic (regex match) -> getAllPartitionsForTopics
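Outside Flink, the two discovery paths can be reproduced directly against the Kafka client; a self-contained illustration (broker address, topic name, and pattern are placeholders):

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder broker
    props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
        // Fixed-topics path: query the partition metadata of each configured topic.
        List<PartitionInfo> partitions = new ArrayList<>(consumer.partitionsFor("my-topic"));
        // Pattern path: list all topics, keep the names matching the regex.
        Pattern topicPattern = Pattern.compile("my-.*");
        for (String topic : consumer.listTopics().keySet()) {
            if (topicPattern.matcher(topic).matches()) {
                partitions.addAll(consumer.partitionsFor(topic));
            }
        }
    }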
The partition assigner then decides which of these partitions belong to the current subtask's consumer:
partitionDiscoverer.discoverPartitions() -> setAndCheckDiscoveredPartition -> KafkaTopicPartitionAssigner.assign(partition, numParallelSubtasks) == indexOfThisSubtask
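The assignment is deterministic, so every subtask computes the same result independently; a self-contained sketch of the formula in KafkaTopicPartitionAssigner.assign (reduced to plain strings and ints):

    // A partition belongs to this subtask iff assign(...) == indexOfThisSubtask.
    static int assign(String topic, int partition, int numParallelSubtasks) {
        // The start index is derived from the topic name, so the partitions of one
        // topic are spread round-robin beginning at a topic-specific subtask.
        int startIndex = ((topic.hashCode() * 31) & 0x7FFFFFFF) % numParallelSubtasks;
        return (startIndex + partition) % numParallelSubtasks;
    }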
Check whether we are restoring from a snapshot.
Restoring: use the partition assigner to find the partitions assigned to this subtask's consumer and put them into subscribedPartitionsToStartOffsets (a Map<KafkaTopicPartition, Long> of the subscribed partitions and their start offsets).
Changing the parallelism, or Kafka gaining new partitions, triggers a reassignment (see the references above).
Not restoring: the start position is determined by the StartupMode: GROUP_OFFSETS, EARLIEST, LATEST, TIMESTAMP, SPECIFIC_OFFSETS.
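A condensed sketch of how the two branches seed subscribedPartitionsToStartOffsets (sentinel names follow KafkaTopicPartitionStateSentinel; simplified, not the verbatim source):

    if (restoredState != null) {
        // Restore path: keep only the partitions assigned to this subtask.
        for (Map.Entry<KafkaTopicPartition, Long> entry : restoredState.entrySet()) {
            if (KafkaTopicPartitionAssigner.assign(entry.getKey(), numParallelSubtasks)
                    == indexOfThisSubtask) {
                subscribedPartitionsToStartOffsets.put(entry.getKey(), entry.getValue());
            }
        }
    } else {
        // Fresh start: every discovered partition gets a sentinel start offset that
        // the fetcher later resolves against Kafka.
        for (KafkaTopicPartition p : discoveredPartitions) {
            switch (startupMode) {
                case EARLIEST:
                    subscribedPartitionsToStartOffsets.put(p, KafkaTopicPartitionStateSentinel.EARLIEST_OFFSET);
                    break;
                case LATEST:
                    subscribedPartitionsToStartOffsets.put(p, KafkaTopicPartitionStateSentinel.LATEST_OFFSET);
                    break;
                default: // GROUP_OFFSETS; TIMESTAMP and SPECIFIC_OFFSETS need extra lookups
                    subscribedPartitionsToStartOffsets.put(p, KafkaTopicPartitionStateSentinel.GROUP_OFFSET);
            }
        }
    }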
3. run
Create a kafkaFetcher: createFetcher -> public KafkaFetcher
Create the unassignedPartitionsQueue: super -> this.unassignedPartitionsQueue = new ClosableBlockingQueue<>();
ClosableBlockingQueue<KafkaTopicPartitionState<T, KPH>> unassignedPartitionsQueue
On initialization, the TopicPartitions to consume are put into this queue; if periodic TopicPartition discovery is enabled, newly discovered TopicPartitions are added to it as well.
Create the Handover: this.handover = new Handover(). Think of it as a blocking queue of length one; it passes the records fetched (or the exception thrown) by the consumerThread to the thread executing the Flink task.
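A minimal, self-contained sketch of the idea behind Handover (not Flink's actual class, which additionally supports waking up a blocked producer on close):

    final class Handover {
        private final Object lock = new Object();
        private ConsumerRecords<byte[], byte[]> next;
        private Throwable error;

        // Called by the consumer thread; blocks while the previous batch is pending.
        void produce(ConsumerRecords<byte[], byte[]> records) throws InterruptedException {
            synchronized (lock) {
                while (next != null && error == null) {
                    lock.wait();
                }
                next = records;
                lock.notifyAll();
            }
        }

        // Called by the task thread; blocks until a batch or an error arrives.
        ConsumerRecords<byte[], byte[]> pollNext() throws Exception {
            synchronized (lock) {
                while (next == null && error == null) {
                    lock.wait();
                }
                if (error != null) {
                    throw new Exception("consumer thread failed", error);
                }
                ConsumerRecords<byte[], byte[]> records = next;
                next = null;
                lock.notifyAll();
                return records;
            }
        }

        // Called by the consumer thread to propagate a failure to the task thread.
        void reportError(Throwable t) {
            synchronized (lock) {
                error = t;
                lock.notifyAll();
            }
        }
    }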
Create the kafkaConsumerThread: this.consumerThread = new KafkaConsumerThread
It encapsulates the Kafka consumption logic and, via unassignedPartitionsQueue, can pick up new TopicPartitions dynamically.
It also encapsulates the offset-commit logic: if the commit mode is OffsetCommitMode.ON_CHECKPOINTS, offsets are committed from the CheckpointListener callback,
with the nextOffsetsToCommit data structure used for communication between the threads.
Check whether partition discovery is enabled.
(1) Discovery enabled: start the periodic partition-discovery task alongside the data-fetching task (a sketch of the discovery thread follows the block below).
runWithPartitionDiscovery -> createAndStartDiscoveryLoop {
    discoveredPartitions = partitionDiscoverer.discoverPartitions();
    // if new partitions were found and the source is still running, add them
    if (running && !discoveredPartitions.isEmpty()) {
        kafkaFetcher.addDiscoveredPartitions(discoveredPartitions);
        -> unassignedPartitionsQueue.add(newPartitionState);
    }
}
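That loop runs on its own daemon thread, sleeping for the configured discovery interval between rounds; a simplified sketch (thread handling condensed, error propagation omitted):

    discoveryLoopThread = new Thread(() -> {
        try {
            while (running) {
                List<KafkaTopicPartition> discoveredPartitions =
                        partitionDiscoverer.discoverPartitions();
                if (running && !discoveredPartitions.isEmpty()) {
                    kafkaFetcher.addDiscoveredPartitions(discoveredPartitions);
                }
                if (running && discoveryIntervalMillis != 0) {
                    Thread.sleep(discoveryIntervalMillis);
                }
            }
        } catch (Exception e) {
            // propagate to the main thread and shut down (omitted here)
        }
    }, "Kafka Partition Discovery");
    discoveryLoopThread.setDaemon(true);
    discoveryLoopThread.start();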
kafkaFetcher.runFetchLoop()
Start the Kafka consumer thread: it keeps polling records from the kafkaConsumer and hands them to the handover object, which passes the records (or any thrown exception) from the consumerThread to the thread executing the Flink task.
consumerThread.start() -> KafkaConsumerThread.run -> while (running) {
    // commit offsets first if a commit request is pending and none is in progress
    if (!commitInProgress) {
        consumer.commitAsync
    }
    try {
        if (hasAssignedPartitions) {
            newPartitions = unassignedPartitionsQueue.pollBatch();
        } else {
            // block until the initial partitions arrive
            newPartitions = unassignedPartitionsQueue.getBatchBlocking();
        }
        if (newPartitions != null) {
            reassignPartitions(newPartitions);
        }
    } catch (AbortedReassignmentException e) {
        continue;
    }
    records = consumer.poll(pollTimeout)
    handover.produce(records);
}
handover.pollNext()
partitionConsumerRecordsHandler -> emitRecordsWithTimestamps (emits the records to the thread executing the Flink task)
(2) Discovery disabled: fetch data directly via kafkaFetcher.runFetchLoop();
4. snapshotState
If the KafkaFetcher has not finished initializing yet, snapshot the subscribed partitions together with their start offsets.
If the KafkaFetcher is initialized, call the fetcher's snapshotCurrentState method to obtain the current offsets.
If offsetCommitMode is ON_CHECKPOINTS, additionally write the partitions and offsets into the pendingOffsetsToCommit collection, which is used to commit the offsets to the Kafka brokers once the checkpoint succeeds,
and add them to the unionOffsetStates state, which is used when restoring from a checkpoint.
The key part is fetcher.snapshotCurrentState() {
    // called while holding the checkpoint lock, so the offsets are consistent
    HashMap<KafkaTopicPartition, Long> state = new HashMap<>(subscribedPartitionStates.size());
    for (KafkaTopicPartitionState<T, KPH> partition : subscribedPartitionStates) {
        state.put(partition.getKafkaTopicPartition(), partition.getOffset());
    }
    return state;
}
pendingOffsetsToCommit.put(context.getCheckpointId(), currentOffsets);
for(Map.Entry<KafkaTopicPartition, Long> kafkaTopicPartitionLongEntry : currentOffsets.entrySet()) {
unionOffsetStates.add(Tuple2.of(kafkaTopicPartitionLongEntry.getKey(), kafkaTopicPartitionLongEntry.getValue()));
}
Now let's look at how subscribedPartitionStates gets updated with the latest offsets:
FlinkKafkaConsumerBase.run -> runFetchLoop {
    consumerThread.start();
    while (true) {
        final ConsumerRecords<byte[], byte[]> records = handover.pollNext();
        for (KafkaTopicPartitionState<T, TopicPartition> partition : subscribedPartitionStates()) {
            List<ConsumerRecord<byte[], byte[]>> partitionRecords = records.records(partition.getKafkaPartitionHandle());
            partitionConsumerRecordsHandler(partitionRecords, partition);
        }
    }
}
partitionConsumerRecordsHandler -> emitRecordsWithTimestamps -> partitionState.setOffset(offset)
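A condensed sketch of emitRecordsWithTimestamps (simplified from AbstractFetcher): the point is that emitting records and advancing the partition state's offset happen atomically under the checkpoint lock, so a snapshot can never capture an offset ahead of the records actually emitted.

    protected void emitRecordsWithTimestamps(
            Queue<T> records,
            KafkaTopicPartitionState<T, KPH> partitionState,
            long offset,
            long kafkaEventTimestamp) {
        synchronized (checkpointLock) {
            T record;
            while ((record = records.poll()) != null) {
                long timestamp = partitionState.extractTimestamp(record, kafkaEventTimestamp);
                sourceContext.collectWithTimestamp(record, timestamp);
                // let the partition state track event time for watermark generation
                partitionState.onEvent(record, timestamp);
            }
            // snapshotCurrentState reads exactly this value
            partitionState.setOffset(offset);
        }
    }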
5. notifyCheckpointComplete
Take the offsets recorded for this checkpoint out of pendingOffsetsToCommit and commit them.
In KafkaConsumerThread's while(running) loop, nextOffsetsToCommit.getAndSet is checked first for a newly requested commit; if there is one, consumer.commitAsync commits the offsets asynchronously.
int posInMap = pendingOffsetsToCommit.indexOf(checkpointId)
Map<KafkaTopicPartition, Long> offsets = (Map<KafkaTopicPartition, Long>) pendingOffsetsToCommit.remove(posInMap)
fetcher.commitInternalOffsetsToKafka(offsets, offsetCommitCallback) -> doCommitInternalOffsetsToKafka
-> consumerThread.setOffsetsToCommit(offsetsToCommit, commitCallback);
if(nextOffsetsToCommit.getAndSet(Tuple2.of(offsetsToCommit, commitCallback)) != null)
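A condensed sketch of both sides of this hand-off (simplified from KafkaConsumerThread; nextOffsetsToCommit is an AtomicReference, so getAndSet atomically swaps in the new request and reveals whether the previous one was ever picked up):

    // Checkpoint thread (setOffsetsToCommit): publish the commit request.
    if (nextOffsetsToCommit.getAndSet(Tuple2.of(offsetsToCommit, commitCallback)) != null) {
        // The previous request was never committed; it is simply superseded.
        log.warn("Committing offsets to Kafka takes longer than the checkpoint interval.");
    }
    consumer.wakeup(); // interrupt a blocking poll so the commit happens promptly

    // Consumer thread, at the top of the while (running) loop:
    if (!commitInProgress) {
        Tuple2<Map<TopicPartition, OffsetAndMetadata>, KafkaCommitCallback> commitRequest =
                nextOffsetsToCommit.getAndSet(null);
        if (commitRequest != null) {
            commitInProgress = true;
            consumer.commitAsync(commitRequest.f0, (offsets, ex) -> {
                commitInProgress = false;
                if (ex == null) {
                    commitRequest.f1.onSuccess();
                } else {
                    commitRequest.f1.onException(ex);
                }
            });
        }
    }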