zk + controller: the controller is elected by racing to create a zk node (the first broker to create it wins). topic/partition/replica/broker changes generally hit zk first; the controller watches zk, processes the change locally via its PartitionStateMachine/ReplicaStateMachine, then propagates the updated metadata to the other brokers and writes it back to zk.
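The election above can be sketched with a toy in-memory stand-in for ZooKeeper: whichever broker first creates the ephemeral /controller node becomes controller. FakeZk and its method names are illustrative, not a real client API; a real broker would also watch the node and re-elect when the ephemeral node vanishes.

```python
class FakeZk:
    """Toy in-memory stand-in for ZooKeeper's create-if-absent semantics."""
    def __init__(self):
        self.nodes = {}

    def create_ephemeral(self, path, data):
        if path in self.nodes:
            return False          # node already exists: another broker won the race
        self.nodes[path] = data
        return True

def elect_controller(zk, broker_id):
    # each broker races to create /controller; only the first succeeds
    return zk.create_ephemeral("/controller", broker_id)

zk = FakeZk()
print(elect_controller(zk, broker_id=1))  # -> True  (broker 1 becomes controller)
print(elect_controller(zk, broker_id=2))  # -> False (broker 2 loses the race)
```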
consumer rebalance: any broker can serve as a group coordinator. Offset storage: in zk before 0.9, in an internal topic (__consumer_offsets) afterwards. A group's offsets land in some partition P of that internal topic, and the broker hosting P's leader replica is that group's coordinator.
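The group-to-partition mapping is a hash of the group id modulo the offsets topic's partition count (broker config offsets.topic.num.partitions, default 50). A minimal Python sketch mirroring Java's String.hashCode, assuming that formula:

```python
def java_string_hash(s: str) -> int:
    # mirror of Java String.hashCode(): h = 31*h + ch, with 32-bit wraparound
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - (1 << 32) if h >= (1 << 31) else h  # reinterpret as signed 32-bit

def coordinator_partition(group_id: str, num_partitions: int = 50) -> int:
    # group -> partition P of the internal offsets topic; the broker
    # leading P is the group's coordinator. The & 0x7FFFFFFF keeps the
    # value non-negative even for Integer.MIN_VALUE-style hashes.
    return (java_string_hash(group_id) & 0x7FFFFFFF) % num_partitions
```

The mapping is deterministic, so any consumer can locate its coordinator by asking any broker (via the FindCoordinator request) without extra group state.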
Rebalance triggers: a consumer comes online (it actively sends JoinGroup); a consumer dies (detected via session.timeout.ms / heartbeat.interval.ms); a consumer leaves voluntarily (max.poll.interval.ms exceeded — consumption is slow and the gap between two poll() calls grows too long, so the consumer sends LeaveGroup); topic/partition changes.
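The three timeouts above interact: heartbeat.interval.ms must be well below session.timeout.ms (so several heartbeats fit in one session window), and max.poll.interval.ms bounds the processing time between two poll() calls. Example values, roughly the modern client defaults (tune per workload):

```python
# Illustrative values for the three rebalance-related consumer timeouts.
consumer_timeouts = {
    "session.timeout.ms": 45_000,     # no heartbeat within this window -> coordinator evicts the consumer
    "heartbeat.interval.ms": 3_000,   # background heartbeat period; keep it well below session.timeout.ms
    "max.poll.interval.ms": 300_000,  # max gap between poll() calls before the consumer leaves the group
}
```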
When the coordinator detects any of the above, a rebalance is needed: it flags this in its heartbeat responses, and the consumers all send JoinGroup. The coordinator waits a while, tentatively picks one consumer as the group leader, and sends out the JoinGroup responses. On receiving the JoinGroup response, non-leader consumers send SyncGroup right away, while the leader first computes the partition assignment and sends SyncGroup carrying the result. The coordinator then returns the assignment to each consumer in its SyncGroup response.
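The leader's assignment step can be sketched with a range-style strategy — a simplified stand-in for the pluggable partition.assignment.strategy; range_assign is an illustrative name, not a real client API:

```python
def range_assign(members, partitions):
    """Sketch of a range-style assignment the group leader computes
    before sending it back in its SyncGroup request."""
    members = sorted(members)               # deterministic order across rebalances
    per, extra = divmod(len(partitions), len(members))
    assignment, start = {}, 0
    for i, member in enumerate(members):
        count = per + (1 if i < extra else 0)   # first `extra` members get one more
        assignment[member] = partitions[start:start + count]
        start += count
    return assignment

print(range_assign(["c1", "c2"], [0, 1, 2, 3, 4]))
# -> {'c1': [0, 1, 2], 'c2': [3, 4]}
```

Only the leader runs this logic; the coordinator merely relays the opaque result to the other members, which keeps assignment strategies client-side and pluggable.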
Regular consumer↔coordinator traffic: Heartbeat and OffsetCommit (the coordinator writes the committed offsets into the internal topic).
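Because the internal offsets topic is log-compacted, only the latest committed offset per (group, topic, partition) key survives; a minimal sketch of that behavior:

```python
def compact(records):
    # log compaction keeps only the newest value written for each key
    latest = {}
    for key, value in records:
        latest[key] = value
    return latest

commits = [
    (("g1", "orders", 0), 100),
    (("g1", "orders", 0), 250),   # a later commit for the same key
]
# after compaction, only offset 250 remains for ("g1", "orders", 0)
```

This is why a restarting consumer can recover its position quickly: the coordinator only has to read the compacted tail, not every commit ever made.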