Business context: Flink first pulls messages from Kafka; each message is a JSON payload describing a specific live-stream room. The stream is then processed and written into ClickHouse, and finally the update is propagated to downstream services through Kafka.
The main method: the streaming job is launched directly from main and then keeps running on its own, stopping only when an unhandled error occurs. The parallelism can be set as needed, and execute() must be called; without it nothing runs.
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(1);
    LiveRecordsApp liveRecordsApp = new LiveRecordsApp();
    liveRecordsApp.liveRecordsAppKafkaSource(env);
    env.execute("LiveRecordsApp");
}
All of the code below is wrapped inside this method:
private void liveRecordsAppKafkaSource(StreamExecutionEnvironment env) throws Exception {
}
Flink pulls the Kafka messages through a KafkaSource; every Kafka consumer parameter you need can be set on the builder:
KafkaSource<String> source = KafkaSource.<String>builder()
        .setBootstrapServers("localhost:9092")
        .setTopics("topic")
        .setGroupId("groupId")
        .setStartingOffsets(OffsetsInitializer.latest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();
DataStreamSource<String> dataStreamSource =
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "Kafka Source 4u");
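Beyond the required settings shown above, any other consumer property can be forwarded through the builder's setProperty (or setProperties) call. A minimal sketch; the two property values below are only illustrative:
// Illustrative only: forwarding extra Kafka consumer properties through the builder.
KafkaSource<String> tunedSource = KafkaSource.<String>builder()
        .setBootstrapServers("localhost:9092")
        .setTopics("topic")
        .setGroupId("groupId")
        .setStartingOffsets(OffsetsInitializer.latest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .setProperty("max.poll.records", "500")                   // standard consumer config
        .setProperty("partition.discovery.interval.ms", "10000")  // periodic partition discovery
        .build();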
The data is then transformed. Flink's map and the other operators follow the JDK 8 lambda style almost to the letter, so they should feel familiar.
One thing to note: the DB insert has to be placed inside a custom RichSinkFunction. Exposing it directly through a plain addSink (the JdbcSink variant at the end of this post) saves one layer of wrapping, but then the ordering cannot be guaranteed.
SingleOutputStreamOperator<LiveRecords> liveLogsSingleOutputStreamOperator = dataStreamSource.map(new MapFunction<String, LiveRecords>() {
    @Override
    public LiveRecords map(String jsonStr) throws Exception {
        JSONObject jsonObject = JSONObject.parseObject(jsonStr);
        // normalize the two timestamp fields before mapping onto the POJO
        DateUtils.takeNetTime4J(jsonObject, "StartTime");
        DateUtils.takeNetTime4J(jsonObject, "EndTime");
        String s = JSONObject.toJSONString(jsonObject);
        LiveRecords liveRecords = JSONObject.parseObject(s, LiveRecords.class);
        return liveRecords;
    }
});
liveLogsSingleOutputStreamOperator.print();
liveLogsSingleOutputStreamOperator.addSink(new SinkToInsertLiveRecord());
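DateUtils.takeNetTime4J is a project-specific helper that is not shown in this post; judging only from how it is called, it rewrites the StartTime/EndTime fields of the JSON in place into a form fastjson can map onto the POJO. The sketch below is purely a guess at what such a helper could look like; the "/Date(millis)/"-style input and the output pattern are assumptions:
import com.alibaba.fastjson.JSONObject;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of DateUtils.takeNetTime4J - the real implementation is not shown in the post.
public class DateUtils {
    // Assumed behaviour: the field under `key` holds a .NET-style date such as "/Date(1625097600000)/";
    // the millisecond value is extracted and the field is rewritten as "yyyy-MM-dd HH:mm:ss".
    public static void takeNetTime4J(JSONObject jsonObject, String key) {
        String raw = jsonObject.getString(key);
        if (raw == null) {
            return;
        }
        Matcher matcher = Pattern.compile("\\d+").matcher(raw);
        if (matcher.find()) {
            long millis = Long.parseLong(matcher.group());
            String formatted = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date(millis));
            jsonObject.put(key, formatted);
        }
    }
}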
The insertDB part, i.e. the custom sink:
public class SinkToInsertLiveRecord extends RichSinkFunction<LiveRecords> {
    Connection connection;
    // PreparedStatement is the API used to execute the SQL statement
    PreparedStatement pstmt;

    // obtain the database connection
    private Connection getConnection() {
        Connection conn = null;
        try {
            Class.forName("ru.yandex.clickhouse.ClickHouseDriver"); // register the ClickHouse driver with DriverManager
            String url = "jdbc:clickhouse://localhost:2138/db1";    // database URL
            conn = DriverManager.getConnection(url, "username", "password"); // connection credentials
        } catch (Exception e) {
            e.printStackTrace();
        }
        return conn;
    }

    /**
     * Set up the connection in open()
     * @param parameters
     * @throws Exception
     */
    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        connection = getConnection();
        String sql = "insert into test_user values (?,?,?,?,?)";
        pstmt = connection.prepareStatement(sql);
    }

    // called once for every record that gets inserted
    @Override
    public void invoke(LiveRecords x, Context context) throws Exception {
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        pstmt.setString(1, x.getUrl());
        pstmt.setInt(2, x.getSync() == null ? 0 : x.getSync());
        pstmt.setString(3, sdf.format(x.getStartTime()));
        pstmt.setString(4, sdf.format(x.getEndTime()));
        pstmt.setLong(5, x.getFileSize() == null ? 0 : x.getFileSize());
        pstmt.executeUpdate();
    }

    /**
     * Release the resources in close()
     * @throws Exception
     */
    @Override
    public void close() throws Exception {
        super.close();
        if (pstmt != null) {
            pstmt.close();
        }
        if (connection != null) {
            connection.close();
        }
    }
}
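For reference, a minimal sketch of what the LiveRecords POJO presumably looks like, inferred only from the getters used throughout this post; the field types are assumptions:
import java.util.Date;

// Sketch of the LiveRecords POJO inferred from the getters used above; field types are assumptions.
public class LiveRecords {
    private String id;
    private String liveId;
    private String liveChannelId;
    private String channelSessionId;
    private String name;
    private String fileName;
    private String url;
    private String resolution;
    private Integer sync;
    private Integer duration;
    private Integer bitRate;
    private Long fileSize;
    private Date startTime;
    private Date endTime;

    // getters and setters omitted; fastjson needs them to populate and read the object
}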
At this point the data has been inserted successfully, so we can do a simple pass over the stream, pull out the fields the downstream services need, and push them into Kafka; at this step we are acting as the producer.
SingleOutputStreamOperator<String> map = liveLogsSingleOutputStreamOperator.map(new MapFunction<LiveRecords, String>() {
    @Override
    public String map(LiveRecords liveRecords) throws Exception {
        return liveRecords.getLiveChannelId();
    }
});
// Properties holds the configuration parameters
Properties properties = new Properties();
// setProperty stores each configured parameter
// here only bootstrap.servers is set: the host/ip and port of the Kafka broker
properties.setProperty("bootstrap.servers", "localhost:9092");
// create the producer, specifying the target topic and the properties above
FlinkKafkaProducer08<String> producer = new FlinkKafkaProducer08<>("topic", new SimpleStringSchema(), properties);
// attach the producer as a sink; every element of `map` is sent to Kafka
map.addSink(producer);
Below is the addSink / JdbcSink variant (kept commented out: with its batching, the record may already have been sent out through Kafka before it is actually written to the DB).
// liveLogsSingleOutputStreamOperator.addSink(
// JdbcSink.sink(
// "insert into live_records values (?,?,?,?," +
// "?,?,?,?,?," +
// "?,?,?,?,?)",
// (pstmt, x) -> {
// SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); // needed for the date formatting below
// pstmt.setString(1, x.getUrl());
// pstmt.setInt(2, x.getSync() == null ? 0 : x.getSync());
// pstmt.setString(3, sdf.format(x.getStartTime()));
// pstmt.setString(4, sdf.format(x.getEndTime()));
// pstmt.setLong(5, x.getFileSize() == null ? 0 : x.getFileSize());
// pstmt.setInt(6, x.getDuration() == null ? 0 : x.getDuration());
//
// pstmt.setInt(7, x.getBitRate() == null ? 0 : x.getBitRate());
// pstmt.setString(8, x.getResolution());
// pstmt.setString(9, x.getChannelSessionId());
// pstmt.setString(10, x.getFileName());
// pstmt.setString(11, x.getName());
// pstmt.setString(12, x.getLiveId());
//
// pstmt.setString(13, x.getLiveChannelId());
// pstmt.setString(14, x.getId());
//
// },
// JdbcExecutionOptions.builder().withBatchSize(50).withBatchIntervalMs(4000).build(),
// new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
// .withUrl("jdbc:clickhouse://202.205.160.224:50172/test")
// .withDriverName("ru.yandex.clickhouse.ClickHouseDriver")
// .build()
// )
// );
When Flink talks to Kafka and the DB it cannot, for now, be wired together with other frameworks in the usual way; I believe the reason lies in the streaming nature of the data, which cannot be handled by the usual conventions. In the same vein, data inside a Flink job cannot go through MyBatis / MyBatis-Plus or other ORM frameworks; the only option is a pseudo-integration of MyBatis inside a RichFunction, which is not very flexible. A sketch of that pseudo-integration follows.
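To make the "pseudo-integration" concrete, here is a rough sketch of bootstrapping MyBatis by hand inside a RichSinkFunction; the mybatis-config.xml resource and the LiveRecordsMapper interface are hypothetical project pieces, not something shown in this post:
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;
import java.io.InputStream;

// Hypothetical sketch: MyBatis bootstrapped manually inside a RichSinkFunction.
// mybatis-config.xml and LiveRecordsMapper are assumed project resources.
public class MyBatisPseudoSink extends RichSinkFunction<LiveRecords> {
    private transient SqlSessionFactory sqlSessionFactory;

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        // build the factory once per task instance, not once per record
        try (InputStream config = Resources.getResourceAsStream("mybatis-config.xml")) {
            sqlSessionFactory = new SqlSessionFactoryBuilder().build(config);
        }
    }

    @Override
    public void invoke(LiveRecords value, Context context) throws Exception {
        // open a short-lived auto-commit session and hand the record to the mapper
        try (SqlSession session = sqlSessionFactory.openSession(true)) {
            session.getMapper(LiveRecordsMapper.class).insert(value);
        }
    }
}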
And of course, when using Kafka from Flink you can configure several bootstrap.servers and several topics.
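A minimal sketch of that, with placeholder broker addresses and topic names (multiple brokers go into one comma-separated list, multiple topics are extra arguments to setTopics):
// Illustrative only: several brokers (comma-separated) and several topics on one source.
KafkaSource<String> multiSource = KafkaSource.<String>builder()
        .setBootstrapServers("broker1:9092,broker2:9092,broker3:9092")
        .setTopics("topicA", "topicB")
        .setGroupId("groupId")
        .setStartingOffsets(OffsetsInitializer.latest())
        .setValueOnlyDeserializer(new SimpleStringSchema())
        .build();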
If anything here falls short, fellow travelers are more than welcome to point out what I have missed!