Overview
Table & SQL Connectors
Flink’s Table API & SQL programs can be connected to other external systems for reading and writing both batch and streaming tables. A table source provides access to data which is stored in external systems (such as a database, key-value store, message queue, or file system). A table sink emits a table to an external storage system. Depending on the type of source and sink, they support different formats such as CSV, Avro, Parquet, or ORC.
This page describes how to register table sources and table sinks in Flink using the natively supported connectors. After a source or sink has been registered, it can be accessed by Table API & SQL statements.
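For example, once tables are registered they can be queried and written to directly with SQL. The following is a minimal sketch, assuming two already-registered tables with the hypothetical names MySourceTable and MySinkTable:

```sql
-- query a registered table source (table names here are hypothetical)
SELECT * FROM MySourceTable;

-- move data from a registered source into a registered sink
INSERT INTO MySinkTable
SELECT * FROM MySourceTable;
```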
If you want to implement your own custom table source or sink, have a look at the user-defined sources & sinks page (https://nightlies.apache.org/flink/flink-docs-release-1.13/docs/dev/table/sourcessinks/).
Supported Connectors
Flink natively supports various connectors. The following table lists all available connectors.
| Name | Version | Source | Sink |
|---|---|---|---|
| Filesystem | | Bounded and Unbounded Scan, Lookup | Streaming Sink, Batch Sink |
| Elasticsearch | 6.x & 7.x | Not supported | Streaming Sink, Batch Sink |
| Apache Kafka | 0.10+ | Unbounded Scan | Streaming Sink, Batch Sink |
| Amazon Kinesis Data Streams | | Unbounded Scan | Streaming Sink |
| JDBC | | Bounded Scan, Lookup | Streaming Sink, Batch Sink |
| Apache HBase | 1.4.x & 2.2.x | Bounded Scan, Lookup | Streaming Sink, Batch Sink |
| Apache Hive | Supported Versions | Unbounded Scan, Bounded Scan, Lookup | Streaming Sink, Batch Sink |
How to use connectors
Flink supports using SQL CREATE TABLE statements to register tables. One can define the table name, the table schema, and the table options for connecting to an external system.
See the SQL section for more information about creating a table.
The following code shows a full example of how to connect to Kafka for reading and writing JSON records.
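The sketch below illustrates such a DDL statement; the table name, column names, topic, consumer group, and broker address (MyUserTable, user_behavior, my-group, localhost:9092) are placeholder assumptions, while the WITH options use the Kafka connector's documented keys.

```sql
CREATE TABLE MyUserTable (
  -- declare the schema of the table (column names are hypothetical)
  `user` BIGINT,
  `message` STRING,
  `ts` TIMESTAMP(3)
) WITH (
  -- declare the external system to connect to
  'connector' = 'kafka',
  'topic' = 'user_behavior',
  'properties.bootstrap.servers' = 'localhost:9092',
  'properties.group.id' = 'my-group',
  'scan.startup.mode' = 'earliest-offset',
  -- declare the format used to (de)serialize records
  'format' = 'json'
);
```

Once registered, the table can be read from or written to with ordinary SELECT and INSERT INTO statements, as shown earlier on this page.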