
[Big Data] Mapping Hive to HBase and Importing Hive Table Data into ClickHouse with Spark

HBase+Hive+Spark+ClickHouse

Create a table in HBase and map it to Hive, so that new data added on either side can be queried from the other.

Then use Spark to read the data from the Hive table, process it, and save it into ClickHouse.

I HBase

1 HBase Table Operations

1.1 Create a Namespace

hbase(main):008:0> create_namespace 'zxy',{'hbasename'=>'hadoop'}
0 row(s) in 0.0420 seconds

1.2 Create a Table with a Column Family

hbase(main):012:0> create 'zxy:t1',{NAME=>'f1',VERSIONS=>5}
0 row(s) in 2.4850 seconds


hbase(main):014:0> list 'zxy:.*'
TABLE
zxy:t1
1 row(s) in 0.0200 seconds

=> ["zxy:t1"]
hbase(main):015:0> describe 'zxy:t1'
Table zxy:t1 is ENABLED
zxy:t1
COLUMN FAMILIES DESCRIPTION
{NAME => 'f1', BLOOMFILTER => 'ROW', VERSIONS => '5', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}

1.3 Insert Data Row by Row

hbase(main):016:0> put 'zxy:t1','r1','f1:name','zxy'
0 row(s) in 0.1080 seconds
hbase(main):028:0> append 'zxy:t1','r1','f1:id','001'
0 row(s) in 0.0400 seconds

hbase(main):029:0> scan 'zxy:t1'
ROW                     COLUMN+CELL
 r1                     column=f1:id, timestamp=1627714724257, value=001
 r1                     column=f1:name, timestamp=1627714469210, value=zxy
1 row(s) in 0.0120 seconds

hbase(main):030:0> append 'zxy:t1','r2','f1:id','002'
0 row(s) in 0.0060 seconds

hbase(main):031:0> append 'zxy:t1','r2','f1:name','bx'
0 row(s) in 0.0080 seconds

hbase(main):032:0> append 'zxy:t1','r3','f1:id','003'
0 row(s) in 0.0040 seconds

hbase(main):033:0> append 'zxy:t1','r3','f1:name','zhg'
0 row(s) in 0.0040 seconds

hbase(main):034:0> scan 'zxy:t1'
ROW                     COLUMN+CELL
 r1                     column=f1:id, timestamp=1627714724257, value=001
 r1                     column=f1:name, timestamp=1627714469210, value=zxy
 r2                     column=f1:id, timestamp=1627714739647, value=002
 r2                     column=f1:name, timestamp=1627714754108, value=bx
 r3                     column=f1:id, timestamp=1627714768018, value=003
 r3                     column=f1:name, timestamp=1627714778121, value=zhg
3 row(s) in 0.0190 seconds

II Hive

1 Create the Hive Table Mapped to HBase

hive (zxy)> create external table if not exists t1(
          > uid string,
          > id int,
          > name string
          > )
          > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
          > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,f1:id,f1:name")
          > TBLPROPERTIES ("hbase.table.name" = "zxy:t1");
OK
Time taken: 0.306 seconds
hive (zxy)> select * from t1
          > ;
OK
r1      1       zxy
r2      2       bx
r3      3       zhg
Time taken: 0.438 seconds, Fetched: 3 row(s)

2 Add Data on the HBase Side

  • Add data in HBase

hbase(main):002:0> append 'zxy:t1','r4','f1:id','004'
0 row(s) in 0.1120 seconds

hbase(main):003:0> append 'zxy:t1','r4','f1:name','hyy'
0 row(s) in 0.0220 seconds

hbase(main):004:0> scan 'zxy:t1'
ROW                                      COLUMN+CELL
 r1                                      column=f1:id, timestamp=1627714724257, value=001
 r1                                      column=f1:name, timestamp=1627714469210, value=zxy
 r2                                      column=f1:id, timestamp=1627714739647, value=002
 r2                                      column=f1:name, timestamp=1627714754108, value=bx
 r3                                      column=f1:id, timestamp=1627714768018, value=003
 r3                                      column=f1:name, timestamp=1627714778121, value=zhg
 r4                                      column=f1:id, timestamp=1627716660482, value=004
 r4                                      column=f1:name, timestamp=1627716670546, value=hyy
  • The new data is visible in Hive
hive (zxy)> select * from t1;
OK
r1      1       zxy
r2      2       bx
r3      3       zhg
r4      4       hyy

3 Add Data on the Hive Side

Data cannot be loaded into the HBase-backed Hive table directly with LOAD DATA, so a staging table is used to import it instead.

  • user.txt
r5 5 tzk
r6 6 fyj
  • Create the staging table
hive (zxy)> create table if not exists t2 (uid string,id int,name string) row format delimited fields terminated by ' '
          > ;
OK
Time taken: 0.283 seconds
  • Load the data into the staging table
hive (zxy)> load data local inpath '/data/data/user.txt' into table t2;
Loading data to table zxy.t2
Table zxy.t2 stats: [numFiles=1, totalSize=18]
OK
  • Insert the staging-table data into t1
hive (zxy)> insert into table t1 select * from t2;
Query ID = root_20210731154037_e8019cc0-38bb-42fc-9674-a9de2be9dba6
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1627713883513_0001, Tracking URL = http://hadoop:8088/proxy/application_1627713883513_0001/
Kill Command = /data/apps/hadoop-2.8.1/bin/hadoop job  -kill job_1627713883513_0001
Hadoop job information for Stage-0: number of mappers: 1; number of reducers: 0
2021-07-31 15:41:23,373 Stage-0 map = 0%,  reduce = 0%
2021-07-31 15:41:34,585 Stage-0 map = 100%,  reduce = 0%, Cumulative CPU 3.45 sec
MapReduce Total cumulative CPU time: 3 seconds 450 msec
Ended Job = job_1627713883513_0001
MapReduce Jobs Launched:
Stage-Stage-0: Map: 1   Cumulative CPU: 3.45 sec   HDFS Read: 3659 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 3 seconds 450 msec
OK
Time taken: 60.406 seconds

  • Query the data on the Hive side
hive (zxy)> select * from t1;
OK
r1      1       zxy
r2      2       bx
r3      3       zhg
r4      4       hyy
r5      5       tzk
r6      6       fyj
Time taken: 0.335 seconds, Fetched: 6 row(s)
hive (zxy)>
  • The data is also visible on the HBase side
hbase(main):001:0> scan 'zxy:t1'
ROW                                      COLUMN+CELL
 r1                                      column=f1:id, timestamp=1627714724257, value=001
 r1                                      column=f1:name, timestamp=1627714469210, value=zxy
 r2                                      column=f1:id, timestamp=1627714739647, value=002
 r2                                      column=f1:name, timestamp=1627714754108, value=bx
 r3                                      column=f1:id, timestamp=1627714768018, value=003
 r3                                      column=f1:name, timestamp=1627714778121, value=zhg
 r4                                      column=f1:id, timestamp=1627716660482, value=004
 r4                                      column=f1:name, timestamp=1627716670546, value=hyy
 r5                                      column=f1:id, timestamp=1627717294053, value=5
 r5                                      column=f1:name, timestamp=1627717294053, value=tzk
 r6                                      column=f1:id, timestamp=1627717294053, value=6
 r6                                      column=f1:name, timestamp=1627717294053, value=fyj
6 row(s) in 0.4660 seconds

III Hive2ClickHouse

Link to the complete project

1 pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.zxy</groupId>
    <artifactId>hive2ch</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
        <scala.version>2.11.12</scala.version>
        <play-json.version>2.3.9</play-json.version>
        <maven-scala-plugin.version>2.10.1</maven-scala-plugin.version>
        <scala-maven-plugin.version>3.2.0</scala-maven-plugin.version>
        <maven-assembly-plugin.version>2.6</maven-assembly-plugin.version>
        <spark.version>2.4.5</spark.version>
        <scope.type>compile</scope.type>
        <json.version>1.2.3</json.version>
        <!--compile provided-->
    </properties>

    <dependencies>

        <!--json 包-->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>${json.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
            <scope>${scope.type}</scope>
            <exclusions>
                <exclusion>
                    <groupId>com.google.guava</groupId>
                    <artifactId>guava</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>com.google.guava</groupId>
            <artifactId>guava</artifactId>
            <version>15.0</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
            <scope>${scope.type}</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.11</artifactId>
            <version>${spark.version}</version>
            <scope>${scope.type}</scope>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.47</version>
        </dependency>

        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
            <scope>${scope.type}</scope>
        </dependency>

        <dependency>
            <groupId>commons-codec</groupId>
            <artifactId>commons-codec</artifactId>
            <version>1.6</version>
        </dependency>

        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
            <scope>${scope.type}</scope>
        </dependency>

        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-reflect</artifactId>
            <version>${scala.version}</version>
            <scope>${scope.type}</scope>
        </dependency>

        <dependency>
            <groupId>com.github.scopt</groupId>
            <artifactId>scopt_2.11</artifactId>
            <version>4.0.0-RC2</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hudi</groupId>
            <artifactId>hudi-spark-bundle_2.11</artifactId>
            <version>0.5.2-incubating</version>
            <scope>${scope.type}</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-avro_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>com.hankcs</groupId>
            <artifactId>hanlp</artifactId>
            <version>portable-1.7.8</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-mllib_2.11</artifactId>
            <version>${spark.version}</version>
            <scope>${scope.type}</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-jdbc</artifactId>
            <version>1.2.1</version>
            <scope>${scope.type}</scope>
            <exclusions>
                <exclusion>
                    <groupId>javax.mail</groupId>
                    <artifactId>mail</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.eclipse.jetty.aggregate</groupId>
                    <artifactId>*</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

        <dependency>
            <groupId>ru.yandex.clickhouse</groupId>
            <artifactId>clickhouse-jdbc</artifactId>
            <version>0.2.4</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>compile</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.hive</groupId>
            <artifactId>hive-hbase-handler</artifactId>
            <version>1.2.1</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>1.2.0</version>
        </dependency>

    </dependencies>

    <repositories>

        <repository>
            <id>alimaven</id>
            <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
            <releases>
                <updatePolicy>never</updatePolicy>
            </releases>
            <snapshots>
                <updatePolicy>never</updatePolicy>
            </snapshots>
        </repository>
    </repositories>

    <build>
        <sourceDirectory>src/main/scala</sourceDirectory>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>${maven-assembly-plugin.version}</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>${scala-maven-plugin.version}</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                        <configuration>
                            <args>
                                <arg>-dependencyfile</arg>
                                <arg>${project.build.directory}/.scala_dependencies</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-archetype-plugin</artifactId>
                <version>2.2</version>
            </plugin>
        </plugins>
    </build>
</project>

Tips:

The POM must include the following two dependencies; otherwise the job fails with:

Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Error in loading storage handler.org.apache.hadoop.hive.hbase.HBaseStorageHandler

    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-hbase-handler</artifactId>
        <version>1.2.1</version>
    </dependency>

    <dependency>
        <groupId>org.apache.hbase</groupId>
        <artifactId>hbase-server</artifactId>
        <version>1.2.0</version>
    </dependency>

2 Config

package com.zxy.bigdata.hive2CH.conf

import org.slf4j.LoggerFactory
import scopt.OptionParser

/**
 * Configuration class
 */
case class Config(
                 env:String = "",
                 username:String = "",
                 password:String = "",
                 url:String = "",
                 cluster:String = "",
                 stateDate:String = "",
                 endDate:String = "",
                 proxyUser:String = "root",
                 topK:Int = 25
                 )

object Config {
    private val logger = LoggerFactory.getLogger(Config.getClass.getSimpleName)
    
    
    /**
     * Parse command-line arguments
     */
    def parseConfig(obj:Object, args: Array[String]):Config = {
        //1. Program name (the class name without the trailing $)
        val pargramName = obj.getClass.getSimpleName.replace("$", "")
        //2. Build the option parser (similar to getopts)
        val parser = new OptionParser[Config]("spark sql " + pargramName) {
            head(pargramName, "v1.0") // header
            // required options
            opt[String]('e', "env").required().action((x, config) => config.copy(env = x)).text("dev or prod")
            opt[String]('x', "proxyUser").required().action((x, config) => config.copy(proxyUser = x)).text("proxy username")
            
            pargramName match {
                case "Hive2Clickhouse" => {
                    logger.info("Hive2Clickhouse")
                    opt[String]('n', "username").action((x, config) => config.copy(username = x)).text("username")
                    opt[String]('p', "password").required().action((x, config) => config.copy(password = x)).text("user pass")
                    opt[String]('u', "url").required().action((x, config) => config.copy(url = x)).text("url")
                    opt[String]('c', "cluster").required().action((x, config) => config.copy(cluster = x)).text("cluster name")
                }
                case _ =>
            }
        }
        parser.parse(args, Config()) match {
            case Some(conf) => conf
            case None => {
                logger.error("cannot parse args")
                System.exit(-1)
                null
            }
        }
    }
}
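
As a quick check, here is a minimal usage sketch (not part of the original project; the ConfigDemo object and its printed output are hypothetical) showing how the flags passed to spark-submit in Part IV map onto the Config case class:

import com.zxy.bigdata.hive2CH.Hive2Clickhouse
import com.zxy.bigdata.hive2CH.conf.Config

object ConfigDemo {
    def main(args: Array[String]): Unit = {
        // The same flags used in the spark-submit command in Part IV.
        val demoArgs = Array(
            "-e", "prod",
            "-u", "jdbc:clickhouse://192.168.130.111:8321",
            "-n", "zxy-insert",
            "-p", "zxy-0613",
            "-x", "root",
            "-c", "1")
        // Passing the Hive2Clickhouse object makes pargramName match the case
        // branch that registers the -n/-p/-u/-c options.
        val conf = Config.parseConfig(Hive2Clickhouse, demoArgs)
        // Should print roughly:
        // Config(prod,zxy-insert,zxy-0613,jdbc:clickhouse://192.168.130.111:8321,1,,,root,25)
        println(conf)
    }
}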

3 SparkUtils

package com.zxy.bigdata.hive2CH.utils

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.slf4j.LoggerFactory


object SparkUtils {
    private val logger = LoggerFactory.getLogger(SparkUtils.getClass.getSimpleName)
    
    def getSparkSession(env:String, appName:String):SparkSession = {
        val conf = new SparkConf()
            .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            .set("spark.sql.hive.metastore.version", "1.2.1")
            .set("spark.sql.cbo.enabled", "true") // 开启spark sql底层的优化器
            .set("spark.hadoop.dfs.client.block.write.replace-datanode-on-failure.enable", "true")
            .set("spark.hadoop.dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER")
        
        env match {
            case "prod" => {
                conf.setAppName(appName + "_" +"prod")
                SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
            }
            case "dev" => {
                conf.setMaster("local[*]").setAppName(appName + "_" +"dev").set("spark.sql.hive.metastore.jars", "maven")
                SparkSession.builder().config(conf).enableHiveSupport().getOrCreate()
            }
            case _ => {
                logger.error("not match env exist")
                System.exit(-1)
                null
            }
        }
    }
}
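
For context, a minimal usage sketch (hypothetical, not part of the project); it assumes the Hive/Hadoop configuration files listed in the tips after Hive2Clickhouse (core-site.xml, hive-site.xml, yarn-site.xml) are on the classpath:

import com.zxy.bigdata.hive2CH.utils.SparkUtils

object SparkUtilsDemo {
    def main(args: Array[String]): Unit = {
        // "dev" runs locally on local[*]; "prod" expects to be launched
        // through spark-submit on YARN (see Part IV).
        val spark = SparkUtils.getSparkSession("dev", "SparkUtilsDemo")
        spark.sql("show databases").show()
        spark.stop()
    }
}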

4 ClickHouseUtils

package com.zxy.bigdata.hive2CH.utils

import ru.yandex.clickhouse.ClickHouseDataSource
import ru.yandex.clickhouse.settings.ClickHouseProperties

/**
 * ClickHouse DataSource
 */
object ClickHouseUtils {
    /**
     * Build a DataSource connected to ClickHouse
     */
    def getDataSource(user:String,password:String,url:String):ClickHouseDataSource = {
        Class.forName("ru.yandex.clickhouse.ClickHouseDriver")
        val properties = new ClickHouseProperties()
        properties.setUser(user)
        properties.setPassword(password)
        val dataSource = new ClickHouseDataSource(url,properties)
        dataSource
    }
    
    /**
     * Convert a DataFrame column spec such as "uid string,gender string,..."
     * into the ClickHouse form "uid String,gender String,..."
     */
    def dfTypeName2CH(dfCol:String) = {
        dfCol.split(",").map(line => {
            val fields: Array[String] = line.split(" ")
            val fType: String = dfType2CHType(fields(1))
            val fName: String = fields(0)
            fName + " " + fType
        }).mkString(",")
    }
    
    /**
     * Map a Spark SQL type name (DataType.simpleString) to a ClickHouse type.
     * Note that simpleString produces "int" and "bigint" rather than
     * "integer"/"long", so those fall through to the default String case.
     */
    def dfType2CHType(fieldType: String):String = {
        fieldType.toLowerCase() match {
            case "string" => "String"
            case "integer" => "Int32"
            case "long" => "Int64"
            case "float" => "Float32"
            case "double" => "Float64"
            case "date" => "Date"
            case "timestamp" => "Datetime"
            case _ => "String"
        }
    }
}
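
A small sketch (hypothetical, not part of the project) that runs the zxy.t1 column spec through these helpers; it also explains why the table created in the run log in Part IV uses id String even though id is an int in Hive:

import com.zxy.bigdata.hive2CH.utils.ClickHouseUtils

object ClickHouseUtilsDemo {
    def main(args: Array[String]): Unit = {
        // "id int" is what DataType.simpleString produces for IntegerType;
        // "int" matches none of the explicit cases in dfType2CHType, so it
        // falls back to String, which is exactly what the run log shows:
        // create table zxy.t1(uid String,id String,name String)
        val chCols = ClickHouseUtils.dfTypeName2CH("uid string,id int,name string")
        println(chCols) // uid String,id String,name String
    }
}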

5 TableUtils

package com.zxy.bigdata.hive2CH.utils

import java.sql.PreparedStatement

import com.zxy.bigdata.hive2CH.conf.Config
import org.apache.spark.sql.types.{IntegerType, LongType, StringType}
import org.apache.spark.sql.{DataFrame, Row}
import org.slf4j.LoggerFactory
import ru.yandex.clickhouse.{ClickHouseConnection, ClickHouseDataSource}

/**
 * Helpers for creating the ClickHouse table and writing data into it
 */
object TableUtils {
    
    /**
     * Insert one partition of rows into ClickHouse via JDBC batches
     */
    def insertBaseFeatureTable(partition:Iterator[Row],insertSql:String,params:Config): Unit ={
        var batchCount = 0 //rows currently buffered in the batch
        val batchSize = 2000 //maximum batch size before a flush
        var lastBatchTime = System.currentTimeMillis()
        
        //1. Get the data source
        val dataSource: ClickHouseDataSource = ClickHouseUtils.getDataSource(params.username,params.password,params.url)
        val connection: ClickHouseConnection = dataSource.getConnection
        val ps: PreparedStatement = connection.prepareStatement(insertSql)
        
        //2. Fill the placeholders of the insert statement
        partition.foreach(row => {
            var index = 1
            row.schema.fields.foreach(field => {
                field.dataType match {
                    case StringType => ps.setString(index,row.getAs[String](field.name))
                    case LongType => ps.setLong(index,row.getAs[Long](field.name))
                    case IntegerType => ps.setInt(index,row.getAs[Int](field.name))
                    case _ => logger.error(s"type is err,${field.dataType}")
                }
                index += 1
            })
            //3. Add the row to the batch
            ps.addBatch()
            batchCount += 1
            //flush when the batch is full or older than 3 seconds
            if (batchCount >= batchSize || lastBatchTime < System.currentTimeMillis() - 3000){
                lastBatchTime = System.currentTimeMillis()
                ps.executeBatch() //执行批缓存中的所有数据
                logger.warn(s"send data to clickhouse,batchNum:${batchCount},batchTime:${System.currentTimeMillis() - lastBatchTime}")
                batchCount = 0
            }
        })
        //flush whatever is left, regardless of size
        ps.executeBatch()
        logger.warn(s"send data to clickhouse,batchNum:${batchCount},batchTime:${System.currentTimeMillis() - lastBatchTime}")
        
        ps.close()
        connection.close()
    }
    private val logger = LoggerFactory.getLogger(TableUtils.getClass.getSimpleName)
    
    val USER_PROFILE_CLICKHOUSE_DATABASE = "zxy"
    val USER_PROFILE_CLICKHOUSE_TABLE = "t1"
    
    /**
     * Create the ClickHouse table from the DataFrame schema and return the
     * column-name list and placeholder string used for the insert statement
     */
    def getClickHouseProfileBaseTable(baseFeatureDF:DataFrame,params:Config):(String,String) = {
        /*
         * For a baseFeatureDF with a schema like the following, the extracted values are:
         * fieldName = uid, gender, age, region, email_suffix, model
         * fieldType = uid string, gender string, age string, region string, email_suffix string, model string
         * pl =        ?, ?, ?, ?, ?, ?
         */
        /**
         * 1. Extract field names, field types and placeholders from the schema
         */
        val (fieldName,fieldType,pl) = baseFeatureDF.schema.fields.foldLeft("", "", "")(
            (z, f) => {
                if (z._1.nonEmpty && z._2.nonEmpty && z._3.nonEmpty) {
                    //non-empty means this is not the first field, so append with separators
                    (z._1 + "," + f.name, z._2 + "," + f.name + " " + f.dataType.simpleString, z._3 + ",?")
                } else {
                    (f.name, f.name + " " + f.dataType.simpleString, "?")
                }
            }
        )
        ("","")
    
        /**
         * 2. Convert the Spark type expression to ClickHouse column definitions
         */
        val chCol: String = ClickHouseUtils.dfTypeName2CH(fieldType)
    
        /**
         * 3. Get the ClickHouse cluster name from the config
         */
        val cluster: String = params.cluster
    
        /**
         * 4. Create the database
         */
        val createDatabaseSql =
            s"create database if not exists ${USER_PROFILE_CLICKHOUSE_DATABASE}"
    
        /**
         * 5. Create the table in ClickHouse
         */
        val chTableSql =
            s"""
               |create table ${USER_PROFILE_CLICKHOUSE_DATABASE}.${USER_PROFILE_CLICKHOUSE_TABLE}(${chCol})
               |ENGINE = MergeTree()
               |ORDER BY(uid)
               |""".stripMargin
    
        /**
         * 6. Drop the old table if it exists
         */
        val dropTableSql = s"drop table if exists ${USER_PROFILE_CLICKHOUSE_DATABASE}.${USER_PROFILE_CLICKHOUSE_TABLE}"
    
        /**
         * 7. Connect to ClickHouse and execute the statements
         */
        val dataSource: ClickHouseDataSource = ClickHouseUtils.getDataSource(params.username,params.password,params.url)
        val connection: ClickHouseConnection = dataSource.getConnection
        logger.warn(createDatabaseSql)
        var ps: PreparedStatement = connection.prepareStatement(createDatabaseSql)
        ps.execute()
        logger.warn(dropTableSql)
        ps = connection.prepareStatement(dropTableSql)
        ps.execute()
        logger.warn(chTableSql)
        ps = connection.prepareStatement(chTableSql)
        ps.execute()
        ps.close()
        connection.close()
        logger.info("init success")
        (fieldName,pl)
    }
}
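
To make the foldLeft above concrete, here is a standalone sketch (hypothetical, not part of the project) applied to the schema of zxy.t1; the results match the insert statement that appears in the run log later (insert into zxy.t1(uid,id,name) values(?,?,?)):

import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}

object FoldLeftDemo {
    def main(args: Array[String]): Unit = {
        val schema = StructType(Seq(
            StructField("uid", StringType),
            StructField("id", IntegerType),
            StructField("name", StringType)))
        // Same fold as in getClickHouseProfileBaseTable, with an explicit
        // tuple as the initial value.
        val (fieldName, fieldType, pl) = schema.fields.foldLeft(("", "", "")) { (z, f) =>
            if (z._1.nonEmpty) (z._1 + "," + f.name, z._2 + "," + f.name + " " + f.dataType.simpleString, z._3 + ",?")
            else (f.name, f.name + " " + f.dataType.simpleString, "?")
        }
        println(fieldName) // uid,id,name
        println(fieldType) // uid string,id int,name string
        println(pl)        // ?,?,?
    }
}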

6 Hive2Clickhouse

package com.zxy.bigdata.hive2CH

import com.zxy.bigdata.hive2CH.conf.Config
import com.zxy.bigdata.hive2CH.utils.{SparkUtils, TableUtils}
import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.slf4j.LoggerFactory

object Hive2Clickhouse {
    private val logger = LoggerFactory.getLogger(Hive2Clickhouse.getClass.getSimpleName)
    
    def main(args: Array[String]): Unit = {
        Logger.getLogger("org").setLevel(Level.WARN)
        //1. Parse arguments
        val params: Config = Config.parseConfig(Hive2Clickhouse,args)
        logger.warn("job is running,please wait...")
        
        //2. Get the SparkSession
        val spark: SparkSession = SparkUtils.getSparkSession(params.env,Hive2Clickhouse.getClass.getSimpleName)
        import spark.implicits._
        
        //3. Limit the result set when testing locally
        var limitData = ""
        if(params.env.trim.equals("dev")){
            limitData = "limit 10"
        }
        //4. Read zxy.t1 from Hive
        val hiveDataSql =
            s"""
               |select
               |uid,
               |id,
               |name
               |from
               |zxy.t1 ${limitData}
               |""".stripMargin
        
        val hiveDF: DataFrame = spark.sql(hiveDataSql)
        hiveDF.show()
        
    
        //5. Create the ClickHouse table; meta = (column-name list, placeholder string)
        val meta = TableUtils.getClickHouseProfileBaseTable(hiveDF,params)
    
        //6. Insert the data partition by partition
        val insertSql =
            s"""
               |insert into ${TableUtils.USER_PROFILE_CLICKHOUSE_DATABASE}.${TableUtils.USER_PROFILE_CLICKHOUSE_TABLE}(${meta._1}) values(${meta._2})
               |""".stripMargin
    
        logger.warn(insertSql)
    
        hiveDF.foreachPartition(partition => {
            TableUtils.insertBaseFeatureTable(partition,insertSql,params)
        })
        hiveDF.unpersist()
        spark.stop()
        logger.info("job has successed")
    }
}

Tips:

The following files need to be added under the resources directory:

core-site.xml

hive-site.xml

yarn-site.xml

IV Submit and Test

1 Spark-submit

sh /opt/apps/spark-2.4.5/bin/spark-submit \
--jars /opt/apps/hive-1.2.1/auxlib/hudi-spark-bundle_2.11-0.5.2-incubating.jar \
--conf spark.executor.heartbeatInterval=120s \
--conf spark.network.timeout=600s \
--conf spark.sql.catalogImplementation=hive \
--conf spark.sql.shuffle.partitions=20 \
--conf spark.yarn.submit.waitAppCompletion=true \
--conf spark.sql.hive.convertMetastoreParquet=false \
--name hbase_hive_clickhouse \
--master yarn \
--deploy-mode client \
--driver-memory 1G \
--executor-memory 3G \
--num-executors 1 \
--class com.zxy.bigdata.hive2CH.Hive2Clickhouse \
/data/jars/hive2ch.jar \
-e prod -u jdbc:clickhouse://192.168.130.111:8321 -n zxy-insert -p zxy-0613 -x root -c 1

2 Test Run

[root@hadoop jars]# sh /opt/apps/spark-2.4.5/bin/spark-submit \
> --jars /opt/apps/hive-1.2.1/auxlib/hudi-spark-bundle_2.11-0.5.2-incubating.jar \
> --conf spark.executor.heartbeatInterval=120s \
> --conf spark.network.timeout=600s \
> --conf spark.sql.catalogImplementation=hive \
> --conf spark.sql.shuffle.partitions=20 \
> --conf spark.yarn.submit.waitAppCompletion=true \
> --conf spark.sql.hive.convertMetastoreParquet=false \
> --name hbase_hive_clickhouse \
> --master yarn \
> --deploy-mode client \
> --driver-memory 1G \
> --executor-memory 3G \
> --num-executors 1 \
> --class com.zxy.bigdata.hive2CH.Hive2Clickhouse \
> /data/jars/hive2ch.jar \
> -e prod -u jdbc:clickhouse://192.168.130.111:8321 -n zxy-insert -p zxy-0613 -x root -c 1
21/07/31 17:41:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/07/31 17:41:47 INFO Config$: Hive2Clickhouse
21/07/31 17:41:47 WARN Hive2Clickhouse$: job is running,please wait...
21/07/31 17:41:51 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
21/07/31 17:42:25 INFO hive.metastore: Trying to connect to metastore with URI thrift://192.168.130.111:9083
21/07/31 17:42:25 INFO hive.metastore: Connected to metastore.
21/07/31 17:42:37 WARN mapreduce.TableInputFormatBase: You are using an HTable instance that relies on an HBase-managed Connection. This is usually due to directly creating an HTable, which is deprecated. Instead, you should create a Connection object and then request a Table instance from it. If you don't need the Table instance for your own use, you should instead use the TableInputFormatBase.initalizeTable method directly.
+---+---+----+
|uid| id|name|
+---+---+----+
| r1|  1| zxy|
| r2|  2|  bx|
| r3|  3| zhg|
| r4|  4| hyy|
| r5|  5| tzk|
| r6|  6| fyj|
+---+---+----+

21/07/31 17:43:03 INFO clickhouse.ClickHouseDriver: Driver registered
21/07/31 17:43:04 WARN TableUtils$: create database if not exists zxy
21/07/31 17:43:04 WARN TableUtils$: drop table if exists zxy.t1
21/07/31 17:43:04 WARN TableUtils$:
create table zxy.t1(uid String,id String,name String)
ENGINE = MergeTree()
ORDER BY(uid)

21/07/31 17:43:04 INFO TableUtils$: init success
21/07/31 17:43:04 WARN Hive2Clickhouse$:
insert into zxy.t1(uid,id,name) values(?,?,?)

21/07/31 17:43:04 WARN mapreduce.TableInputFormatBase: You are using an HTable instance that relies on an HBase-managed Connection. This is usually due to directly creating an HTable, which is deprecated. Instead, you should create a Connection object and then request a Table instance from it. If you don't need the Table instance for your own use, you should instead use the TableInputFormatBase.initalizeTable method directly.
21/07/31 17:43:10 INFO Hive2Clickhouse$: job has succeeded

3 Check the ClickHouse Table

clickhouse-client --host 192.168.130.111 --port 9999 --password qwert

[root@hadoop jars]# clickhouse-client --host 192.168.130.111 --port 9999 --password qwert
ClickHouse client version 20.3.12.112.
Connecting to 192.168.130.111:9999 as user default.
Connected to ClickHouse server version 20.3.12 revision 54433.

hadoop :) show databases;

SHOW DATABASES

┌─name─────┐
│ app_news │
│ default  │
│ system   │
│ zxy      │
└──────────┘

4 rows in set. Elapsed: 0.179 sec.

hadoop :) use zxy;

USE zxy

Ok.

0 rows in set. Elapsed: 0.015 sec.

hadoop :) show tables;

SHOW TABLES

┌─name─┐
│ t1   │
└──────┘

1 rows in set. Elapsed: 1.279 sec.

hadoop :) select * from t1;

SELECT *
FROM t1

┌─uid─┬─id─┬─name─┐
│ r1  │ 1  │ zxy  │
│ r2  │ 2  │ bx   │
│ r3  │ 3  │ zhg  │
│ r4  │ 4  │ hyy  │
│ r5  │ 5  │ tzk  │
│ r6  │ 6  │ fyj  │
└─────┴────┴──────┘

6 rows in set. Elapsed: 0.095 sec.

hadoop :)
