Hudi catalog
A Hudi catalog is a kind of external catalog that enables you to query data from Apache Hudi without ingestion.
You can also transform and load data directly from Hudi by using INSERT INTO together with Hudi catalogs. StarRocks supports Hudi catalogs from v2.4 onwards.
To ensure successful SQL workloads on your Hudi cluster, your StarRocks cluster must be able to integrate with two important components:
- Distributed file system (HDFS) or object storage like AWS S3, Microsoft Azure Storage, Google GCS, or another S3-compatible storage system (for example, MinIO)
- Metastore like Hive metastore or AWS Glue
NOTE
If you choose AWS S3 as storage, you can use HMS or AWS Glue as metastore. If you choose any other storage system, you can only use HMS as metastore.
Usage notes
- The file format of Hudi that StarRocks supports is Parquet. Parquet files support the following compression formats: SNAPPY, LZ4, ZSTD, GZIP, and NO_COMPRESSION.
- StarRocks provides complete support for Copy On Write (COW) tables and Merge On Read (MOR) tables from Hudi.
Integration preparations
Before you create a Hudi catalog, make sure your StarRocks cluster can integrate with the storage system and metastore of your Hudi cluster.
AWS IAM
If your Hudi cluster uses AWS S3 as storage or AWS Glue as metastore, choose your suitable authentication method and make the required preparations to ensure that your StarRocks cluster can access the related AWS cloud resources.
The following authentication methods are recommended:
- Instance profile
- Assumed role
- IAM user
Of the above-mentioned three authentication methods, instance profile is the most widely used.
For more information, see Preparation for authentication in AWS IAM.
HDFS
If you choose HDFS as storage, configure your StarRocks cluster as follows:
- (Optional) Set the username that is used to access your HDFS cluster and Hive metastore. By default, StarRocks uses the username of the FE and BE processes to access your HDFS cluster and Hive metastore. You can also set the username by adding export HADOOP_USER_NAME="<user_name>" at the beginning of the fe/conf/hadoop_env.sh file of each FE and at the beginning of the be/conf/hadoop_env.sh file of each BE. After you set the username in these files, restart each FE and each BE to make the parameter settings take effect. You can set only one username for each StarRocks cluster.
- When you query Hudi data, the FEs and BEs of your StarRocks cluster use the HDFS client to access your HDFS cluster. In most cases, you do not need to configure your StarRocks cluster to achieve that purpose, and StarRocks starts the HDFS client using the default configurations. You need to configure your StarRocks cluster only in the following situations:
- High availability (HA) is enabled for your HDFS cluster: Add the hdfs-site.xml file of your HDFS cluster to the $FE_HOME/conf path of each FE and to the $BE_HOME/conf path of each BE.
- View File System (ViewFs) is enabled for your HDFS cluster: Add the core-site.xml file of your HDFS cluster to the $FE_HOME/conf path of each FE and to the $BE_HOME/conf path of each BE.
NOTE
If an error indicating an unknown host is returned when you send a query, you must add the mapping between the host names and IP addresses of your HDFS cluster nodes to the /etc/hosts file.
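For example, the entries in /etc/hosts on each FE and BE node might look like the following. The host names and IP addresses below are hypothetical; replace them with those of your own HDFS cluster nodes:
192.168.1.10  namenode1.example.com  namenode1
192.168.1.11  namenode2.example.com  namenode2
192.168.1.20  datanode1.example.com  datanode1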
Kerberos authentication
If Kerberos authentication is enabled for your HDFS cluster or Hive metastore, configure your StarRocks cluster as follows:
- Run the kinit -kt keytab_path principal command on each FE and each BE to obtain a Ticket Granting Ticket (TGT) from the Key Distribution Center (KDC). To run this command, you must have the permissions to access your HDFS cluster and Hive metastore. Note that the TGT obtained with this command is time-sensitive, so you need to use cron to run the command periodically, as shown in the example after this list.
- Add JAVA_OPTS="-Djava.security.krb5.conf=/etc/krb5.conf" to the $FE_HOME/conf/fe.conf file of each FE and to the $BE_HOME/conf/be.conf file of each BE. In this example, /etc/krb5.conf is the save path of the krb5.conf file. You can modify the path based on your needs.
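For example, the following crontab entry renews the TGT every 6 hours on each FE and BE. The keytab path and principal shown here are hypothetical; substitute the ones used in your environment:
0 */6 * * * kinit -kt /etc/security/keytabs/starrocks.keytab starrocks@EXAMPLE.COM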
Create a Hudi catalog
Syntax
CREATE EXTERNAL CATALOG <catalog_name>
[COMMENT <comment>]
PROPERTIES
(
"type" = "hudi",
MetastoreParams,
StorageCredentialParams,
MetadataUpdateParams
)
Parameters
catalog_name
The name of the Hudi catalog. The naming conventions are as follows:
- The name can contain letters, digits (0-9), and underscores (_). It must start with a letter.
- The name is case-sensitive and cannot exceed 1023 characters in length.
comment
The description of the Hudi catalog. This parameter is optional.
type
The type of your data source. Set the value to hudi.
MetastoreParams
A set of parameters about how StarRocks integrates with the metastore of your data source.
Hive metastore
If you choose Hive metastore as the metastore of your data source, configure MetastoreParams as follows:
"hive.metastore.type" = "hive",
"hive.metastore.uris" = "<hive_metastore_uri>"
NOTE
Before querying Hudi data, you must add the mapping between the host names and IP addresses of your Hive metastore nodes to the /etc/hosts file. Otherwise, StarRocks may fail to access your Hive metastore when you start a query.
The following table describes the parameters you need to configure in MetastoreParams.
Parameter | Required | Description |
---|---|---|
hive.metastore.type | Yes | The type of metastore that you use for your Hudi cluster. Set the value to hive. |
hive.metastore.uris | Yes | The URI of your Hive metastore. Format: thrift://<metastore_IP_address>:<metastore_port>. If high availability (HA) is enabled for your Hive metastore, you can specify multiple metastore URIs and separate them with commas (,). |
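For example, the following statement creates a Hudi catalog that uses a Hive metastore. The catalog name and metastore address are hypothetical, and HDFS is assumed as storage, so no StorageCredentialParams are required:
CREATE EXTERNAL CATALOG hudi_catalog_hms
PROPERTIES
(
    "type" = "hudi",
    "hive.metastore.type" = "hive",
    "hive.metastore.uris" = "thrift://192.168.1.100:9083"
);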
AWS Glue
If you choose AWS Glue as the metastore of your data source, which is supported only when you choose AWS S3 as storage, take one of the following actions:
- To choose the instance profile-based authentication method, configure MetastoreParams as follows:
"hive.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "true",
"aws.glue.region" = "<aws_glue_region>"
- To choose the assumed role-based authentication method, configure MetastoreParams as follows:
"hive.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "true",
"aws.glue.iam_role_arn" = "<iam_role_arn>",
"aws.glue.region" = "<aws_glue_region>"
- To choose the IAM user-based authentication method, configure MetastoreParams as follows:
"hive.metastore.type" = "glue",
"aws.glue.use_instance_profile" = "false",
"aws.glue.access_key" = "<iam_user_access_key>",
"aws.glue.secret_key" = "<iam_user_secret_key>",
"aws.glue.region" = "<aws_glue_region>"
The following table describes the parameters you need to configure in MetastoreParams.
Parameter | Required | Description |
---|---|---|
hive.metastore.type | Yes | The type of metastore that you use for your Hudi cluster. Set the value to glue. |
aws.glue.use_instance_profile | Yes | Specifies whether to enable the instance profile-based authentication method and the assumed role-based authentication method. Valid values: true and false. |
aws.glue.iam_role_arn | No | The ARN of the IAM role that has privileges on your AWS Glue Data Catalog. If you use the assumed role-based authentication method to access AWS Glue, you must specify this parameter. |
aws.glue.region | Yes | The region in which your AWS Glue Data Catalog resides. Example: us-west-1. |
aws.glue.access_key | No | The access key of your AWS IAM user. If you use the IAM user-based authentication method to access AWS Glue, you must specify this parameter. |
aws.glue.secret_key | No | The secret key of your AWS IAM user. If you use the IAM user-based authentication method to access AWS Glue, you must specify this parameter. |
For information about how to choose an authentication method for accessing AWS Glue and how to configure an access control policy in the AWS IAM Console, see Authentication parameters for accessing AWS Glue.
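For example, the following statement creates a Hudi catalog that uses AWS Glue as metastore with the instance profile-based authentication method. The catalog name and region are illustrative, and the aws.s3.* properties are the StorageCredentialParams described in the next section, which are needed because AWS Glue implies AWS S3 storage:
CREATE EXTERNAL CATALOG hudi_catalog_glue
PROPERTIES
(
    "type" = "hudi",
    "hive.metastore.type" = "glue",
    "aws.glue.use_instance_profile" = "true",
    "aws.glue.region" = "us-west-1",
    "aws.s3.use_instance_profile" = "true",
    "aws.s3.region" = "us-west-1"
);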
StorageCredentialParams
A set of parameters about how StarRocks integrates with your storage system. This parameter set is optional.
If you use HDFS as storage, you do not need to configure StorageCredentialParams.
If you use AWS S3, another S3-compatible storage system, Microsoft Azure Storage, or Google GCS as storage, you must configure StorageCredentialParams.
AWS S3
If you choose AWS S3 as storage for your Hudi cluster, take one of the following actions:
- To choose the instance profile-based authentication method, configure StorageCredentialParams as follows:
"aws.s3.use_instance_profile" = "true",
"aws.s3.region" = "<aws_s3_region>"
- To choose the assumed role-based authentication method, configure StorageCredentialParams as follows:
"aws.s3.use_instance_profile" = "true",
"aws.s3.iam_role_arn" = "<iam_role_arn>",
"aws.s3.region" = "<aws_s3_region>"