Tips before filing an issue
Have you gone through our FAQs?
Join the mailing list to engage in conversations and get faster support at [email protected].
If you have triaged this as a bug, then file an issue directly.
Describe the problem you faced
We are running a Glue job that writes to 2 Hudi tables on an hourly schedule. It usually takes around 10-12 minutes, but occasionally it takes more than 25 minutes. When we dug into it, the count stage of "Doing partition and writing data" takes much longer for one partition even though the number of records is small. Note that we receive roughly 100k to 200k records per hour on average. We could not find any pattern in which partition is affected.
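To check whether a single partition dominates a given hourly batch, something like the following can be used (a rough sketch only; `partition_col` stands in for the actual partition column passed to `write_df_to_hudi` below):

```python
from pyspark.sql import functions as F

# Count incoming records per partition value for one hourly batch
# to see whether one partition is heavily skewed relative to the others.
df.groupBy("partition_col").count().orderBy(F.desc("count")).show(50, truncate=False)
```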
To Reproduce
hudi_options = {
    # Write behaviour and table layout
    'hoodie.datasource.write.partitionpath.urlencode': 'true',
    'hoodie.datasource.write.table.type': 'COPY_ON_WRITE',
    'hoodie.datasource.write.reconcile.schema': 'true',
    'hoodie.schema.on.read.enable': 'true',
    'hoodie.table.base.file.format': 'PARQUET',
    'hoodie.parquet.compression.codec': 'snappy',
    'hoodie.datasource.write.hive_style_partitioning': 'true',
    # Hive metastore sync
    'hoodie.datasource.hive_sync.enable': 'true',
    'hoodie.datasource.hive_sync.partition_extractor_class': 'org.apache.hudi.hive.MultiPartKeysValueExtractor',
    'hoodie.datasource.hive_sync.use_jdbc': 'false',
    'hoodie.datasource.hive_sync.mode': 'hms',
    'hoodie.datasource.hive_sync.support_timestamp': 'true',
    # File sizing: 120 MB max parquet file size, 100 MB small-file limit,
    # 5 KB estimated record size used for bin-packing on COPY_ON_WRITE
    'hoodie.parquet.max.file.size': '125829120',
    'hoodie.parquet.small.file.limit': '104857600',
    'hoodie.copyonwrite.record.size.estimate': '5120'
}
def write_df_to_hudi(df, target_path, partition_key_column, database, table_name, primary_key, mode='append',
timestamp_column_name='timestamp'):
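The body of the helper is not included above; as a minimal sketch, it essentially layers per-table options on top of `hudi_options` and writes with the Hudi datasource (the exact per-table keys and their mapping to the parameters may differ from our job):

```python
def write_df_to_hudi(df, target_path, partition_key_column, database, table_name, primary_key, mode='append',
                     timestamp_column_name='timestamp'):
    # Per-table options layered on top of the shared hudi_options above.
    per_table_options = {
        'hoodie.table.name': table_name,
        'hoodie.datasource.write.recordkey.field': primary_key,
        'hoodie.datasource.write.partitionpath.field': partition_key_column,
        'hoodie.datasource.write.precombine.field': timestamp_column_name,
        'hoodie.datasource.hive_sync.database': database,
        'hoodie.datasource.hive_sync.table': table_name,
        'hoodie.datasource.hive_sync.partition_fields': partition_key_column,
    }
    (df.write.format('hudi')
        .options(**hudi_options)
        .options(**per_table_options)
        .mode(mode)
        .save(target_path))
```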
Expected behavior
We are looking for consistent run times, or run times that scale directly with the number of records.
Environment Description
Hudi version : 0.12.1
Spark version : 3.3
Hive version :
Hadoop version :
Storage (HDFS/S3/GCS..) : s3
Running on Docker? (yes/no) : no