[SUPPORT] Hudi write occasionally taking more time for one partition in AWS Glue #12685

Open
logesr opened this issue Jan 21, 2025 · 0 comments

Tips before filing an issue

  • Have you gone through our FAQs?

  • Join the mailing list to engage in conversations and get faster support at [email protected].

  • If you have triaged this as a bug, then file an issue directly.

Describe the problem you faced

We run an hourly Glue job that writes to two Hudi tables. It usually takes around 10-12 minutes, but occasionally it takes more than 25 minutes. When we dug into the slow runs, the count stage under "Doing partition and writing data" spends much more time on one partition even though that partition has relatively few records. We receive roughly 100k-200k records per hour on average, and we could not find any pattern in which partition is affected.
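To quantify the "fewer records but slower" observation, we can log per-partition record counts just before each write and compare them against the stage timings. This is only a rough sketch (log_partition_counts is not part of the original job); it assumes df still carries the partition column used by write_df_to_hudi below:

from pyspark.sql import functions as F

def log_partition_counts(df, partition_key_column):
    # Count records per partition value for this batch and print the largest ones,
    # so slow runs can be compared against the actual per-partition volume.
    (df.groupBy(partition_key_column)
       .count()
       .orderBy(F.col('count').desc())
       .show(50, truncate=False))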

To Reproduce

hudi_options = {
    'hoodie.datasource.write.partitionpath.urlencode': 'true',
    'hoodie.datasource.write.table.type': 'COPY_ON_WRITE',
    'hoodie.datasource.write.reconcile.schema': 'true',
    'hoodie.schema.on.read.enable': 'true',
    'hoodie.table.base.file.format': 'PARQUET',
    'hoodie.parquet.compression.codec': 'snappy',
    'hoodie.datasource.write.hive_style_partitioning': 'true',
    'hoodie.datasource.hive_sync.enable': 'true',
    'hoodie.datasource.hive_sync.partition_extractor_class': 'org.apache.hudi.hive.MultiPartKeysValueExtractor',
    'hoodie.datasource.hive_sync.use_jdbc': 'false',
    'hoodie.datasource.hive_sync.mode': 'hms',
    'hoodie.datasource.hive_sync.support_timestamp': 'true',
    'hoodie.parquet.max.file.size': '125829120',
    'hoodie.parquet.small.file.limit': '104857600',
    'hoodie.copyonwrite.record.size.estimate': '5120'
}
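# For reference, the sizing implied by the values above (plain arithmetic, not output from the job):
#   hoodie.parquet.max.file.size            = 125829120 bytes = 120 MiB
#   hoodie.parquet.small.file.limit         = 104857600 bytes = 100 MiB
#   hoodie.copyonwrite.record.size.estimate = 5120 bytes      = 5 KiB
#   so a full base file is expected to hold about 125829120 / 5120 = 24576 records,
#   and base files under 100 MiB stay candidates for small-file bin-packing on the next write.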

def write_df_to_hudi(df, target_path, partition_key_column, database, table_name, primary_key,
                     mode='append', timestamp_column_name='timestamp'):
    hudi_options.update({
        'hoodie.table.name': table_name,
        'hoodie.datasource.write.recordkey.field': primary_key,
        'hoodie.datasource.write.partitionpath.field': partition_key_column,
        'hoodie.datasource.write.precombine.field': timestamp_column_name,
        'hoodie.datasource.hive_sync.database': database,
        'hoodie.datasource.hive_sync.table': table_name,
        'hoodie.datasource.hive_sync.partition_fields': partition_key_column,
    })
    df.write.format('org.apache.hudi') \
        .option('hoodie.datasource.write.operation', 'INSERT') \
        .options(**hudi_options) \
        .mode(mode) \
        .save(target_path)
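For completeness, the function is invoked once per table every hour along the lines of the call below; the bucket, database, table, and column names here are placeholders rather than the real job arguments:

write_df_to_hudi(
    df=hourly_df,
    target_path='s3://example-bucket/hudi/events',
    partition_key_column='event_date',
    database='example_db',
    table_name='events',
    primary_key='event_id',
    timestamp_column_name='timestamp'
)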

Expected behavior

We are looking for consistent run times, or at least run times that scale directly with the number of records.

Environment Description

  • Hudi version : 0.12.1

  • Spark version : 3.3

  • Hive version :

  • Hadoop version :

  • Storage (HDFS/S3/GCS..) : s3

  • Running on Docker? (yes/no) : no
