A model named BERT-NAFE for SemEval-2024 Task 7. Accuracy 77.09%

anson70242/BERT-NAFE


BERT-NAFE

This study explores Task 2 of NumEval-2024, i.e. SemEval-2024 (Semantic Evaluation) Task 7, which focuses on reading comprehension of numerals in Chinese text. The dataset used is the Numeral-related Question Answering Dataset (NQuAD), and the model employed is BERT. Before training, the data undergoes preprocessing that applies Numerals Augmentation and Feature Enhancement to numerical entities; the model is then fine-tuned. The result is an accuracy of 77.09%, a 7.14% improvement over the original NQuAD baseline model, the Numeracy-Enhanced Model (NEMo).

The file "pre-processing.ipynb" is intended for generating data for model training and evaluation.
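The notebook itself is not reproduced here, so as a minimal sketch only: one plausible form of numeral feature enhancement is to wrap each numeral in marker tokens so the encoder can treat numerical entities as salient spans. The regex, marker tokens, and function name below are illustrative assumptions, not the actual steps in "pre-processing.ipynb".

```python
import re

# Hypothetical marker tokens; the real augmentation in pre-processing.ipynb may differ.
NUM_START, NUM_END = "[NUM]", "[/NUM]"

def enhance_numerals(text: str) -> str:
    """Wrap every Arabic numeral (optionally with '.', ',' or '%') in marker
    tokens, making numerical entities explicit to the model."""
    return re.sub(
        r"\d+(?:[.,]\d+)*%?",
        lambda m: f"{NUM_START}{m.group(0)}{NUM_END}",
        text,
    )

print(enhance_numerals("營收成長7.14%，達到1,200萬元"))
# e.g. 營收成長[NUM]7.14%[/NUM]，達到[NUM]1,200[/NUM]萬元
```

If marker tokens like these are used, they would also need to be registered with the tokenizer as special tokens before fine-tuning.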

The file "BERT-NAFE.ipynb" is utilized for fine-tuning the model.
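NQuAD-style questions blank out a numeral in a news sentence and ask the model to pick the correct candidate. One common way to cast this for a BERT encoder is to score each candidate as a sentence pair; the pairing scheme and names below are an illustrative assumption, not the implementation in "BERT-NAFE.ipynb".

```python
def build_choice_inputs(
    passage: str, question_stem: str, options: list[str]
) -> list[tuple[str, str]]:
    """Fill each candidate numeral into the blank of the question stem and
    pair it with the passage; each pair is scored independently by the
    fine-tuned encoder, and the highest-scoring candidate is predicted."""
    return [(passage, question_stem.replace("____", opt)) for opt in options]

pairs = build_choice_inputs(
    "該公司第一季營收成長 7.14%。",
    "營收成長率為 ____。",
    ["7.14%", "77.09%"],
)
print(pairs)
```

Each `(passage, filled_question)` pair can then be tokenized jointly (e.g. with `[CLS] passage [SEP] filled_question [SEP]`) and fed to a classification head during fine-tuning.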
