# NLP-F22

Natural Language Processing, UT Austin, Fall 2022

The final project uses Transformer models to:

  • Adapt the CheckList behavioral testing approach to an ELECTRA model trained on the industry-standard Stanford Question Answering Dataset (SQuAD), as sketched below
  • Train an ELECTRA SQuAD model on adversarial datasets to evaluate improvements across question/answer categories
  • Generate and train with hand-tuned training sets to improve performance on specific categories (such as a specialized model for a given task)
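
As a minimal sketch of the first item, a SQuAD-trained ELECTRA model can be queried through the Hugging Face `pipeline` API. The checkpoint name below is a public example from the Hub and is not necessarily the one used in this project:

```python
from transformers import pipeline

# Any ELECTRA checkpoint fine-tuned on SQuAD works here.
# "deepset/electra-base-squad2" is a public Hub checkpoint used
# only as an illustration; the project's checkpoint may differ.
qa = pipeline("question-answering", model="deepset/electra-base-squad2")

result = qa(
    question="Where was the final project developed?",
    context="The final project was developed at UT Austin in Fall 2022.",
)
print(result["answer"], result["score"])
```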

Models were trained and code was run on Google Colab Pro.

PyTorch models and the Hugging Face `datasets` and `transformers` libraries were used.
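
For example, SQuAD and a publicly available adversarial QA dataset can be pulled directly from the Hub with `datasets`. These are standard Hub identifiers; the exact adversarial training sets used in the project may have been different or hand-built:

```python
from datasets import load_dataset

# Standard SQuAD v1.1 from the Hugging Face Hub.
squad = load_dataset("squad")

# One public adversarial QA dataset, shown as an illustration only.
adversarial = load_dataset("adversarial_qa", "adversarialQA")

print(squad["train"][0]["question"])
print(adversarial["train"].num_rows)
```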

An 11-page PDF report is included in the final project folder.