In this project we evaluated four encoder-decoder models on question answering over the CoQA dataset. We studied the effect of model architecture and configuration, the effect of providing the dialogue history to the models, possible causes of errors and shortcomings, and the patterns of mistakes the models made.
Dundalia/Conversational_QA_CoQA
About
Development of two Encoder-Decoder models based on the pretrained distilroberta and bert-tiny, fine-tuned on CoQA for Question Answering
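The setup described above can be sketched with Hugging Face transformers. This is a minimal illustration, not the repository's actual code: it assumes `prajjwal1/bert-tiny` as the bert-tiny checkpoint (the same call works with `distilroberta-base`), and assumes dialogue history is handled by prepending previous question-answer turns to the encoder input.

```python
# Hedged sketch (assumptions noted above): tie a pretrained encoder
# checkpoint into a seq2seq encoder-decoder model and format a
# CoQA-style input that includes the dialogue history.
from transformers import AutoTokenizer, EncoderDecoderModel


def build_qa_model(checkpoint: str = "prajjwal1/bert-tiny"):
    """Build an encoder-decoder model from one pretrained checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = EncoderDecoderModel.from_encoder_decoder_pretrained(
        checkpoint, checkpoint
    )
    # Answers are generated token by token, so the decoder needs
    # explicit start and padding token ids.
    model.config.decoder_start_token_id = tokenizer.cls_token_id
    model.config.pad_token_id = tokenizer.pad_token_id
    return tokenizer, model


def format_input(passage, question, history):
    """Prepend previous (question, answer) turns to condition on history."""
    turns = " ".join(f"{q} {a}" for q, a in history)
    return f"{turns} {question} {passage}".strip()


tokenizer, model = build_qa_model()
text = format_input(
    passage="CoQA is a conversational question answering dataset.",
    question="What is it for?",
    history=[("What is CoQA?", "A dataset.")],
)
encoded = tokenizer(text, return_tensors="pt")
print(model.config.is_encoder_decoder)  # True for the tied config
```

With this construction the decoder reuses the encoder's pretrained weights and gains randomly initialized cross-attention layers, which is why fine-tuning on CoQA is needed before the model produces useful answers.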