add time-spent
MisterXY89 committed Dec 20, 2023
1 parent d1f9789 commit 7ed6dd1
Showing 1 changed file with 16 additions and 0 deletions.
- [Quantitative Results](#quantitative-results)
- [Methodology](#methodology)
- [Results](#results)
- [Time-spent](#time-spent)
- [Conclusion](#conclusion-1)

## 1. Introduction
This is quite counter-intuitive, as the model should be able to answer the questions.
A reason for this could be that the model does not generalize well and overfits to the training data.
Further investigation is needed to find the root cause of this problem.
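One way to make that investigation concrete is to track the gap between training and validation loss per epoch: validation loss rising while training loss keeps falling is the classic overfitting signal. A minimal sketch (the loss values and the helper below are made up for illustration, not from our actual training logs):

```python
# Hypothetical per-epoch losses; in practice these come from the trainer logs.
train_losses = [2.1, 1.4, 0.9, 0.5, 0.3]
val_losses = [2.2, 1.6, 1.5, 1.7, 1.9]

def overfit_epoch(train, val, tol=0.0):
    """Return the first epoch index where validation loss rises
    while training loss keeps falling -- a simple overfitting signal."""
    for i in range(1, len(val)):
        if val[i] > val[i - 1] + tol and train[i] < train[i - 1]:
            return i
    return None

print(overfit_epoch(train_losses, val_losses))  # -> 3
```

If such a divergence shows up early, the usual knobs are more (or more varied) training data, fewer epochs, or stronger regularization.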

## Time-spent

| WOY (week of year) | Days | Task |
| --- | --- | --- |
| 42 - 43 | 1 | Setup |
| 43 - 45 | 3 | Collect and parse data from ICD-11 |
| 45 - 49 | 1 | Train Llama2 using QLoRA v1 |
| 43 - 45 | 1 | Collect additional data |
| 45 - 49 | 1 | Train Llama2 using QLoRA v2 |
| 01 - 03 | 0.5 | Report |
| $\sum$ | 7.5 | |

At 8 hours per day, 7.5 days $\approx$ 60 hours. We spent roughly 7.5 days on this milestone, so we are still (more or less) on track with our initial plan.
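Part of why the two QLoRA training runs fit into single days is that LoRA-style fine-tuning only trains small low-rank adapter matrices while the (quantized) base weights stay frozen. A rough back-of-the-envelope sketch; the rank, the set of adapted projections, and the dimensions are illustrative assumptions, not our actual training configuration:

```python
# Illustrative arithmetic for why (Q)LoRA fine-tuning is cheap: only small
# low-rank adapters are trained, the base weights stay frozen.
hidden = 4096          # Llama2-7B hidden size
n_layers = 32          # Llama2-7B decoder layers
rank = 16              # hypothetical LoRA rank (not our actual config)
adapted_per_layer = 2  # e.g. only the q/v projections (assumption)

# Each adapted hidden x hidden matrix gets two low-rank factors:
# A (hidden x rank) and B (rank x hidden).
lora_params = n_layers * adapted_per_layer * 2 * hidden * rank
base_params = 7_000_000_000

print(f"trainable: {lora_params:,} "
      f"({100 * lora_params / base_params:.3f}% of the base model)")
```

Under these assumptions only around 8.4M of the 7B parameters are trainable, i.e. well under one percent, which is what makes fine-tuning on a single GPU feasible.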

## Conclusion

In this milestone, we have shown that fine-tuning Llama2 for medical diagnostics and advice is feasible and that the model generates quite good responses to questions. There are still some things we want to investigate further, such as the poor performance on the MedMCQA dataset, but overall we are happy with the results and the performance of the fine-tuned model.