How are FP16 embeddings converted to FP32 tensors in the GPT-2 example? #652
In the GPT-2 example, the token embeddings are looked up with `ggml_get_rows` on `model.wte`. But isn't it wrong if we have weights in FP16? The type of this tensor is checkpoint dependent, so it can be FP16, yet the rest of the graph works on FP32 activations. So how does the conversion from FP16 to FP32 happen? I can't see it anywhere in the example code. I'm asking this because I'm trying to debug my own model, and I found that the output of `ggml_get_rows` is already FP32 even though the weights are FP16, and I can't tell where that conversion is done.
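To illustrate, here is a minimal sketch of what I mean (the tensor sizes are arbitrary placeholders, not the real GPT-2 dimensions): even when the weight tensor is created as `GGML_TYPE_F16`, the node returned by `ggml_get_rows` is typed `GGML_TYPE_F32`.

```c
// Minimal sketch: ggml_get_rows on an F16 matrix yields an F32 result tensor.
// Sizes (8 columns, 16 rows, 4 ids) are placeholders for n_embd/n_vocab/n_tokens.
#include <stdio.h>
#include "ggml.h"

int main(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16*1024*1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // FP16 "embedding" matrix: ne0 = columns (n_embd), ne1 = rows (n_vocab)
    struct ggml_tensor * wte = ggml_new_tensor_2d(ctx, GGML_TYPE_F16, 8, 16);
    // token ids to look up
    struct ggml_tensor * ids = ggml_new_tensor_1d(ctx, GGML_TYPE_I32, 4);

    // This only builds the graph node; the actual FP16->FP32 widening runs
    // later, at graph compute time, inside the get_rows kernel.
    struct ggml_tensor * embd = ggml_get_rows(ctx, wte, ids);

    printf("wte  type: %d (GGML_TYPE_F16 = %d)\n", wte->type,  GGML_TYPE_F16);
    printf("embd type: %d (GGML_TYPE_F32 = %d)\n", embd->type, GGML_TYPE_F32);

    ggml_free(ctx);
    return 0;
}
```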
Replies: 1 comment
Here is where it happens: https://github.com/ggerganov/ggml/blob/6b846cbde81ae02cd3e363311180ae706091933e/src/ggml.c#L10447
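That link points into the forward implementation of `get_rows`. As a rough illustration (not the literal ggml source), the FP16 path boils down to a gather loop that widens each element as it copies the row, using ggml's public FP16-to-FP32 helper:

```c
// Sketch of the idea behind the FP16 get_rows path: for each requested row
// index, read the FP16 row from the source matrix and widen every element to
// FP32 while writing it into the FP32 destination.
#include "ggml.h"

static void get_rows_f16_sketch(
        const ggml_fp16_t * src0, // FP16 weight matrix, ne00 columns per row
        const int32_t     * rows, // row indices (e.g. token ids), nr of them
        float             * dst,  // FP32 output, nr x ne00
        int ne00, int nr) {
    for (int i = 0; i < nr; ++i) {
        const ggml_fp16_t * src_row = src0 + (size_t) rows[i] * ne00;
        for (int j = 0; j < ne00; ++j) {
            // ggml_fp16_to_fp32 is ggml's own conversion helper; internally
            // the library uses the equivalent GGML_FP16_TO_FP32 macro.
            dst[(size_t) i * ne00 + j] = ggml_fp16_to_fp32(src_row[j]);
        }
    }
}
```

So nothing in the example graph needs an explicit cast: the FP32 output you observed is produced by the `get_rows` op itself, which always writes an FP32 result regardless of the weight type.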