Model quantization (using int8s instead of floats for faster inference) is all the rage these days, it seems. The Oracle Devs AlphaZero blog post series writes extensively about how this improved their inference throughput (they claim roughly 4x).
We should experiment with this. I have only minimal familiarity with the technique.
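For anyone who wants to get a feel for the basic idea before we touch the actual model: here's a minimal numpy sketch of symmetric post-training int8 quantization (map each float tensor to int8 with a per-tensor scale, then dequantize at use time). This isn't tied to any particular framework, and the function names are just illustrative; real speedups come from running the matmuls in int8, which frameworks handle internally.

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: map floats onto [-127, 127]."""
    max_abs = np.abs(x).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float tensor from int8 values and the scale."""
    return q.astype(np.float32) * scale

# Round-trip some fake weights and check the quantization error bound:
# with symmetric rounding, the per-element error is at most scale / 2.
rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
max_err = np.abs(w - w_hat).max()
print(q.dtype, max_err <= scale / 2 + 1e-6)
```

The int8 weights take 4x less memory than float32, which is also where the throughput gains tend to come from (memory bandwidth and int8 SIMD/tensor-core paths).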