[BOUNTY - $500] Add support for quantized models with tinygrad #148
(I'm not in any way positioned to implement the quantization support, but I wanted to share some notes with those planning to work on it.)

Background: I thought the tinygrad example already had some quantization support, so how hard could it be to get it over to exo? :) So I copied over the int8 and nf4 code, updated the create_transformer functions, etc., and indeed I can see that it sort of works conceptually (tried on both llama3.1 8B and 70B). But a few things need to be sorted out for this to be usable (and user-friendly).
For documentation purposes: to run llama3.1 70B at NF4, I used 3 hosts with 64GB of GPU RAM between them, and the model only just fits. It looks like NF4 skips many layers, so even the quantized 70B model is still quite large. The resulting throughput was very low, at about 0.5 tokens/sec I think (but I also had to disable JIT on tinygrad because some GPUs were throwing errors, so the performance may not be representative). As a reference point, I can get >7 tokens/sec if I put 3 of these GPUs into 1 machine and run with llama.cpp. CPU inference on the same hardware is ~0.8 tokens/sec. Again, I'm providing these numbers just for reference; a performance discussion is obviously premature at this point.

For the record, the command lines for each node:

- `JIT=0 DEBUG=0 ASSISTED_DISCOVERY=1 GPU_MEM_MB=28000 CUDA=1 python main.py --max-parallel-downloads 1 --disable-tui --node-id 1111 --quantize nf4 --node-port 10001 --discovery-timeout 3600`
- `JIT=0 DEBUG=0 ASSISTED_DISCOVERY=1 CUDA_VISIBLE_DEVICES=0 CUDA=1 python main.py --max-parallel-downloads 1 --disable-tui --node-id 2222 --quantize nf4 --node-port 10002 --discovery-timeout 3600 --broadcast-port 5680`
- `JIT=0 DEBUG=0 ASSISTED_DISCOVERY=1 CUDA_VISIBLE_DEVICES=1 CUDA=1 python main.py --max-parallel-downloads 1 --disable-tui --node-id 3333 --quantize nf4 --node-port 10003 --chatgpt-api-port 7999 --discovery-timeout 3600 --broadcast-port 5679 --listen-port 10003`
- `JIT=0 DEBUG=0 ASSISTED_DISCOVERY=1 GPU_MEM_MB=12000 CUDA=1 python main.py --max-parallel-downloads 1 --disable-tui --node-id 4444 --quantize nf4 --node-port 10004 --discovery-timeout 3600 --broadcast-port 5681`

(ASSISTED_DISCOVERY and GPU_MEM_MB are the modifications made for points 2 and 3 above.)
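For readers who haven't looked at the tinygrad example being referenced, the on-the-fly approach amounts to roughly the following (a minimal NumPy sketch, not the actual example code, and the function names here are made up): quantize each fp16 weight matrix to int8 with a per-output-channel scale at load time, then dequantize when the layer runs.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-output-channel int8 quantization of a weight with shape (out, in).

    Hypothetical helper for illustration only -- not the tinygrad example's code.
    """
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0   # one scale per output row
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale.astype(np.float16)

def dequantize_int8(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    # reconstruct an approximate fp16 weight from the int8 codes and scales
    return q.astype(np.float16) * scale

# toy round trip
w = (np.random.randn(4, 8) * 0.02).astype(np.float16)
q, s = quantize_int8(w)
w_hat = dequantize_int8(q, s)
print(np.abs(w - w_hat).max())  # small quantization error
```

This only helps memory if the dequantization happens inside the matmul kernel rather than materializing a full fp16 copy of the weight on every forward, which is a big part of why a naive port is slow.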
Thanks @barsuna, this is super helpful for implementers. I've added a $200 bounty, as this seems like an important addition to exo. Also added to the bounties sheet: #148
Checking this, good chance to explore tinygrad :)
@varshith15 Any progress on this? Or I can take over.
@AlexCheema I made PR #413, but I haven't tested it with the 70B model yet.
@RashikShahjahan I had done the same thing you did a while ago (b7b911d), but that's not what's expected. The idea is not to quantize the models on the fly; the expectation is to run existing, already-quantized models (MLX, bnb, etc.) on tinygrad, like this: https://github.com/exo-explore/exo/pull/213/files (it is a bit slow), so that there is interoperability between machines. I don't mind working on this together, ping me on Discord if you've got ideas :)
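To make "running already-quantized models" a bit more concrete, here is a minimal sketch (NumPy, with a hypothetical layout and names) of unpacking a group-wise 4-bit weight with per-group scales and zero points; the real packing differs between MLX, bnb/AWQ, GPTQ, etc., so treat this only as an illustration of the data such checkpoints carry.

```python
import numpy as np

def dequantize_q4(packed: np.ndarray, scales: np.ndarray, zeros: np.ndarray,
                  group_size: int = 64) -> np.ndarray:
    """Unpack group-wise 4-bit weights to fp16.

    packed: uint8, two 4-bit codes per byte, shape (out_features, in_features // 2)
    scales/zeros: fp16, shape (out_features, in_features // group_size)
    Hypothetical layout -- real checkpoints (MLX, bnb/AWQ, GPTQ) each pack differently.
    """
    lo = (packed & 0x0F).astype(np.float16)
    hi = (packed >> 4).astype(np.float16)
    q = np.empty((packed.shape[0], packed.shape[1] * 2), dtype=np.float16)
    q[:, 0::2] = lo
    q[:, 1::2] = hi
    # broadcast one (scale, zero) pair over each group of `group_size` columns
    s = np.repeat(scales, group_size, axis=1)
    z = np.repeat(zeros, group_size, axis=1)
    return (q - z) * s

# toy usage: 8 x 128 int4 weights in 2 groups of 64
packed = np.random.randint(0, 256, size=(8, 64), dtype=np.uint8)
scales = np.random.rand(8, 2).astype(np.float16)
zeros = np.full((8, 2), 8, dtype=np.float16)
print(dequantize_q4(packed, scales, zeros).shape)  # (8, 128)
```

The naive way to use such weights is to dequantize like this before every matmul; the goal here is to keep them packed and do the scale/zero-point arithmetic inside the matmul kernel instead.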
@varshith15 Thanks for pointing out! I got tripped up by the tinygrad example. Did you need help with anything in particular? I might just pick another issue now that I have a better understanding of the codebase |
@RashikShahjahan I have been busy and haven't been able to work on it. The specific requirement is to figure out how tinygrad generates mat_mul code, and to see how to get tinygrad to emit an optimized quantized mat_mul for MLX and bnb formats (AWQ, NF4, etc.).
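To phrase that requirement as code: the question is roughly whether an expression like the sketch below, where the weight stays int8 and the dequantization is written lazily, gets compiled by tinygrad into one efficient quantized-matmul kernel or into a dequantize-then-matmul pair. This is only an illustration of the shape of the problem, assuming tinygrad's usual Tensor API; the class and shapes are made up, not exo's or tinygrad's actual code.

```python
from tinygrad import Tensor, dtypes

class Int8LinearSketch:
  def __init__(self, in_features: int, out_features: int):
    # int8 weights plus one fp16 scale per output channel (symmetric quantization)
    self.weight = Tensor.zeros(out_features, in_features, dtype=dtypes.int8)
    self.scale = Tensor.ones(out_features, dtype=dtypes.float16)

  def __call__(self, x: Tensor) -> Tensor:
    # The cast/mul below are lazy; whether they end up fused into a single
    # quantized-matmul kernel (rather than materializing a dequantized copy
    # of the weight) is exactly the codegen question raised above.
    w = self.weight.cast(dtypes.float16).T * self.scale
    return x @ w
```

The same question applies to 4-bit formats, where the bit unpacking makes it even harder to get a good kernel out of the scheduler.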
Bumping this up to $500 - this would be a great thing to have.
@AlexCheema Just to double check, are you saying that you can have a cluster where one machine runs a model in 4-bit with MLX and another machine runs fp16 with tinygrad, and inference works? I thought I read in the code that if one machine in the cluster picks the tinygrad engine, then the whole cluster switches to the tinygrad engine. I've had trouble finding a working test config for my cluster that mixes Linux with Nvidia and a Mac with MLX. (I would love to run Qwen2.5 72B or Qwen2.5-Coder 32B distributed, but in https://github.com/exo-explore/exo/blob/main/exo/models.py it looks like Qwen is only set up for MLX, and back when I was messing with 72B I couldn't figure out a way to get tinygrad to work with it.)
Hi, I started working on this; my WIP PR is #630. As @varshith15 pointed out, naively dequantizing the weights before each forward is not performant, but this is not an issue with tinygrad's kernels. Inference is slow because each forward adds O(2n^2) extra muls and adds, since we multiply each weight by its scale and add the biases (zero_points). This paper shows how to:
My PR implements (1); I'll look into (2) as well.
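For anyone following along, here is one well-known rearrangement (a NumPy sketch; not necessarily the exact method of the paper referenced above) showing why the scale/zero-point work doesn't have to cost O(n^2) per forward: with per-output-channel affine quantization, the scale multiply and zero-point subtraction can be folded to after the matmul, leaving only O(n) extra work.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 128, 64

# fake per-output-channel affine int4 quantization: W ~= (q - z) * s
q = rng.integers(0, 16, size=(n_in, n_out)).astype(np.float32)
s = rng.random(n_out, dtype=np.float32) * 0.1
z = np.full(n_out, 8.0, dtype=np.float32)
x = rng.random(n_in, dtype=np.float32)

# naive: dequantize the whole weight, then matmul -> O(n_in * n_out) extra muls/adds
y_naive = x @ ((q - z) * s)

# folded: matmul on the raw codes, then fix up with scale and zero point
# -> only O(n_in + n_out) extra work per forward
y_folded = ((x @ q) - x.sum() * z) * s

assert np.allclose(y_naive, y_folded, atol=1e-3)
```

Group-wise schemes need per-group partial sums for the same trick, but the idea is the same.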