vulkan: initial support for IQ1_S and IQ1_M quantizations #11528
base: master
Conversation
Works with coopmat2 enabled! Perf is a bit low, but I'll fix it after it's merged.
@@ -217,7 +217,7 @@ void quantize(uint dst_idx, uint src_idx)
 #endif

 void main() {
-#if defined(DATA_A_IQ2_XXS) || defined(DATA_A_IQ2_XS) || defined(DATA_A_IQ2_S) || defined(DATA_A_IQ3_XXS) || defined(DATA_A_IQ3_S) || defined(DATA_A_IQ4_NL)
+#if defined(DATA_A_IQ1_S) || defined(DATA_A_IQ1_M) || defined(DATA_A_IQ2_XXS) || defined(DATA_A_IQ2_XS) || defined(DATA_A_IQ2_S) || defined(DATA_A_IQ3_XXS) || defined(DATA_A_IQ3_S) || defined(DATA_A_IQ4_XS) || defined(DATA_A_IQ4_NL)
     init_iq_shmem(gl_WorkGroupSize);
It would be nice to define a "NEED_INIT_IQ_SHMEM" macro in each place init_iq_shmem is defined, and then all the #ifs can be simple.
At this point this is a good idea, yeah.
I added MMV kernels for the new quants. Some performance figures on Radeon 780M (~70 GB/s memory bandwidth): the LLVM/AMDGPU compiler does not like the generic code at all and behaves better with the specialized shader. Before MMV kernels:
After:
See branch https://github.com/remyoudompheng/llama.cpp/tree/vulkan-iq-mmv for MMV kernels for IQ2 and IQ3 quants
Very cool! I tested this and it's functionally correct and perf is better on RTX 4070. I dug into the perf a bit and realized that a significant amount of time is spent in init_iq_shmem since the LUT is so large. I think I had suggested this before, but this more unrollable loop code helps:
Even then, it's still expensive and that suggests we should be doing more work per workgroup to amortize the cost. The large shared memory allocation may also limit the number of workgroups that can run concurrently, which argues for using larger workgroups. I verified that increasing NUM_ROWS and workgroup size helps:
You don't need to do all of this at once. I think the unrollable-loop change is simple and should help everywhere. For figuring out the best values for all the knobs we'll need to get more exhaustive data from different HW and also test with real models.
Rebased and added shmem sizes following #11502
@@ -1,6 +1,9 @@
 #if !defined(DATA_A_F32) && !defined(DATA_A_F16)
 #extension GL_EXT_shader_explicit_arithmetic_types_int8 : require
 #endif
+#if defined(DATA_A_IQ1_M)
+#extension GL_EXT_shader_explicit_arithmetic_types_float16 : require
+#endif
This extension shouldn't be used due to hardware restrictions, otherwise IQ1_M is not going to work on hardware without 16-bit support.
vec2 get_dm(uint ib, uint a_offset) {
    const uint16_t[4] scales = data_a[a_offset + ib].scales;
    const u16vec4 s = u16vec4(scales[0], scales[1], scales[2], scales[3]) >> 12;
    const float d = float(uint16BitsToHalf(s.x | (s.y << 4) | (s.z << 8) | (s.w << 12)));
Maybe you can work around the 16-bit requirement here with vec2 unpackHalf2x16(uint v)?
This pull request implements basic support for the remaining I-quants (IQ1_S and IQ1_M).
Performance is not great but similar to IQ2 quantizations.
To avoid spamming shared memory, the IQ1S grid has been compressed to 2 bits per value (4kB shmem size).
Pull request is draft waiting for #11501 and #11502 to be merged