@sstadick The advantage of doing the compression on the GPU would be that the work is probably easier to parallelize there than on a CPU.
The GPU overhead (kernel launch and data transfer) has to be taken into account, though, so offloading the compression to the GPU only pays off above a certain input size.
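As a rough sketch of what I mean (the threshold value and both backend functions here are made-up placeholders, nothing that exists in the crate today):

```rust
// Hypothetical sketch: only offload to the GPU above a size threshold,
// because launch and transfer overhead dominates for small inputs.
const GPU_THRESHOLD: usize = 8 * 1024 * 1024; // e.g. 8 MiB, to be tuned by benchmarking

fn compress(input: &[u8], gpu_available: bool) -> Vec<u8> {
    if gpu_available && input.len() >= GPU_THRESHOLD {
        compress_on_gpu(input)
    } else {
        compress_on_cpu(input)
    }
}

fn compress_on_gpu(input: &[u8]) -> Vec<u8> {
    // Placeholder: would hand the blocks to an OpenCL/CUDA kernel.
    input.to_vec()
}

fn compress_on_cpu(input: &[u8]) -> Vec<u8> {
    // Placeholder: the existing CPU code path.
    input.to_vec()
}
```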
One possibility would be to drop the standard library with #![no_std] and compile the code for the NVIDIA target (nvptx64-nvidia-cuda).
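Very roughly, and untested (nightly-only; a real kernel would index its work via the nvptx thread/block intrinsics, and the function below is just a placeholder):

```rust
// Hypothetical no_std kernel crate for the nvptx64-nvidia-cuda target.
// The "ptx-kernel" ABI currently requires the nightly `abi_ptx` feature.
#![no_std]
#![feature(abi_ptx)]

use core::panic::PanicInfo;

// no_std crates need their own panic handler.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

// Placeholder kernel: a real compression kernel would pick its block via
// the nvptx thread/block intrinsics and write compressed output instead.
#[no_mangle]
pub unsafe extern "ptx-kernel" fn compress_block(input: *const u8, output: *mut u8, len: usize) {
    for i in 0..len {
        *output.add(i) = *input.add(i);
    }
}
```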
In my opinion, the better solution would be to use OpenCL (e.g. via the ocl or opencl3 crates).
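For illustration, a minimal sketch along the lines of the ocl crate's basic example (recent ocl versions); the kernel body is just a stand-in byte transform, not real compression:

```rust
// Minimal sketch with the `ocl` crate, adapted from its basic example.
use ocl::ProQue;

fn gpu_transform(data: &[u8]) -> ocl::Result<Vec<u8>> {
    let src = r#"
        __kernel void transform(__global uchar* buf) {
            size_t i = get_global_id(0);
            buf[i] = buf[i] ^ (uchar)0xFF; // stand-in for real per-byte work
        }
    "#;

    // Context, queue and program for the first available device.
    let pro_que = ProQue::builder().src(src).dims(data.len()).build()?;

    // Copy the input to the device.
    let buffer = pro_que.create_buffer::<u8>()?;
    buffer.write(data).enq()?;

    // Run the kernel over the whole buffer.
    let kernel = pro_que.kernel_builder("transform").arg(&buffer).build()?;
    unsafe { kernel.enq()?; }

    // Read the result back to the host.
    let mut out = vec![0u8; data.len()];
    buffer.read(&mut out).enq()?;
    Ok(out)
}
```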
It might be useful to compress the data on the graphics card, if one is available, once the input is larger than a certain size.