Error. error information is No operator found for memory_efficient_attention_forward with inputs:
    query     : shape=(1536, 1008, 1, 72) (torch.bfloat16)
    key       : shape=(1536, 1008, 1, 72) (torch.bfloat16)
    value     : shape=(1536, 1008, 1, 72) (torch.bfloat16)
    attn_bias : <class 'NoneType'>
    p         : 0.0
decoderF is not supported because:
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
[email protected] is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
tritonflashattF is not supported because:
    requires device with capability > (8, 0) but your GPU has capability (7, 5) (too old)
    bf16 is only supported on A100+ GPUs
    operator wasn't built - see python -m xformers.info for more info
    triton is not available
    requires GPU with sm80 minimum compute capacity, e.g., A100/H100/L4
cutlassF is not supported because:
    bf16 is only supported on A100+ GPUs
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    dtype=torch.bfloat16 (supported: {torch.float32})
    has custom scale
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 72
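For reference, a minimal repro sketch (assumed, not taken from the issue; only the tensor shapes and dtype are copied from the trace above) that triggers the same rejection on a pre-Ampere GPU such as a T4:

import torch
import xformers.ops as xops

# Tensors shaped [batch, seq_len, heads, head_dim] as in the trace, in bf16.
q = torch.randn(1536, 1008, 1, 72, dtype=torch.bfloat16, device="cuda")
k = torch.randn_like(q)
v = torch.randn_like(q)

# On a GPU with compute capability below (8, 0), every bf16 kernel is rejected
# and this raises NotImplementedError with the operator list shown above.
out = xops.memory_efficient_attention(q, k, v, attn_bias=None, p=0.0)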
We only support bf16 for inference. Please use a machine with a suitable GPU, such as an A10 or A100. If you don't have one locally, you can try EasyAnimate on the cloud: https://gallery.pai-ml.com/#/preview/deepLearning/cv/easyanimate. We provide free A10 GPU time for new PAI users; please read the instructions carefully to claim it.
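A minimal guard sketch (an assumption, not part of EasyAnimate's code) that checks this requirement up front and fails with a clearer message on unsupported GPUs:

import torch

# bf16 attention kernels in the trace require compute capability (8, 0) or newer,
# e.g. A10, A100, H100, or L4; the T4 in this issue is (7, 5).
major, minor = torch.cuda.get_device_capability()
if (major, minor) < (8, 0) or not torch.cuda.is_bf16_supported():
    raise RuntimeError(
        f"GPU compute capability {(major, minor)} cannot run bf16 inference; "
        "please use an sm80+ GPU such as an A10 or A100."
    )

weight_dtype = torch.bfloat16  # inference is only supported in bf16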