Error:
RuntimeError: FlashAttention only supports Ampere GPUs or newer.
Cause:
The machine's GPU is a Tesla V100 (Volta architecture), which FlashAttention does not support; FlashAttention-2 requires Ampere or newer GPUs.
Is there a solution? Yes.
Option 1: The best fix is to move to a machine with an A100, H100, or newer GPU.
Option 2: Turn off FlashAttention-2, i.e. change use_flash_attention_2=True to use_flash_attention_2=False (see the sketch below).
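A minimal sketch of Option 2, assuming the model is loaded with the Hugging Face transformers from_pretrained API; the model name here is a placeholder, not from the original post:

```python
import torch
from transformers import AutoModelForCausalLM

# Placeholder checkpoint name; substitute your own model.
model_name = "your-org/your-model"

# On pre-Ampere GPUs such as the Tesla V100, disable FlashAttention-2.
# use_flash_attention_2 is the older kwarg; recent transformers versions
# express the same choice via attn_implementation="sdpa" or "eager".
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    use_flash_attention_2=False,
)
```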
Supporting documentation:
FlashAttention-2 currently supports:
Ampere, Ada, or Hopper GPUs (e.g., A100, RTX 3090, RTX 4090, H100). Support for Turing GPUs (T4, RTX 2080) is coming soon, please use FlashAttention for Turing GPUs for now.
Datatype fp16 and bf16 (bf16 requires Ampere, Ada, or Hopper GPUs).
All head dimensions up to 256. Head dim > 192 backward requires A100/A800 or H100/H800.
For details, see: /Dao-AILab/flash-attention
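As a quick way to check whether the local GPU meets the Ampere-or-newer requirement, a minimal sketch using PyTorch's compute capability query (assuming CUDA is available on device 0):

```python
import torch

# FlashAttention-2 needs compute capability >= 8.0 (Ampere, Ada, or Hopper).
# A Tesla V100 reports (7, 0), so this would choose the non-flash path.
major, minor = torch.cuda.get_device_capability(0)
use_flash = major >= 8
print(f"compute capability {major}.{minor} -> use_flash_attention_2={use_flash}")
```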