[Bug] Version incompatibility when fine-tuning Whisper with Transformers

Time: 2024-06-10 07:18:29

The error


ImportError                               Traceback (most recent call last)
<ipython-input-20-6958d7eed552> in ()
        from transformers import Seq2SeqTrainingArguments
        training_args = Seq2SeqTrainingArguments(
                output_dir="./whisper-small-hi",  # change to a repo name of your choice
                per_device_train_batch_size=16,

/usr/local/lib/python3.10/dist-packages/transformers/training_args.py in _setup_devices(self)
        if not is_sagemaker_mp_enabled():
                if not is_accelerate_available():
                        raise ImportError(
                                f"Using the `Trainer` with `PyTorch` requires `accelerate>={ACCELERATE_MIN_VERSION}`: "
                                "Please run `pip install transformers[torch]` or `pip install accelerate -U`"


ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.21.0`: Please run `pip install transformers[torch]` or `pip install accelerate -U`

Cause analysis

It looks like the accelerate dependency either isn't installed, or its version doesn't match what transformers expects.

I first followed the suggestion in the error message and ran

pip install transformers[torch]

pip install accelerate -U

but neither of those two commands helped.
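A quick way to see what is going on (just a diagnostic sketch, not something from the original notebook) is to compare the version pip put on disk with the version the running kernel has already imported:

import importlib.metadata as md
import accelerate

# Version pip installed on disk vs. version already imported in this kernel.
# If the two disagree, the upgrade won't take effect until the runtime restarts.
print("on disk :", md.version("accelerate"))
print("imported:", accelerate.__version__)

If the two numbers differ, that explains why installing in the middle of a session seems to do nothing; see the restart note at the end of this post.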

Let's look at what each of the packages mentioned in the error actually does:

transformers

accelerate

PyTorch

The result of the dependency analysis:

accelerate manages distributed and mixed-precision training for transformers models, and both of them rely on PyTorch to carry out the underlying operations.
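The error message already spells out the constraint: the Trainer needs accelerate >= 0.21.0 (the ACCELERATE_MIN_VERSION in the traceback). A small sketch that reproduces this check by hand, using the packaging library:

from importlib.metadata import version
from packaging.version import parse

# Re-check the constraint stated in the error message: accelerate >= 0.21.0.
installed = parse(version("accelerate"))
required = parse("0.21.0")  # ACCELERATE_MIN_VERSION shown in the traceback
print(installed, ">=", required, "->", installed >= required)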

OK, let's work through it step by step.

Attempted fixes

I made several attempts, checking the installed versions each time; the tuples below are (accelerate, transformers, torch):


import transformers
import accelerate
import torch

accelerate.__version__, transformers.__version__, torch.__version__
('0.30.1', '4.42.0.dev0', '2.3.0+cu121')
('0.30.1', '4.41.2', '2.3.0+cu121')
('0.21.0', '4.41.2', '2.3.0+cu121')

After several rounds of trying, only the last combination of version numbers got past this step.

A new problem

The very next step then threw a new error:

from transformers import Seq2SeqTrainer
 
trainer = Seq2SeqTrainer(
    args=training_args,
    model=model,
    train_dataset=common_voice["train"],
    eval_dataset=common_voice["test"],
    data_collator=data_collator,
    compute_metrics=compute_metrics,
    tokenizer=processor.feature_extractor,
)
TypeError                                 Traceback (most recent call last)
 in <cell line: 3>()
      1 from transformers import Seq2SeqTrainer
      2 
----> 3 trainer = Seq2SeqTrainer(
      4     args=training_args,
      5     model=model,
/usr/local/lib/python3.10/dist-packages/transformers/trainer.py in create_accelerator_and_postprocess(self)
   4533 
   4534         # create accelerator object
-> 4535         self.accelerator = Accelerator(**args)
   4536         # some Trainer classes need to use `gather` instead of `gather_for_metrics`, thus we store a flag
   4537         self.gather_function = self.accelerator.gather_for_metrics

TypeError: Accelerator.__init__() got an unexpected keyword argument 'use_seedable_sampler'
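This traceback means transformers is passing a keyword argument, use_seedable_sampler, that this version of accelerate's Accelerator doesn't know about; in other words, transformers 4.41.2 is too new for accelerate 0.21.0. A quick diagnostic sketch to confirm which side is at fault:

import inspect
from accelerate import Accelerator

# Does the installed accelerate accept the keyword that the Trainer passes?
print("use_seedable_sampler" in inspect.signature(Accelerator.__init__).parameters)

On accelerate 0.21.0 this prints False, which matches the TypeError above.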

Final working versions

I then installed the following new dependencies:

!pip install torch==2.2.0
!pip install accelerate==0.27.2

('0.27.2', '4.41.2', '2.3.0+cu121')

It finally worked! (Note that the version check still reports torch 2.3.0+cu121, so the decisive change appears to have been upgrading accelerate to 0.27.2.)

Note: every time you try a new set of versions, you have to restart the whole runtime for the new versions to take effect, so it's best to pin the versions right at the top, where the dependencies are installed.
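For example, something like this at the very top of the notebook (a sketch based on the combination that finally worked for me; torch stayed at the runtime's default 2.3.0+cu121):

!pip install -q transformers==4.41.2 accelerate==0.27.2
# Restart the runtime after this cell so the pinned versions are the ones actually loaded.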

Ugh, this stuff is really hard. Version issues shouldn't be such a big deal, but as a newbie you just keep running into problems!