#3094 OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found

Posted: 8 hours ago

Traceback (most recent call last):
  File "videotrans\process\stt_fun.py", line 346, in pipe_asr
  File "transformers\pipelines\__init__.py", line 1027, in pipeline
    framework, model = infer_framework_load_model(
  File "transformers\pipelines\base.py", line 333, in infer_framework_load_model
    raise ValueError(
ValueError: Could not load model D:/pyVideoTrans/V3.96/models/models--openai--whisper-large-v2 with any of the following classes: (, , ). See the original errors:

while loading with AutoModelForCTC, an error is thrown:
Traceback (most recent call last):
  File "transformers\pipelines\base.py", line 293, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "transformers\models\auto\auto_factory.py", line 607, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class for this kind of AutoModel: AutoModelForCTC.
Model type should be one of Data2VecAudioConfig, HubertConfig, MCTCTConfig, ParakeetCTCConfig, SEWConfig, SEWDConfig, UniSpeechConfig, UniSpeechSatConfig, Wav2Vec2Config, Wav2Vec2BertConfig, Wav2Vec2ConformerConfig, WavLMConfig.

while loading with AutoModelForSpeechSeq2Seq, an error is thrown:
Traceback (most recent call last):
  File "transformers\pipelines\base.py", line 293, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "transformers\models\auto\auto_factory.py", line 604, in from_pretrained
    return model_class.from_pretrained(
  File "transformers\modeling_utils.py", line 277, in _wrapper
    return func(*args, **kwargs)
  File "transformers\modeling_utils.py", line 4900, in from_pretrained
    checkpoint_files, sharded_metadata = _get_resolved_checkpoint_files(
  File "transformers\modeling_utils.py", line 989, in _get_resolved_checkpoint_files
    raise OSError(
OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory D:/pyVideoTrans/V3.96/models/models--openai--whisper-large-v2.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "transformers\pipelines\base.py", line 311, in infer_framework_load_model
    model = model_class.from_pretrained(model, **fp32_kwargs)
  File "transformers\models\auto\auto_factory.py", line 604, in from_pretrained
    return model_class.from_pretrained(
  File "transformers\modeling_utils.py", line 277, in _wrapper
    return func(*args, **kwargs)
  File "transformers\modeling_utils.py", line 4900, in from_pretrained
    checkpoint_files, sharded_metadata = _get_resolved_checkpoint_files(
  File "transformers\modeling_utils.py", line 989, in _get_resolved_checkpoint_files
    raise OSError(
OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory D:/pyVideoTrans/V3.96/models/models--openai--whisper-large-v2.

while loading with WhisperForConditionalGeneration, an error is thrown:
Traceback (most recent call last):
  File "transformers\pipelines\base.py", line 293, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "transformers\modeling_utils.py", line 277, in _wrapper
    return func(*args, **kwargs)
  File "transformers\modeling_utils.py", line 4900, in from_pretrained
    checkpoint_files, sharded_metadata = _get_resolved_checkpoint_files(
  File "transformers\modeling_utils.py", line 989, in _get_resolved_checkpoint_files
    raise OSError(
OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory D:/pyVideoTrans/V3.96/models/models--openai--whisper-large-v2.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "transformers\pipelines\base.py", line 311, in infer_framework_load_model
    model = model_class.from_pretrained(model, **fp32_kwargs)
  File "transformers\modeling_utils.py", line 277, in _wrapper
    return func(*args, **kwargs)
  File "transformers\modeling_utils.py", line 4900, in from_pretrained
    checkpoint_files, sharded_metadata = _get_resolved_checkpoint_files(
  File "transformers\modeling_utils.py", line 989, in _get_resolved_checkpoint_files
    raise OSError(
OSError: Error no file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory D:/pyVideoTrans/V3.96/models/models--openai--whisper-large-v2.


system:Windows-10-10.0.22631-SP0
version:v3.96
frozen:True
language:zh
root_dir:D:/pyVideoTrans/V3.96

1 Reply

This is caused by incomplete or missing openai-whisper model files. The software cannot find a valid model file (such as pytorch_model.bin or model.safetensors) in the directory D:/pyVideoTrans/V3.96/models/models--openai--whisper-large-v2.

  1. Manually download and place the model files

    • HuggingFace may not be directly reachable from mainland China, so you may need to download the model files manually.
    • Go to the model page (e.g. https://huggingface.co/openai/whisper-large-v2) or a mirror site and download the core files: pytorch_model.bin (or model.safetensors), config.json, tokenizer.json, etc.
    • Place these files in the D:/pyVideoTrans/V3.96/models/models--openai--whisper-large-v2 directory. Note: the folder name must be exactly models--openai--whisper-large-v2; do not rename it.
  2. (Recommended) Switch to the faster-whisper model

    • faster-whisper is the recommended free local recognition model, and it runs faster.
    • In the software's "Speech Recognition Channel" settings, switch the channel from openai-whisper to faster-whisper.
    • Likewise, if you run into network problems, manually download the faster-whisper model files (e.g. large-v3) into the models/ directory; the folder naming rule is similar (e.g. models--Systran--faster-whisper-large-v3).
  3. Configure a network proxy

    • If you want the software to download the model automatically, set a working proxy address under "Settings" -> "Network Proxy" in the software menu so it can reach HuggingFace.
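Step 1 can be sanity-checked programmatically. Below is a minimal sketch, assuming the huggingface_hub package is installed and the model directory matches the path from the log above: it verifies that the directory contains one of the weight files transformers looks for, and only then falls back to a full snapshot download.

```python
import os

# Filenames that transformers accepts as model weights (from the OSError above).
WEIGHT_FILES = (
    "pytorch_model.bin",
    "model.safetensors",
    "tf_model.h5",
    "model.ckpt.index",
    "flax_model.msgpack",
)

def has_model_weights(model_dir: str) -> bool:
    """Return True if model_dir contains at least one recognized weights file."""
    return any(os.path.isfile(os.path.join(model_dir, f)) for f in WEIGHT_FILES)

if __name__ == "__main__":
    model_dir = r"D:/pyVideoTrans/V3.96/models/models--openai--whisper-large-v2"
    if not has_model_weights(model_dir):
        # Fetch the full snapshot with huggingface_hub. If huggingface.co is
        # unreachable, set the HF_ENDPOINT environment variable to a mirror first.
        from huggingface_hub import snapshot_download
        snapshot_download("openai/whisper-large-v2", local_dir=model_dir)
```

If `has_model_weights` returns False after a manual download, the files most likely landed in a subfolder instead of directly in the model directory.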

See the related documentation:
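For step 2, transcription with faster-whisper looks roughly like the sketch below. Assumptions: the faster-whisper package is installed, the model folder follows the naming rule from the reply above, and the helper name `transcribe_with_faster_whisper` is hypothetical (pyVideoTrans's own pipe_asr internals are not shown here).

```python
def transcribe_with_faster_whisper(model_dir: str, audio_path: str, lang: str = "zh"):
    """Transcribe audio_path with a local faster-whisper model stored in model_dir."""
    from faster_whisper import WhisperModel  # lazy import: pip install faster-whisper

    # compute_type="int8" keeps memory usage low on CPU; use "float16" on GPU.
    model = WhisperModel(model_dir, device="cpu", compute_type="int8")
    segments, _info = model.transcribe(audio_path, language=lang)
    return [(seg.start, seg.end, seg.text) for seg in segments]
```

Example usage: `transcribe_with_faster_whisper("models/models--Systran--faster-whisper-large-v3", "audio.wav")`.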
