#4593 Traceback (most recent call last): File "videotrans\process\stt_fun.py", line 479, in faster_whisper File "faster_whis


Traceback (most recent call last):
File "videotrans\process\stt_fun.py", line 479, in faster_whisper
File "faster_whisper\transcribe.py", line 1851, in restore_speech_timestamps
File "faster_whisper\transcribe.py", line 1279, in generate_segments
File "faster_whisper\transcribe.py", line 1590, in add_word_timestamps
File "faster_whisper\transcribe.py", line 1709, in find_alignment
RuntimeError: cuBLAS failed with status CUBLAS_STATUS_NOT_SUPPORTED

Traceback (most recent call last):
File "videotrans\task\_only_one.py", line 50, in run
File "videotrans\task\trans_create.py", line 390, in recogn
File "videotrans\recognition\__init__.py", line 293, in run
File "videotrans\recognition\_base.py", line 143, in run
File "videotrans\recognition\_overall.py", line 33, in _exec
File "videotrans\recognition\_overall.py", line 106, in _faster
File "videotrans\configure\_base.py", line 289, in _new_process
RuntimeError: Traceback (most recent call last):
File "videotrans\process\stt_fun.py", line 479, in faster_whisper
File "faster_whisper\transcribe.py", line 1851, in restore_speech_timestamps
File "faster_whisper\transcribe.py", line 1279, in generate_segments
File "faster_whisper\transcribe.py", line 1590, in add_word_timestamps
File "faster_whisper\transcribe.py", line 1709, in find_alignment
RuntimeError: cuBLAS failed with status CUBLAS_STATUS_NOT_SUPPORTED


=
system:Windows-10-10.0.26200-SP0
version:v3.99
frozen:True
language:zh
root_dir:G:/win-pyvideotrans-v3.99-0508

3 Replies

The CUBLAS_STATUS_NOT_SUPPORTED error usually means the installed NVIDIA graphics driver, CUDA, or cuDNN version is incompatible with faster-whisper. It is especially common on RTX 50-series cards.

Solutions:

  1. Update the NVIDIA graphics driver to the latest version.
  2. Install CUDA 12.8+ and the matching cuDNN 9.x (see the documentation below).
  3. As a temporary workaround, switch to the openai-whisper (local) channel, or uncheck the CUDA acceleration option, to bypass the error.
  4. If you have an RTX 50-series card, see the dedicated documentation below.
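Workaround 3 can be sketched as a try-CUDA-then-CPU fallback. The helper name and its factory argument below are hypothetical (not part of pyvideotrans); the device and compute_type keyword arguments, however, do exist on faster-whisper's WhisperModel.

```python
# Hedged sketch of workaround 3: try the GPU first, fall back to CPU when
# cuBLAS fails. `load_model_with_fallback` is a hypothetical helper; with
# faster-whisper installed you would pass faster_whisper.WhisperModel as
# the factory.
def load_model_with_fallback(model_factory, model_name="large-v3"):
    try:
        # Normal path: GPU with half precision.
        return model_factory(model_name, device="cuda", compute_type="float16")
    except RuntimeError:
        # e.g. "cuBLAS failed with status CUBLAS_STATUS_NOT_SUPPORTED":
        # retry on CPU with int8 quantization instead.
        return model_factory(model_name, device="cpu", compute_type="int8")
```

Note that in the traceback above the crash surfaces inside transcribe() (find_alignment), not during model construction, so real code may need the same guard around the transcription call as well.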

See the related documentation:

How else can it be optimized to run faster?

GPU acceleration cannot work at all until the current CUBLAS_STATUS_NOT_SUPPORTED error is fixed, so first update the driver and CUDA environment as described above. Once that is resolved, the following adjustments can improve speed further:

  1. Use a smaller Whisper model: for example medium or small (large-v3 is the slowest).
  2. Lower the precision: under Tools/Options → Advanced Options → Speech Recognition Parameters, change the CUDA data type to int8 or float16.
  3. Reduce beam_size and best_of: set both to 1 (default 5) to cut the computation substantially.
  4. Turn off context awareness: set "Enable context awareness" to false.
  5. Enable Whisper audio pre-segmentation: find and check this in Advanced Options; splitting long audio reduces how much the model processes at once, lowering VRAM pressure and improving speed.
  6. Disable second-pass recognition: if it is not needed, uncheck the "Re-recognize" option in the top right of the main window to skip an extra transcription pass.
  7. Use faster-whisper rather than openai-whisper: faster-whisper is inherently faster.

Note: these optimizations sacrifice some accuracy, so balance speed against quality according to your needs.
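Steps 1–4 above map directly onto faster-whisper parameters. As a sketch (grouping them into a helper is my own convention; beam_size, best_of, and condition_on_previous_text are real transcribe() parameters, and compute_type belongs to WhisperModel):

```python
# Speed-oriented settings corresponding to steps 1-4 above.
# The helper name is hypothetical; the keyword names are faster-whisper's.
def speed_settings():
    model_kwargs = {
        "model_size_or_path": "medium",  # step 1: smaller than large-v3
        "compute_type": "int8",          # step 2: lower precision
    }
    transcribe_kwargs = {
        "beam_size": 1,                  # step 3: default is 5
        "best_of": 1,                    # step 3
        "condition_on_previous_text": False,  # step 4: no context carry-over
    }
    return model_kwargs, transcribe_kwargs
```

With faster-whisper installed, this would be used roughly as `model = WhisperModel(**model_kwargs, device="cuda")` followed by `model.transcribe(audio_path, **transcribe_kwargs)`.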

See the related documentation:
