#4377 TaskCfgVTT(is_cuda=True, uuid='11339c2d22', cache_folder='C:/398/tmp/18108/11339c2d22', …)


ASR Error [faster-whisper (Local)] An error has occurred. There may be insufficient memory or video memory. Model:medium GPU0
Traceback (most recent call last):
File "videotrans\configure\_base.py", line 285, in _new_process
File "concurrent\futures\_base.py", line 458, in result
File "concurrent\futures\_base.py", line 403, in __get_result
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "videotrans\task\job.py", line 105, in run
File "videotrans\task\trans_create.py", line 360, in recogn
File "videotrans\recognition\__init__.py", line 272, in run
File "videotrans\recognition\_base.py", line 143, in run
File "videotrans\recognition\_overall.py", line 33, in _exec
File "videotrans\recognition\_overall.py", line 105, in _faster
File "videotrans\configure\_base.py", line 303, in _new_process
RuntimeError: An error has occurred. There may be insufficient memory or video memory. Model:medium GPU0
Traceback (most recent call last):
File "videotrans\configure\_base.py", line 285, in _new_process
File "concurrent\futures\_base.py", line 458, in result
File "concurrent\futures\_base.py", line 403, in __get_result
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
TaskCfgVTT(is_cuda=True, uuid='11339c2d22', cache_folder='C:/398/tmp/18108/11339c2d22', target_dir='C:/Users/trise/Downloads/_video_out/0428-mp4', source_language='Simplified Chinese', source_language_code='zh-cn', source_sub='C:/Users/trise/Downloads/_video_out/0428-mp4/zh-cn.srt', source_wav='C:/398/tmp/18108/11339c2d22/zh-cn.wav', source_wav_output='C:/Users/trise/Downloads/_video_out/0428-mp4/zh-cn.m4a', target_language='Korean', target_language_code='ko', target_sub='C:/Users/trise/Downloads/_video_out/0428-mp4/ko.srt', target_wav='C:/398/tmp/18108/11339c2d22/target.wav', target_wav_output='C:/Users/trise/Downloads/_video_out/0428-mp4/ko.m4a', name='C:/Users/trise/Downloads/0428.mp4', noextname='0428', basename='0428.mp4', ext='mp4', dirname='C:/Users/trise/Downloads', shound_del_name=None, translate_type=3, tts_type=0, volume='+0%', pitch='+0Hz', voice_rate='+0%', voice_role='SunHi(Female/KR)', voice_autorate=True, video_autorate=False, remove_silent_mid=False, align_sub_audio=True, detect_language='zh-cn', recogn_type=0, model_name='medium', shibie_audio=None, remove_noise=False, enable_diariz=False, nums_diariz=0, rephrase=0, fix_punc=False, subtitle_language=None, app_mode='biaozhun', subtitles='', targetdir_mp4='C:/Users/trise/Downloads/_video_out/0428-mp4/0428.mp4', novoice_mp4='C:/398/tmp/18108/11339c2d22/novoice.mp4', is_separate=False, instrument=None, vocal=None, back_audio='', clear_cache=True, background_music=None, subtitle_type=1, only_out_mp4=False, recogn2pass=True, output_srt=0, copysrt_rawvideo=False)


system:Windows-10-10.0.26200-SP0
version:v3.98
frozen:True
language:en
root_dir:C:/398

1 Reply

The error is caused by insufficient GPU memory (VRAM) when running the medium model with CUDA enabled, and is made worse by the enabled "二次识别" (second-pass recognition, recogn2pass) option, which roughly doubles memory usage.
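For context, the BrokenProcessPool in the log above is what Python's concurrent.futures raises whenever a pooled worker process dies unexpectedly, for example when the OS kills it after it exhausts memory. A minimal, self-contained sketch (the os._exit call simply stands in for an out-of-memory kill):

```python
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def crash():
    # Stand-in for a worker killed by the OS (e.g. after exhausting RAM/VRAM).
    os._exit(1)

def run_job():
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(crash)
        try:
            future.result()
        except BrokenProcessPool as exc:
            # Same exception type that appears in the log above.
            return type(exc).__name__
    return "no error"

if __name__ == "__main__":
    print(run_job())  # BrokenProcessPool
```

The parent process cannot see why the child died; it only observes the broken pipe, which is why the app reports a generic "may be insufficient memory" message rather than a precise cause.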

  1. Disable "二次识别" (second-pass recognition)
    On the main interface, uncheck the "二次识别" checkbox. This option runs ASR a second time on the generated audio, which requires extra VRAM.
  2. Use a smaller model
    Change the recognition model from medium to small or base. The medium model needs more VRAM than many mid-range GPUs offer.
    How: Select faster-whisper (Local) and choose small or base from the dropdown.
  3. Reduce GPU memory settings
    Go to Menu → Tools/Options → Advanced Options, find the faster/openai 语音识别调整 (speech-recognition tuning) section, and change:

    • beam_size to 1
    • best_of to 1
    • CUDA数据类型 (CUDA data type) to int8
    • 启用上下文感知 (context awareness) to false
  4. If the problem persists
    Try an even smaller model like tiny (accuracy will drop) or disable CUDA entirely (runs on CPU, much slower but avoids VRAM limits). Ensure no other applications are using the GPU.
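Steps 2 and 3 above can also be expressed directly against faster-whisper's Python API, which the faster-whisper (Local) backend wraps. A sketch of a low-VRAM configuration; the audio path is taken from the log above, so substitute your own file (requires a CUDA GPU and downloads the model on first run):

```python
from faster_whisper import WhisperModel

# "small" with int8 needs far less VRAM than "medium" in float16
# (roughly 1-2 GB vs ~5 GB).
model = WhisperModel("small", device="cuda", compute_type="int8")

segments, info = model.transcribe(
    "zh-cn.wav",        # example path from the log; use your own audio
    language="zh",
    beam_size=1,        # mirrors the beam_size=1 advice above
    best_of=1,
    condition_on_previous_text=False,  # mirrors disabling 启用上下文感知
)
for seg in segments:
    print(f"[{seg.start:.2f} -> {seg.end:.2f}] {seg.text}")
```

If this standalone script runs without errors, the same model/compute-type combination should also work inside the app's settings.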

See the relevant documentation:
