#3770


ASR Error [faster-whisper (Local)] An error has occurred. There may be insufficient memory or video memory. Model:large-v3-turbo GPU0
Traceback (most recent call last):
File "videotrans\configure\_base.py", line 285, in _new_process
File "concurrent\futures\_base.py", line 458, in result
File "concurrent\futures\_base.py", line 403, in __get_result
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

Traceback (most recent call last):
File "videotrans\configure\_base.py", line 285, in _new_process
File "concurrent\futures\_base.py", line 458, in result
File "concurrent\futures\_base.py", line 403, in __get_result
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "videotrans\task\job.py", line 105, in run
File "videotrans\task\_speech2text.py", line 159, in recogn
File "videotrans\recognition\__init__.py", line 272, in run
File "videotrans\recognition\_base.py", line 143, in run
File "videotrans\recognition\_overall.py", line 33, in _exec
File "videotrans\recognition\_overall.py", line 105, in _faster
File "videotrans\configure\_base.py", line 303, in _new_process
RuntimeError: An error has occurred. There may be insufficient memory or video memory. Model:large-v3-turbo GPU0
Traceback (most recent call last):
File "videotrans\configure\_base.py", line 285, in _new_process
File "concurrent\futures\_base.py", line 458, in result
File "concurrent\futures\_base.py", line 403, in __get_result
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.
TaskCfgSTT(is_cuda=True, uuid='c90b94e55e', cache_folder='C:/Users/nguye/Downloads/win-pyvideotrans-v3.98-323/tmp/40840/c90b94e55e', target_dir='C:/Users/nguye/Downloads/win-pyvideotrans-v3.98-323/output/recogn', source_language=None, source_language_code=None, source_sub=None, source_wav=None, source_wav_output=None, target_language=None, target_language_code=None, target_sub='C:/Users/nguye/Downloads/win-pyvideotrans-v3.98-323/output/recogn/UECPP26-Mar 5 - 2. Demo ví dụ dùng class UAnimInstance và Reparent Blueprint.srt', target_wav=None, target_wav_output=None, name='E:/Kuro.Learn.library/images/MMXIVG8LNF7EQ.info/UECPP26-Mar 5 - 2. Demo ví dụ dùng class UAnimInstance và Reparent Blueprint.mkv', noextname='UECPP26-Mar 5 - 2. Demo ví dụ dùng class UAnimInstance và Reparent Blueprint', basename='UECPP26-Mar 5 - 2. Demo ví dụ dùng class UAnimInstance và Reparent Blueprint.mkv', ext='mkv', dirname='E:/Kuro.Learn.library/images/MMXIVG8LNF7EQ.info', shound_del_name=None, detect_language='vi', recogn_type=0, model_name='large-v3-turbo', shibie_audio='C:/Users/nguye/Downloads/win-pyvideotrans-v3.98-323/tmp/40840/c90b94e55e/UECPP26-Mar 5 - 2. Demo ví dụ dùng class UAnimInstance và Reparent Blueprint-1774284952.076369.wav', remove_noise=False, enable_diariz=False, nums_diariz=0, rephrase=0, fix_punc=True)


system:Windows-10-10.0.26200-SP0
version:v3.98
frozen:True
language:en
root_dir:C:/Users/nguye/Downloads/win-pyvideotrans-v3.98-323

1 Reply

The BrokenProcessPool error means the worker process running faster-whisper was killed abruptly, which most commonly happens when the system runs out of GPU memory (VRAM) or system RAM while loading the large-v3-turbo model with CUDA acceleration.
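To confirm the memory explanation before starting a job, you can check how much VRAM is actually free. A minimal sketch (assumes `nvidia-smi` is on PATH; the `--query-gpu=memory.free` and `--format=csv,noheader,nounits` flags are standard nvidia-smi options; the helper names are mine, not pyvideotrans's):

```python
import subprocess

def parse_free_mib(smi_output, gpu_index=0):
    # smi_output is the text produced by:
    #   nvidia-smi --query-gpu=memory.free --format=csv,noheader,nounits
    # i.e. one line per GPU containing only a number of MiB, e.g. "4096"
    return int(smi_output.splitlines()[gpu_index].strip())

def free_vram_mib(gpu_index=0):
    # Query the driver for free VRAM on one GPU (requires an NVIDIA driver)
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_free_mib(out, gpu_index)
```

If the reported free memory is only a few hundred MiB, loading large-v3-turbo is very likely to crash the worker in exactly this way.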

Solution Steps:

  1. Use a smaller model: In the "Speech Recognition" section, change the model from large-v3-turbo to a smaller one like medium, small, or base. The large-v3-turbo model requires significant VRAM.
  2. Adjust advanced settings (if you must use a large model): Go to Menu -> Tools/Options -> Advanced Options. In the "Speech Recognition Parameters" section, make the following adjustments to reduce memory usage:

    • Set CUDA data type to float16 or int8.
    • Set beam_size to 1.
    • Set best_of to 1.
    • Set Enable context awareness to false.
  3. Check GPU usage: Ensure no other applications are heavily using your GPU. Close unnecessary programs.
  4. Consider the video length: Processing very long videos with large models can exceed memory limits. Try processing a shorter segment first.
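For reference, the GUI settings in steps 1 and 2 correspond roughly to these faster-whisper Python API parameters. This is a sketch, not pyvideotrans's actual code; `audio_path` is a placeholder, while `beam_size`, `best_of`, `condition_on_previous_text`, `device`, and `compute_type` are real faster-whisper/CTranslate2 options:

```python
# Low-memory settings mirroring the steps above (sketch, assumptions noted)
LOW_MEM = dict(
    beam_size=1,                       # step 2: beam_size = 1
    best_of=1,                         # step 2: best_of = 1
    condition_on_previous_text=False,  # step 2: context awareness off
)

def transcribe_low_mem(audio_path, model_size="medium"):
    # Lazy import so this sketch only needs faster-whisper when actually run
    from faster_whisper import WhisperModel
    # step 1: smaller model; "int8_float16" stores weights in int8 on GPU,
    # which substantially reduces VRAM use compared to float32
    model = WhisperModel(model_size, device="cuda", compute_type="int8_float16")
    segments, info = model.transcribe(audio_path, **LOW_MEM)
    return list(segments), info
```

On a card with limited VRAM, dropping from large-v3-turbo at float32 to medium at int8 is usually the difference between a crashed worker and a completed transcription.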

Please refer to the relevant documentation.
