#3660 concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was

Posted: 1 day ago

An error has occurred. There may be insufficient memory or video memory. [GPU0]
Traceback (most recent call last):
  File "/media/charlie/Tub8/Vids/pyvideotrans/videotrans/configure/_base.py", line 285, in _new_process
    _rs = future.result()
  File "/home/charlie/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 458, in result
    return self.__get_result()
  File "/home/charlie/.local/share/uv/python/cpython-3.10.19-linux-x86_64-gnu/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
    raise self._exception
concurrent.futures.process.BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending.


system:Linux-6.8.0-94-generic-x86_64-with-glibc2.35
version:v3.98
frozen:False
language:en
root_dir:/media/charlie/Tub8/Vids/pyvideotrans
Python: 3.10.19 (main, Feb 12 2026, 00:42:18) [Clang 21.1.4 ]

1 Reply

A BrokenProcessPool error means the speech-recognition worker process died before it could return a result. Here, the accompanying "[GPU0]" message points to the cause: insufficient GPU memory (VRAM) while running the faster-whisper model.
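For context on the exception itself: BrokenProcessPool is the standard-library error raised whenever a worker in a concurrent.futures process pool terminates without reporting back, which is exactly what happens when the OS or the GPU driver kills the recognition subprocess after it runs out of memory. A minimal, GPU-free reproduction:

```python
import os
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool


def _crash():
    # Stand-in for a worker abruptly killed by the OS or driver
    # (for example, after a CUDA out-of-memory condition).
    os._exit(1)


def demonstrate():
    """Submit a task whose worker dies abruptly; report the resulting error."""
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(_crash)
        try:
            future.result()
        except BrokenProcessPool:
            return "BrokenProcessPool"
    return "no error"


if __name__ == "__main__":
    print(demonstrate())
```

The pool itself is healthy here; the point is that any abrupt worker death surfaces as this same exception, so the real fix is whatever stops the worker from being killed, i.e. reducing VRAM pressure as described below.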

Solution:

  1. Reduce the model size: In the main interface, under "Speech Recognition," change the model from large-v3 to a smaller one like large-v3-turbo, large-v2, or medium.
  2. Enable CUDA quantization: Go to Menu -> Tools/Options -> Advanced Options -> Speech Recognition Parameters. Set "CUDA Data Type" to int8. This reduces VRAM usage at a slight cost to accuracy.
  3. Reduce recognition accuracy parameters: In the same "Speech Recognition Parameters" section, lower the values for "Recognition Accuracy beam_size" and "Recognition Accuracy best_of" to 1 or 2.
  4. Disable context awareness: Ensure "Enable Context Awareness" in the same section is unchecked.
  5. Limit concurrent GPU tasks: In Advanced Options -> General Settings, set "GPU Simultaneous Task Count [Restart Required]" to 1. Restart the software after changing this.
  6. Check available VRAM: Run nvidia-smi in a terminal to confirm your GPU's total VRAM. The large-v3 model typically requires more than 10GB of VRAM with CUDA acceleration enabled.
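Steps 1 through 4 map directly onto faster-whisper's model-loading and transcription parameters. The sketch below collects the low-VRAM settings in one place; the parameter names follow faster-whisper's public API (WhisperModel's compute_type, transcribe's beam_size, best_of, condition_on_previous_text), but treat it as an illustration of the settings, not pyvideotrans's exact internal code:

```python
# Low-VRAM configuration mirroring steps 1-4 above.
# Parameter names come from faster-whisper's API; values are the
# memory-saving choices recommended in this reply.

MODEL_CHOICE = "medium"  # step 1: a smaller model instead of large-v3

model_kwargs = {
    "device": "cuda",
    "compute_type": "int8",  # step 2: int8 quantization cuts VRAM use
}

transcribe_kwargs = {
    "beam_size": 1,  # step 3: smaller beam search, less memory and compute
    "best_of": 1,    # step 3: fewer sampled candidates
    "condition_on_previous_text": False,  # step 4: context awareness off
}

# Usage (requires the faster-whisper package and a CUDA-capable GPU):
# from faster_whisper import WhisperModel
# model = WhisperModel(MODEL_CHOICE, **model_kwargs)
# segments, info = model.transcribe("audio.wav", **transcribe_kwargs)
```

The accuracy cost of beam_size=1 / best_of=1 is usually modest for clear speech; if results degrade, raise them back toward their defaults one step at a time while watching VRAM in nvidia-smi.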

If the issue persists after these steps, your GPU's VRAM may be too limited for local models. Consider using an online speech recognition API (like OpenAI or Gemini) or performing the task on a system with a more powerful GPU.

