#3321 TaskCfg(cache_folder=None, target_dir='E:/PH', remove_noise=False, is_separate=False, detect_language=None, subtitle_lan

202.150* Posted at: 1 day ago

Trans Error [Local/Compatible AI] Could not parse response content as the length limit was reached - CompletionUsage(completion_tokens=8192, prompt_tokens=1977, total_tokens=10169, completion_tokens_details=None, prompt_tokens_details=None)
Traceback (most recent call last):
File "videotrans\task\job.py", line 174, in run
File "videotrans\task\_translate_srt.py", line 45, in trans
File "videotrans\translator\__init__.py", line 960, in run
File "videotrans\translator\_base.py", line 102, in run
File "videotrans\translator\_base.py", line 165, in _run_srt
File "tenacity\__init__.py", line 338, in wrapped_f
File "tenacity\__init__.py", line 477, in call
File "tenacity\__init__.py", line 378, in iter
File "tenacity\__init__.py", line 400, in
File "concurrent\futures\_base.py", line 439, in result
File "concurrent\futures\_base.py", line 391, in __get_result
File "tenacity\__init__.py", line 480, in call
File "videotrans\translator\_localllm.py", line 72, in _item_task
openai.LengthFinishReasonError: Could not parse response content as the length limit was reached - CompletionUsage(completion_tokens=8192, prompt_tokens=1977, total_tokens=10169, completion_tokens_details=None, prompt_tokens_details=None)
TaskCfg(cache_folder=None, target_dir='E:/PH', remove_noise=False, is_separate=False, detect_language=None, subtitle_language=None, source_language=None, target_language=None, source_language_code='Simplified Chinese', target_language_code='English', source_sub='E:/PH/K5.srt', target_sub='E:/PH/K5.English.srt', source_wav=None, source_wav_output=None, target_wav=None, target_wav_output=None, subtitles='', novoice_mp4=None, noextname='K5', shibie_audio=None, targetdir_mp4=None, instrument=None, vocal=None, back_audio=None, background_music=None, app_mode='biaozhun', subtitle_type=0, volume='+0%', pitch='+0Hz', voice_rate='+0%', voice_role=None, copysrt_rawvideo=False, clear_cache=False, translate_type=8, tts_type=None, recogn_type=None, model_name=None, voice_autorate=False, video_autorate=False, is_cuda=False, name='E:/PH/K5.srt', basename='K5.srt', ext='srt', dirname='E:/PH', shound_del_name=None, uuid='a36d914289', remove_silent_mid=False, align_sub_audio=True, enable_diariz=False, nums_diariz=0, rephrase=2, only_out_mp4=False, fix_punc=False, recogn2pass=False)


system:Windows-10-10.0.26100-SP0
version:v3.97
frozen:True
language:en
root_dir:E:/xtrans

1 Reply

The error occurs because the translation model's output hit the completion-token cap (completion_tokens=8192). This typically happens when a local/compatible AI channel (such as Ollama) is asked to translate a long SRT file with the "Send complete subtitles" option enabled: the model must return the entire translated file in one response, and the output is cut off before it can be parsed.
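The condition the SDK is reporting can be illustrated without calling any model. In the OpenAI chat-completions response shape, a choice whose `finish_reason` is `"length"` means the output was truncated at the token cap, which is why the (incomplete) content cannot be parsed. A minimal sketch, using a plain dict to stand in for the response object (the field names mirror the API, but `is_truncated` is a hypothetical helper, not part of the project):

```python
# Hypothetical helper: detect a length-truncated completion before parsing it.
# Field names mirror the OpenAI chat-completions response shape.

def is_truncated(choice: dict) -> bool:
    """True when the model stopped because it hit its output-token cap."""
    return choice.get("finish_reason") == "length"

# Simulated choice matching the error above: the model emitted 8192
# completion tokens (its cap), so the content is cut off mid-structure.
choice = {"finish_reason": "length", "message": {"content": '{"lines": ["'}}
print(is_truncated(choice))  # True
```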

Solution:

  1. Disable "Send complete subtitles": In the translation settings, uncheck the "Send complete subtitles" option. This will make the software send the SRT file line by line, avoiding the token limit.
  2. Use a more capable model: If you need context-aware translation, switch to a more powerful online model (e.g., DeepSeek, GPT-4) that supports longer contexts.
  3. Adjust the local model's context window: If you must use a local model, make sure its context window is large enough for the whole SRT file (e.g., via the num_ctx parameter in Ollama). For very long files, however, method 1 (line-by-line) is more reliable.
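Solution 1 above amounts to batching: instead of sending the whole SRT in one request, split the cues so each response stays well under the completion-token cap. A minimal sketch of that idea, assuming a hypothetical `batch_srt_lines` helper (this is not the project's actual code):

```python
# Hypothetical sketch: split subtitle lines into small batches so each
# model response stays well under the completion-token limit.

def batch_srt_lines(lines: list[str], max_lines_per_request: int = 20) -> list[list[str]]:
    """Group subtitle text lines into batches of at most
    max_lines_per_request lines, one batch per translation request."""
    return [lines[i:i + max_lines_per_request]
            for i in range(0, len(lines), max_lines_per_request)]

# 95 subtitle lines -> 5 requests of at most 20 lines each.
lines = [f"subtitle line {n}" for n in range(95)]
batches = batch_srt_lines(lines)
print(len(batches))      # 5
print(len(batches[-1]))  # 15
```

The right batch size depends on the model and the languages involved; the point is only that each request's expected output must fit inside the model's output budget.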

Reference:
This issue relates to the translation step when using local models. For more details on configuring translation channels and prompts, please refer to the project's documentation.
