#4375 openai.LengthFinishReasonError: Could not parse response content as the length limit was reached - CompletionUsage(compl


Could not parse response content as the length limit was reached - CompletionUsage(completion_tokens=916, prompt_tokens=3180, total_tokens=4096, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=0, rejected_prediction_tokens=None), prompt_tokens_details=None)
Traceback (most recent call last):
File "videotrans\task\_only_one.py", line 65, in run
File "videotrans\task\trans_create.py", line 678, in trans
File "videotrans\translator\__init__.py", line 1008, in run
File "videotrans\translator\_base.py", line 97, in run
File "videotrans\translator\_base.py", line 161, in _run_srt
File "tenacity\__init__.py", line 338, in wrapped_f
File "tenacity\__init__.py", line 477, in call
File "tenacity\__init__.py", line 378, in iter
File "tenacity\__init__.py", line 400, in
File "concurrent\futures\_base.py", line 451, in result
File "concurrent\futures\_base.py", line 403, in __get_result
File "tenacity\__init__.py", line 480, in call
File "videotrans\translator\_chatgpt.py", line 148, in _item_task
openai.LengthFinishReasonError: Could not parse response content as the length limit was reached - CompletionUsage(completion_tokens=916, prompt_tokens=3180, total_tokens=4096, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=None, audio_tokens=None, reasoning_tokens=0, rejected_prediction_tokens=None), prompt_tokens_details=None)


system:Windows-10-10.0.26200-SP0
version:v3.99
frozen:True
language:en
root_dir:C:/Users/zauberberglab/Downloads/win-pyvideotrans-v3.99-428

1 Reply

Translation failed because the AI model hit its token limit. The usage numbers in the error show it exactly: prompt_tokens=3180 plus completion_tokens=916 equals total_tokens=4096, the model's full context window. The completion was cut off mid-response when the cap was reached, leaving a truncated, unparseable JSON reply, which is why the openai library raised `LengthFinishReasonError`.

Root cause: the selected AI model has a small maximum context length (4096 tokens here), and the request was too large to leave enough room for the translated output.
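To see why the request was doomed, here is a minimal sketch of a pre-flight token-budget check. The 4-characters-per-token heuristic, the constants, and the function names are all illustrative assumptions for this reply, not part of pyvideotrans or the openai library:

```python
# Rough token-budget check before sending a translation batch.
# All names and the 4-chars-per-token heuristic are illustrative.

CONTEXT_LIMIT = 4096        # model's total context window (prompt + completion)
COMPLETION_RESERVE = 1500   # tokens we want left free for the translated output

def estimate_tokens(text: str) -> int:
    """Very rough estimate: ~4 characters per token for mixed text."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str) -> bool:
    """True if the prompt leaves enough room for the completion."""
    return estimate_tokens(prompt) + COMPLETION_RESERVE <= CONTEXT_LIMIT

# The failing request: a 3180-token prompt left only 4096 - 3180 = 916
# tokens for output, and 916 was not enough to finish the JSON response.
print(4096 - 3180)  # → 916
```

For a real check, a proper tokenizer (e.g., tiktoken for OpenAI models) is far more accurate than a character heuristic, but the budget logic is the same.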

Solution:

  1. Reduce the AI translation batch size – open Menu → Tools → Advanced Options → Subtitle Translation Settings and set "Number of subtitle lines per batch for the AI translation channel" (AI翻译渠道每批字幕行数) to a smaller value, e.g., 10 or even 5.
  2. Turn off the “Send full subtitle” checkbox on the main interface. This prevents the entire SRT file (with timecodes) from being sent as one massive prompt.
  3. If you are using a local/small model (e.g., 7B), switch to a model with a larger context window (e.g., DeepSeek-V3, GPT-4o, GPT-4-turbo) that supports 32K+ tokens.
  4. Optionally, set "Pause seconds after translation" (翻译后暂停秒) to 0 and "Concurrent dubbing threads" (并发配音线程数) to 1 to avoid overloading the API, but the immediate fix is reducing the batch size.
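The core of step 1 is just keeping each request small. A minimal sketch of the batching idea (the `split_batches` helper is illustrative; pyvideotrans has its own batching logic driven by the setting above):

```python
# Illustrative sketch: split subtitle cues into fixed-size batches so each
# translation request stays well under the model's context window.

def split_batches(lines: list[str], batch_size: int = 10) -> list[list[str]]:
    """Split subtitle lines into batches of at most batch_size lines."""
    return [lines[i:i + batch_size] for i in range(0, len(lines), batch_size)]

srt_lines = [f"subtitle cue {n}" for n in range(1, 24)]  # 23 cues
batches = split_batches(srt_lines, batch_size=10)
print(len(batches))       # → 3  (batches of 10, 10, and 3 cues)
print(len(batches[-1]))   # → 3
```

Each batch then becomes one API request, so even a 4096-token model can translate a long SRT file without truncation.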

