#1765 TaskCfg(cache_folder='D:/Portable/pyvideotrans/tmp/10024/e11e740a9b', target_dir='D:/Download/ZBrush/_video_out/02-mp4',


Error in the subtitle translation stage: [Compatible AI / local model] The content is too long and exceeds the maximum allowed tokens. Reduce the content, increase max_token, or lower the number of subtitle lines sent per request.
Could not parse response content as the length limit was reached - CompletionUsage(completion_tokens=8192, prompt_tokens=2122, total_tokens=10314, completion_tokens_details=None, prompt_tokens_details=None):
Traceback (most recent call last):
File "videotrans\task\job.py", line 184, in run
File "videotrans\task\trans_create.py", line 456, in trans
File "videotrans\translator\__init__.py", line 911, in run
File "videotrans\translator\_base.py", line 78, in run
File "videotrans\translator\_base.py", line 137, in _run_srt
File "tenacity\__init__.py", line 338, in wrapped_f
File "tenacity\__init__.py", line 477, in call
File "tenacity\__init__.py", line 378, in iter
File "tenacity\__init__.py", line 400, in
File "concurrent\futures\_base.py", line 439, in result
File "concurrent\futures\_base.py", line 391, in __get_result
File "tenacity\__init__.py", line 480, in call
File "videotrans\translator\_localllm.py", line 71, in _item_task
openai.LengthFinishReasonError: Could not parse response content as the length limit was reached - CompletionUsage(completion_tokens=8192, prompt_tokens=2122, total_tokens=10314, completion_tokens_details=None, prompt_tokens_details=None)

TaskCfg(cache_folder='D:/Portable/pyvideotrans/tmp/10024/e11e740a9b', target_dir='D:/Download/ZBrush/_video_out/02-mp4', remove_noise=False, is_separate=False, detect_language='en', subtitle_language=None, source_language='英语', target_language='简体中文', source_language_code='en', target_language_code='zh-cn', source_sub='D:/Download/ZBrush/_video_out/02-mp4/en.srt', target_sub='D:/Download/ZBrush/_video_out/02-mp4/zh-cn.srt', source_wav='D:/Portable/pyvideotrans/tmp/10024/e11e740a9b/en.wav', source_wav_output='D:/Download/ZBrush/_video_out/02-mp4/en.m4a', target_wav='D:/Portable/pyvideotrans/tmp/10024/e11e740a9b/target.wav', target_wav_output='D:/Download/ZBrush/_video_out/02-mp4/zh-cn.m4a', subtitles='', novoice_mp4='D:/Portable/pyvideotrans/tmp/10024/e11e740a9b/novoice.mp4', noextname='02', shibie_audio=None, targetdir_mp4='D:/Download/ZBrush/_video_out/02-mp4/02.mp4', instrument=None, vocal=None, back_audio='', background_music=None, app_mode='biaozhun', subtitle_type=1, volume='+0%', pitch='+0Hz', voice_rate='+0%', voice_role='晓辰 多语言(Female/CN)', copysrt_rawvideo=False, clear_cache=True, translate_type=8, tts_type=0, recogn_type=0, model_name='large-v3-turbo', split_type=0, voice_autorate=True, video_autorate=True, cuda=True, name='D:/Download/ZBrush/02.mp4', basename='02.mp4', ext='mp4', dirname='D:/Download/ZBrush', shound_del_name=None, uuid='e11e740a9b', remove_silent_mid=False, align_sub_audio=True, enable_diariz=False, nums_diariz=0, rephrase=0)


system:Windows-10-10.0.26100-SP0
version:v3.87
frozen:True
language:zh
root_dir:D:/Portable/pyvideotrans

1 Reply

The content sent for translation exceeded the model's maximum token limit. The error shows the response hit the 8192-token completion cap, bringing the total to 10314 tokens, which is over the model's processing limit.

Follow these steps to resolve it:

  1. Reduce the content per request: on the software's main window, click "设置更多参数" (Set More Parameters) in the lower-right corner.
  2. In the "快速参数设置" (Quick Parameter Settings) window that opens, find and lower the "每批字幕行数" (subtitle lines per batch) value (e.g. from the default 50 down to 20 or 10). This effectively reduces the text length of each request.
  3. (Optional) Enable context mode: in the same window, make sure "发送完整字幕" (Send Complete Subtitles) is checked. This helps the AI model understand context better, but may slightly increase token usage. If the problem persists after lowering the lines-per-batch value, try unchecking this option.
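The batching idea behind step 2 can be sketched in a few lines. This is an illustration only, not the pyvideotrans API: the function name and the line-based batching are assumptions, but the principle — smaller batches mean fewer prompt tokens and shorter completions per request — is exactly what the setting controls.

```python
def chunk_subtitles(lines, batch_size=10):
    """Split subtitle lines into batches of at most `batch_size` lines,
    so each translation request stays under the model's token limit.
    (Illustrative helper, not part of pyvideotrans itself.)"""
    return [lines[i:i + batch_size] for i in range(0, len(lines), batch_size)]

subs = [f"line {n}" for n in range(1, 26)]   # 25 subtitle lines
batches = chunk_subtitles(subs, batch_size=10)
print([len(b) for b in batches])             # → [10, 10, 5]
```

Each batch then becomes one request, so a truncated response affects at most `batch_size` lines instead of the whole file.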

If the problem still occurs after these adjustments, your subtitle file may simply be very long. You can:

  • In "高级选项" (Advanced Options) -> "字幕翻译调整" (Subtitle Translation Adjustments), lower the "每批字幕行数" (subtitle lines per batch) value even further.
  • Or split the long video into shorter segments and process them separately.
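If you are scripting around a local model yourself, a common pattern for this exact failure is to halve the batch size and retry whenever the model reports a length-limit truncation. A minimal sketch, with a simulated translate function standing in for the real model call (all names here are hypothetical, and `ValueError` stands in for an SDK error such as `openai.LengthFinishReasonError`):

```python
def translate_with_backoff(lines, translate_fn, batch_size=20, min_batch=1):
    """Translate `lines` in batches; on a length-limit failure,
    halve the batch size and retry the same span."""
    out, i, size = [], 0, batch_size
    while i < len(lines):
        batch = lines[i:i + size]
        try:
            out.extend(translate_fn(batch))
            i += len(batch)
        except ValueError:                      # stand-in for a truncation error
            if size <= min_batch:
                raise                           # cannot shrink further
            size = max(min_batch, size // 2)    # halve and retry
    return out

# Simulated model that truncates any batch larger than 8 lines.
def fake_translate(batch):
    if len(batch) > 8:
        raise ValueError("length limit reached")
    return [s.upper() for s in batch]

result = translate_with_backoff([f"s{n}" for n in range(20)], fake_translate)
print(len(result))  # → 20
```

The batch size shrinks 20 → 10 → 5 until requests succeed, which mirrors what lowering "每批字幕行数" does manually in the settings window.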