#4515 TaskCfgVTT(is_cuda=True, uuid='1a225c475c', cache_folder='D:/fanyi/win-pyvideotrans-v3.99-0508/tmp/8556/1a225c475c', tar

125.110* Posted at: 8 hours ago

Error at the subtitle translation stage [DeepSeek]: content too long, exceeding the maximum allowed tokens. Reduce the content, increase max_token, or lower the number of subtitle lines sent per request.
Could not parse response content as the length limit was reached - CompletionUsage(completion_tokens=384000, prompt_tokens=1015, total_tokens=385015, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetails(audio_tokens=None, cached_tokens=896), prompt_cache_hit_tokens=896, prompt_cache_miss_tokens=119)
Traceback (most recent call last):
File "videotrans\task\job.py", line 173, in run
File "videotrans\task\trans_create.py", line 689, in trans
File "videotrans\translator\__init__.py", line 1040, in run
File "videotrans\translator\_base.py", line 98, in run
File "videotrans\translator\_base.py", line 119, in _run_text
File "videotrans\translator\_deepseek.py", line 66, in _item_task
openai.LengthFinishReasonError: Could not parse response content as the length limit was reached - CompletionUsage(completion_tokens=384000, prompt_tokens=1015, total_tokens=385015, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetails(audio_tokens=None, cached_tokens=896), prompt_cache_hit_tokens=896, prompt_cache_miss_tokens=119)
TaskCfgVTT(is_cuda=True, uuid='1a225c475c', cache_folder='D:/fanyi/win-pyvideotrans-v3.99-0508/tmp/8556/1a225c475c', target_dir='D:/shipin/_video_out/LULU-434 -ts', source_language='日语', source_language_code='ja', source_sub='D:/shipin/_video_out/LULU-434 -ts/ja.srt', source_wav='D:/fanyi/win-pyvideotrans-v3.99-0508/tmp/8556/1a225c475c/ja.wav', source_wav_output='D:/shipin/_video_out/LULU-434 -ts/ja.m4a', target_language='简体中文', target_language_code='zh-cn', target_sub='D:/shipin/_video_out/LULU-434 -ts/zh-cn.srt', target_wav='D:/fanyi/win-pyvideotrans-v3.99-0508/tmp/8556/1a225c475c/target.wav', target_wav_output='D:/shipin/_video_out/LULU-434 -ts/zh-cn.m4a', name='D:/shipin/LULU-434 .ts', noextname='LULU-434 ', basename='LULU-434 .ts', ext='ts', dirname='D:/shipin', shound_del_name=None, translate_type=4, tts_type=0, volume='+0%', pitch='+0Hz', voice_rate='+0%', voice_role='No', voice_autorate=False, video_autorate=False, remove_silent_mid=False, align_sub_audio=True, detect_language='ja', recogn_type=0, model_name='large-v3-turbo', shibie_audio=None, remove_noise=False, enable_diariz=False, nums_diariz=0, rephrase=0, fix_punc=False, subtitle_language=None, app_mode='tiqu', subtitles='', targetdir_mp4='D:/shipin/_video_out/LULU-434 -ts/LULU-434 .mp4', novoice_mp4='D:/fanyi/win-pyvideotrans-v3.99-0508/tmp/8556/1a225c475c/novoice.mp4', is_separate=False, embed_bgm=True, instrument=None, vocal='D:/fanyi/win-pyvideotrans-v3.99-0508/tmp/8556/1a225c475c/vocal.wav', back_audio='', clear_cache=True, background_music=None, subtitle_type=0, only_out_mp4=False, recogn2pass=False, output_srt=0, copysrt_rawvideo=False, loop_backaudio=1, backaudio_volume=0.8)


system:Windows-10-10.0.26200-SP0
version:v3.99
frozen:True
language:zh
root_dir:D:/fanyi/win-pyvideotrans-v3.99-0508

3 Replies

This happens because the content sent to DeepSeek in a single translation request exceeded the model's maximum token limit (the error shows completion_tokens=384000, far beyond typical model limits).

Solutions:

  1. Reduce the number of subtitle lines per batch: in the menu bar, open Tools/Options → Advanced Options → Subtitle Translation Adjustments and lower "Number of subtitle lines per batch for AI translation channels" (e.g. set it to 10 or 5) to cut the token usage of each request.
  2. Disable "Send complete subtitles": if "Send complete subtitles" is checked on the main interface, uncheck it and translate line by line; this greatly reduces the size of each request.
  3. Re-run the translation after adjusting: after changing the settings, re-run the translation task for the current video (check "Clear generated" to remove the cache).

If the problem persists, check the max_tokens setting for the DeepSeek API (whether the corresponding channel configuration has a custom parameter), or switch to a model with a longer context window (e.g. deepseek-chat supports 64k, which is usually sufficient).
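The first fix above (sending fewer subtitle lines per request) can be sketched as a simple batching helper. This is a minimal illustration, not pyvideotrans's actual code; `batch_subtitle_lines` is a hypothetical name.

```python
def batch_subtitle_lines(lines, batch_size=10):
    """Yield successive batches of subtitle lines, so that each
    translation request stays well under the model's token limit."""
    for i in range(0, len(lines), batch_size):
        yield lines[i:i + batch_size]

# Example: 23 subtitle lines split into batches of at most 10.
lines = [f"subtitle line {n}" for n in range(23)]
batches = list(batch_subtitle_lines(lines, batch_size=10))
# -> 3 batches of sizes 10, 10, 3
```

Each batch is then sent as one translation request; smaller batches mean more requests but far less risk of hitting the token ceiling.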

See the related documentation:

What is an appropriate max_tokens setting for the DeepSeek V4-flash model?

As a general rule, a max_tokens of 4096~8192 is recommended for DeepSeek V4-flash (the model typically supports 8192 output tokens). If the "limit exceeded" error still occurs, the main bottleneck is too much input per request; prioritize lowering "Number of subtitle lines per batch for AI translation channels" (e.g. to 5~10 lines) rather than only raising max_tokens.

Specific steps: in the menu bar, open Tools/Options → Advanced Options → Subtitle Translation Adjustments → lower "Number of subtitle lines per batch for AI translation channels", and optionally uncheck "Send complete subtitles". Then re-run the translation (check "Clear generated").
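For the max_tokens side, a request to DeepSeek's OpenAI-compatible API might cap the completion length like the sketch below. `build_translate_request` is a hypothetical helper, and the system prompt is illustrative; only the `model` and `max_tokens` fields reflect DeepSeek's documented API shape.

```python
def build_translate_request(text, max_tokens=8192):
    """Build keyword arguments for an OpenAI-compatible chat-completion
    call to DeepSeek, with an explicit cap on output tokens."""
    return {
        "model": "deepseek-chat",  # DeepSeek's OpenAI-compatible chat model
        "messages": [
            {"role": "system",
             "content": "Translate the subtitles to Simplified Chinese."},
            {"role": "user", "content": text},
        ],
        "max_tokens": max_tokens,  # cap the completion length per request
    }

req = build_translate_request("こんにちは", max_tokens=8192)
# The resulting dict can be passed to client.chat.completions.create(**req)
# using the openai SDK with base_url="https://api.deepseek.com".
```

Keeping max_tokens within the model's supported output range, combined with smaller input batches, avoids the LengthFinishReasonError shown in the log.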
