#3865 TaskCfgSTS(is_cuda=False, uuid='a3b2cfaa62', cache_folder=None, target_dir='C:/Users/25452/Downloads', source_language=N

Posted at: 2 days ago

Error during the subtitle translation stage [DeepSeek]: the content is too long and exceeds the maximum allowed tokens. Reduce the content, increase max_token, or lower the number of subtitle lines sent per request.
Could not parse response content as the length limit was reached - CompletionUsage(completion_tokens=8192, prompt_tokens=664, total_tokens=8856, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetails(audio_tokens=None, cached_tokens=640), prompt_cache_hit_tokens=640, prompt_cache_miss_tokens=24)
Traceback (most recent call last):
File "videotrans\task\job.py", line 173, in run
File "videotrans\task\_translate_srt.py", line 44, in trans
File "videotrans\translator\__init__.py", line 981, in run
File "videotrans\translator\_base.py", line 102, in run
File "videotrans\translator\_base.py", line 123, in _run_text
File "videotrans\translator\_deepseek.py", line 61, in _item_task
openai.LengthFinishReasonError: Could not parse response content as the length limit was reached - CompletionUsage(completion_tokens=8192, prompt_tokens=664, total_tokens=8856, completion_tokens_details=None, prompt_tokens_details=PromptTokensDetails(audio_tokens=None, cached_tokens=640), prompt_cache_hit_tokens=640, prompt_cache_miss_tokens=24)
TaskCfgSTS(is_cuda=False, uuid='a3b2cfaa62', cache_folder=None, target_dir='C:/Users/25452/Downloads', source_language=None, source_language_code='Simplified Chinese', source_sub='C:/Users/25452/Downloads/失忆后,契约老公是前任_All.srt', source_wav=None, source_wav_output=None, target_language=None, target_language_code='Thai', target_sub='C:/Users/25452/Downloads/失忆后,契约老公是前任_All.Thai.srt', target_wav=None, target_wav_output=None, name='C:/Users/25452/Downloads/失忆后,契约老公是前任_All.srt', noextname='失忆后,契约老公是前任_All', basename='失忆后,契约老公是前任_All.srt', ext='srt', dirname='C:/Users/25452/Downloads', shound_del_name=None, translate_type=4)


system:Windows-10-10.0.22631-SP0
version:v3.98
frozen:True
language:zh
root_dir:D:/pyvideotrans

1 Reply

This error occurs because, when translating subtitles with DeepSeek, the content sent in a single request was too long, and the model's response exceeded its maximum output token limit (8192).
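To see why a large batch blows through the limit, a coarse pre-flight estimate helps. The sketch below is my own illustration (not part of pyvideotrans): it assumes roughly one token per CJK character, which overestimates for most tokenizers but errs on the safe side.

```python
# Coarse pre-flight check (hypothetical sketch, not pyvideotrans's code).
# Assumption: ~1 token per CJK character; real tokenizers usually need
# fewer, so this is a conservative upper bound.

MAX_COMPLETION_TOKENS = 8192  # DeepSeek's maximum output tokens from the log

def estimated_tokens(text: str) -> int:
    # Crude upper bound for CJK-heavy subtitle text.
    return len(text)

# 1000 subtitle lines of 10 characters each, joined with newlines.
batch = "\n".join(["这是一条待翻译的字幕"] * 1000)
if estimated_tokens(batch) > MAX_COMPLETION_TOKENS:
    print("batch too large: split before sending")
```

Because the translated output is at least as long as the input, an input batch this size cannot fit in an 8192-token completion, so it must be split.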

Solutions:

  1. Reduce the number of subtitle lines sent per batch: in the main interface, go to Menu -> Tools -> Advanced Options, and in the "Subtitle Translation Adjustments" section lower the value of "Number of subtitle lines per batch for AI translation channels" (for example, from the default to 20 or fewer).
  2. Uncheck "Send complete subtitles": if this option is checked, the AI receives the full SRT content including timestamps and line numbers, which significantly increases token consumption. Uncheck it so that only plain text lines are sent for translation.
  3. Lower the AI model temperature: on the same settings page, try lowering the "AI translation model temperature" (for example, to 0.1) to reduce randomness in the model's output, which may help keep the output length under control.
  4. Avoid including the complete original subtitles: make sure the "Include complete original subtitles with AI translation" option is unchecked. This feature supplies the original text as context for the AI, but it greatly increases token usage and is only suitable for advanced models with very long context windows (such as GPT-4-128K or DeepSeek-V3).

After making these adjustments, rerun the translation task.
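Step 1 above amounts to splitting the subtitle lines into small batches so each request's translated output stays well under the 8192-token completion limit. A minimal sketch (hypothetical helper, not the software's actual implementation):

```python
def batch_subtitle_lines(lines, batch_size=20):
    """Yield successive batches of at most batch_size subtitle lines.

    Hypothetical helper: smaller batches keep each request's translation
    output well below the model's 8192-token completion limit.
    """
    for i in range(0, len(lines), batch_size):
        yield lines[i:i + batch_size]

lines = [f"字幕 {n}" for n in range(1, 46)]  # 45 subtitle lines
sizes = [len(b) for b in batch_subtitle_lines(lines)]
print(sizes)  # [20, 20, 5]
```

Each batch is then sent as a separate translation request; 45 lines become three requests instead of one oversized one.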

Please refer to the related documentation:
