Error embedding final dub
=
system:Windows-10-10.0.19045-SP0
version:v3.86
frozen:True
language:en
root_dir:C:/Users/Shavk Visuals/Desktop/win-pyvideotrans-v3.86
The error "Error embedding final dub" typically occurs when the software cannot properly merge the generated audio with the video. Based on your system path, the most likely cause is that your file path contains spaces (Shavk Visuals).
Please follow these steps to resolve the issue:
Move your video file: Copy the video file you are trying to process to a new location with a simple, English-only path that contains no spaces, for example:
- D:\temp\myvideo.mp4
- C:\Users\Public\test.mp4
Move the software: You can also move the win-pyvideotrans-v3.86 folder itself to a simple path like D:\videotrans. Avoid paths with spaces or special characters.
Important: Before processing the same video again, ensure the "Clean generated files" checkbox on the main interface is checked to clear any previous cache.
If the problem persists after these steps, please provide the complete log file from C:/Users/Shavk Visuals/Desktop/win-pyvideotrans-v3.86/logs/ for further analysis.
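As a quick way to spot such problematic paths before processing, a small check like the following can flag spaces and non-ASCII characters (a hypothetical helper for illustration, not part of pyvideotrans):

```python
def path_problems(path: str) -> list[str]:
    """Flag the path features the advice above warns about:
    spaces and non-ASCII characters."""
    problems = []
    if " " in path:
        problems.append("contains spaces")
    if not path.isascii():
        problems.append("contains non-ASCII characters")
    return problems

print(path_problems(r"C:\Users\Shavk Visuals\Desktop\win-pyvideotrans-v3.86"))
# → ['contains spaces']
print(path_problems(r"D:\temp\myvideo.mp4"))
# → []
```

If the function returns an empty list for both the video path and the software folder, path issues can likely be ruled out.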
I'm getting the same error, and I don't have any whitespace in my file path. Any resolution? Thank you.
system:macOS-13.7.8-arm64-arm-64bit
version:v3.97
frozen:False
language:en
root_dir:/Users/name/pyvideotrans
Python: 3.10.19 (main, Feb 12 2026, 00:36:33) [Clang 21.1.4 ]
The error "Error embedding final dub" on macOS is often related to FFmpeg issues or file permission problems. Since you are using the source code version (v3.97), the most likely cause is an FFmpeg installation or path problem.
Please follow these steps to resolve the issue:
1. Verify FFmpeg Installation and Path:
- In Terminal, run which ffmpeg. It should return a path like /usr/local/bin/ffmpeg.
- If FFmpeg is not installed, install it with Homebrew: brew install ffmpeg.
- Make sure the software is using the correct binary (e.g., /usr/local/bin/ffmpeg).
2. Check File and Directory Permissions:
- Ensure your user has read/write access to the video file, the project directory (/Users/name/pyvideotrans), and the output directory (_video_out).
3. Run a Basic FFmpeg Test:
- In Terminal, try a simple remux to confirm FFmpeg works at all: ffmpeg -i "your_video.mp4" -c copy test_output.mp4
4. Provide Logs for Further Diagnosis:
- Logs are stored in /Users/name/pyvideotrans/logs/. Copy the bottom 30-50 lines of the most recent log file (named with today's date).

It did not resolve my issue; here is the log file:
[INFO] Starting call to 'videotrans.translator._google.Google._item_task', this is the 1st time calling it.
[DEBUG] [Google] self.target_code='tl' self.source_code='auto'
[DEBUG] [Google] returned code: response.status_code=200
[DEBUG] Translating plain text lines one by one: source lines: 11, translated lines: 11
[DEBUG] [Subtitle translation] channel 0: total time: 0s
[DEBUG] Before checking translation results -> source-language subtitle lines: 11, target-language subtitle lines: 11
[INFO] Starting call to 'videotrans.translator._google.Google._item_task', this is the 1st time calling it.
[DEBUG] [Google] self.target_code='tl' self.source_code='auto'
[DEBUG] [Google] returned code: response.status_code=200
[DEBUG] Translating plain text lines one by one: source lines: 11, translated lines: 11
[DEBUG] [Subtitle translation] channel 0: total time: 0s
[DEBUG] Before checking translation results -> source-language subtitle lines: 11, target-language subtitle lines: 11
[DEBUG] Final configuration: self.cfg=TaskCfg(cache_folder='/Users/name/pyvideotrans/tmp/1299/98ecd14c77', target_dir='/Users/name/Desktop/drama/_video_out/input3-mp4', remove_noise=False, is_separate=False, detect_language='en', subtitle_language=None, source_language='English', target_language='Fil
......
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'origin_audio.m4a':
Metadata:
major_brand : M4A
minor_version : 512
compatible_brands: M4A isomiso2
encoder : Lavf62.3.100
Duration: 00:00:30.02, start: 0.000000, bitrate: 129 kb/s
Stream #1:0[0x1]: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
[AVFilterGraph @ 0x600001822800] No option name near 'end.ass'
[AVFilterGraph @ 0x600001822800] Error parsing a filter description around:
[AVFilterGraph @ 0x600001822800] Error parsing filterchain 'subtitles=end.ass' around:
Error opening output file laste_target.mp4.
Error opening output files: Invalid argument
[WARNING] Fallback: hardware acceleration failed; automatically falling back to CPU encoding and retrying...
[DEBUG] cmd=['ffmpeg', '-hide_banner', '-ignore_unknown', '-threads', '0', '-y', '-progress', 'compose1771528077.609667.txt', '-i', 'novoice.mp4', '-i', 'origin_audio.m4a', '-map', '0:v', '-map', '1:a', '-c:v', 'libx265', '-c:a', 'copy', '-vf', 'subtitles=end.ass', '-movflags', '+faststart', '-crf', '24', '-preset', 'veryfast', '-t', '30.030000', 'laste_target.mp4']
[WARNING] FFmpeg command execution failed (force_cpu=True).
Command: ffmpeg -hide_banner -ignore_unknown -threads 0 -y -progress compose1771528077.609667.txt -i novoice.mp4 -i origin_audio.m4a -map 0:v -map 1:a -c:v libx265 -c:a copy -vf subtitles=end.ass -movflags +faststart -crf 24 -preset veryfast -t 30.030000 laste_target.mp4
Error: Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'novoice.mp4':
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
encoder : Lavf62.3.100
Duration: 00:00:30.03, start: 0.000000, bitrate: 2133 kb/s
Stream #0:0[0x1]: Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 2130 kb/s, 23.98 fps, 23.98 tbr, 24k tbn (default)
Metadata:
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from 'origin_audio.m4a':
Metadata:
major_brand : M4A
minor_version : 512
compatible_brands: M4A isomiso2
encoder : Lavf62.3.100
Duration: 00:00:30.02, start: 0.000000, bitrate: 129 kb/s
Stream #1:0[0x1]: Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 128 kb/s (default)
Metadata:
handler_name : ISO Media file produced by Google Inc.
vendor_id : [0][0][0][0]
[AVFilterGraph @ 0x600000450300] No option name near 'end.ass'
[AVFilterGraph @ 0x600000450300] Error parsing a filter description around:
[AVFilterGraph @ 0x600000450300] Error parsing filterchain 'subtitles=end.ass' around:
Error opening output file laste_target.mp4.
Error opening output files: Invalid argument
The error is caused by FFmpeg failing to load the subtitle file end.ass. The specific errors are [AVFilterGraph] No option name near 'end.ass' and Error parsing filterchain 'subtitles=end.ass'. This is likely an issue with the path to the subtitle file or the file's content/encoding.

Please follow these steps to resolve the issue:

1. Check the Subtitle File Path: The FFmpeg command uses a relative path, subtitles=end.ass. Ensure the end.ass file exists in the same directory as novoice.mp4 (likely inside a temporary folder like /Users/name/pyvideotrans/tmp/...). If the path contains spaces or special characters, FFmpeg may fail to parse it. You could modify the command to use an absolute path without spaces, but that requires changing the software's source code, which is complex.
2. Inspect the Subtitle File Content: Open the end.ass file in a text editor and check that it is a valid ASS subtitle file. Ensure the file is not corrupted and is saved with UTF-8 encoding (without BOM).
3. Disable Subtitles Temporarily: As a diagnostic step, try processing the video without embedding subtitles. In the main interface, uncheck the option "Embed subtitles into video" or similar before starting the translation. If the process then completes successfully, the issue is confirmed to be with the subtitle file or the subtitle embedding step.
4. Provide More Context: If the issue persists, check the directory containing novoice.mp4 and confirm the end.ass file exists. Also, share a few lines from the beginning of end.ass so its format can be checked.

Please see the related documentation: "A pile of error messages containing the word ffmpeg" and "Batch merging videos with srt subtitles".
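The parsing problem described in step 1 comes from FFmpeg's filtergraph grammar, where ':' separates filter options and '\' is the escape character, so an unescaped path with a colon (e.g., a Windows drive letter) can produce exactly a "No option name near ..." error. A simplified sketch of the usual escaping recipe (FFmpeg's full two-level escaping, including quotes, has more corner cases; hypothetical helper, not pyvideotrans code):

```python
def subtitles_filter(path: str) -> str:
    """Build a `subtitles=` filter argument with basic filtergraph escaping.

    ':' separates filter options and '\\' is the escape character in
    FFmpeg's filtergraph grammar, so both must be escaped in the path.
    Simplified sketch; see FFmpeg's filtergraph-escaping docs for the
    full rules (quotes, commas, semicolons).
    """
    escaped = (path.replace("\\", "\\\\")   # double backslashes first
                   .replace(":", "\\:"))    # then escape option separators
    return f"subtitles={escaped}"

print(subtitles_filter("end.ass"))              # → subtitles=end.ass
print(subtitles_filter("C:\\videos\\end.ass"))
```

A plain relative path like end.ass contains none of these characters, which is why the error in this thread points at something else as well (file content or the FFmpeg build).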
log file data:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/name/pyvideotrans/videotrans/task/trans_create.py", line 1458, in _join_video_audio_srt
    tools.runffmpeg(cmd,cmd_dir=self.cfg.cache_folder,force_cpu=False)
  File "/Users/name/pyvideotrans/videotrans/util/help_ffmpeg.py", line 400, in runffmpeg
    return runffmpeg(fallback_args, noextname=noextname, uuid=uuid, force_cpu=True,cmd_dir=cmd_dir)
  File "/Users/name/pyvideotrans/videotrans/util/help_ffmpeg.py", line 405, in runffmpeg
    raise RuntimeError(err)
RuntimeError: parsing a filter description around: parsing filterchain 'subtitles=end.ass' around: opening output file laste_target.mp4. opening output files: Invalid argument
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/name/pyvideotrans/videotrans/task/_only_one.py", line 116, in run
    trk.assembling()
  File "/Users/name/pyvideotrans/videotrans/task/trans_create.py", line 763, in assembling
    self._join_video_audio_srt()
  File "/Users/name/pyvideotrans/videotrans/task/trans_create.py", line 1463, in _join_video_audio_srt
    raise RuntimeError(msg)
RuntimeError: Error embedding final dub
The error is definitively caused by FFmpeg failing to parse the subtitle filter due to an issue with the end.ass file. The error chain points to Error parsing filterchain 'subtitles=end.ass' followed by Error opening output file laste_target.mp4. Error opening output files: Invalid argument. This is a known class of issue related to the subtitle file's format or path encoding on macOS.

Please follow these steps to resolve the issue:

1. Locate and Inspect the end.ass File:
- Navigate to the cache folder /Users/name/pyvideotrans/tmp/1299/98ecd14c77/.
- Open the end.ass file in a text editor (e.g., TextEdit, VS Code).
- Check the first few lines of the file. It should start with something like:
[Script Info]
ScriptType: v4.00+
...
2. Check for Path/Encoding Issues (Common on macOS):
- The Invalid argument error when opening the output file laste_target.mp4 can sometimes be related to the encoding of the full path string.
- Ensure your username (name) and the project directory path do not contain any non-ASCII characters, emojis, or symbols. A simple English username is best.
3. Immediate Workaround - Disable Hardcoded Subtitles:
- In the main interface, turn off hardcoded (burned-in) subtitles so the subtitles=end.ass filter is skipped.
- The software can then attach subtitles as soft subtitle streams (e.g., an .mkv file with subtitle tracks) instead of burning them into the video.
4. Alternative - Use a Different Output Container:
- Change the output container from mp4 to mkv. The MKV container often has better compatibility with subtitle streams and FFmpeg filters.

If the problem persists after trying these steps, please provide the first 10 lines of the end.ass file located in the cache folder.
2026-02-20 10:17:30.138 python3[1254:13417] +[CATransaction synchronize] called within transaction
/Users/name/pyvideotrans/.venv/lib/python3.10/site-packages/zhconv/zhconv.py:33: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools
These are warning messages, not errors. They do not affect the functionality of the software and are not the cause of the "Error embedding final dub" issue.

The first warning (+[CATransaction synchronize] called within transaction) is related to macOS's graphical framework and is harmless.

The second warning is about the pkg_resources module being deprecated in a dependency (zhconv). This is also harmless and does not impact core functions.

The actual error remains the FFmpeg subtitle parsing failure from the previous logs. Please follow the troubleshooting steps provided in the previous response, specifically the end.ass subtitle file checks and the workaround of disabling hardcoded subtitles.
What is the best software to open an end.ass file?
The best software to open and inspect an .ass (Advanced SubStation Alpha) subtitle file is a plain text editor. Any basic text editor will work because .ass files are plain text files with specific formatting codes.
Recommended options:
- Any plain text editor, such as TextEdit (built into macOS) or Visual Studio Code.
- Command-line tools such as cat, less, or nano in the Terminal to view the file.

How to open it: In Terminal, navigate to the folder containing end.ass and run cat end.ass or less end.ass to view its content.

What to look for:
Open the file and check the first few lines. A valid .ass file should start with a section like this:
[Script Info]
ScriptType: v4.00+
PlayResX: 384
PlayResY: 288
...
[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
Style: Default,Arial,20,&H00FFFFFF,&H000000FF,&H00000000,&H00000000,0,0,0,0,100,100,0,0,1,2,2,2,10,10,10,1
...
[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:00:01.18,0:00:03.85,Default,,0,0,0,,Hello world

If the file is empty, contains garbled characters, or lacks these standard headers, it is corrupted or invalid, which would cause the FFmpeg error you encountered.
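To check these headers programmatically rather than by eye, the layout above can be summarized with a rough sketch like this (real .ass parsers are more lenient; hypothetical helper, not pyvideotrans code):

```python
def summarize_ass(text: str) -> dict:
    """Rough structural summary of an .ass subtitle file:
    which standard sections are present, and how many Dialogue lines."""
    lines = [ln.strip() for ln in text.splitlines()]
    return {
        "has_script_info": "[Script Info]" in lines,
        "has_styles": "[V4+ Styles]" in lines,
        "has_events": "[Events]" in lines,
        "dialogue_lines": sum(ln.startswith("Dialogue:") for ln in lines),
    }

sample = """[Script Info]
ScriptType: v4.00+

[V4+ Styles]
Format: Name, Fontname, Fontsize

[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
Dialogue: 0,0:00:01.18,0:00:03.85,Default,,0,0,0,,Hello world
"""
print(summarize_ass(sample))
```

A result with all three sections present and dialogue_lines greater than 0 matches a valid file; dialogue_lines == 0 would point at an empty [Events] section.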
here are the entries in end.ass
[Script Info]
; Script generated by FFmpeg/Lavc62.11.100
ScriptType: v4.00+
PlayResX: 384
PlayResY: 288
ScaledBorderAndShadow: yes
YCbCr Matrix: None
[V4+ Styles]
Format: Name, Fontname, Fontsize, PrimaryColour, SecondaryColour, OutlineColour, BackColour, Bold, Italic, Underline, StrikeOut, ScaleX, ScaleY, Spacing, Angle, BorderStyle, Outline, Shadow, Alignment, MarginL, MarginR, MarginV, Encoding
Style: Default,Arial,16,&Hffffff,&Hffffff,&H0,&H0,0,0,0,0,100,100,0,0,1,1,0,2,10,10,10,1
[Events]
Format: Layer, Start, End, Style, Name, MarginL, MarginR, MarginV, Effect, Text
The end.ass file structure is valid but incomplete. The [Events] section is present but contains no actual dialogue entries (lines starting with Dialogue:). An empty events section is likely causing FFmpeg to fail when processing the subtitle filter.

Please follow these steps to resolve the issue:

1. Check the Translation/Subtitle Generation Step: The empty events section indicates that either:
- The source video had no speech to recognize.
- The speech recognition (ASR) step failed to produce any text.
- The translation step produced an empty result.
- The text-to-speech (TTS) step may have succeeded without corresponding subtitles.
Review the logs from earlier stages (before the embedding error) to see whether the ASR or translation steps produced any output. Look for lines containing "原始语言字幕行数" (source-language subtitle lines) or "目标语言字幕行数" (target-language subtitle lines). The count should be greater than 0.
2. Enable Verbose Logging and Retry:
- In the software's main interface, go to Settings -> Advanced Options.
- Ensure "Show detailed execution logs" or similar is enabled.
- Clear any previous cache by checking the "Clean generated files" option on the main window.
- Process the video again and watch the log output in real time, paying attention to whether subtitle lines are generated during the ASR (语音识别) and translation (字幕翻译) phases.
3. Provide Earlier Logs: If the issue persists, please provide the log entries from the very beginning of the processing session for this specific video, focusing on the parts where ASR and translation are executed. The logs you have shared so far start from the translation phase, which already shows a successful translation (11 lines). The problem is that these translated lines are not being written into the end.ass file's [Events] section.
4. Temporary Workaround: You can manually create a simple .srt subtitle file and use the "Merge video and subtitles" tool instead of relying on the automatic embedding during translation. However, this does not address the root cause within the main translation pipeline.

Please see the related documentation: "Why the recognized subtitles vary in length and look messy" and "Translation results contain blank lines or output the prompt text".
No, the end.ass is complete and does contain all the dialogue; I just did not include it here. I can paste the whole thing if you need it.
No, you do not need to paste the entire file. The fact that the [Events] section contains dialogue entries confirms the subtitle file is valid. The root cause of the error is therefore specific to how FFmpeg on your macOS system is interpreting the file path or the filter syntax.

The error [AVFilterGraph] No option name near 'end.ass' strongly suggests a parsing issue with the subtitles=end.ass filter string. This is often caused by:
- A hidden character at the start of the .ass file (like a UTF-8 BOM).
- A bug or incompatibility in your FFmpeg build's subtitles filter on macOS.

Please follow these steps to diagnose and resolve the issue:
1. Convert Subtitle Encoding (Most Likely Fix):
Open the end.ass file in a proper code editor like Visual Studio Code and re-save it with plain UTF-8 encoding (without BOM).
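To check for a hidden BOM without relying on an editor's display, you can inspect the file's raw bytes; a small sketch (the file path is this thread's end.ass, adjust as needed):

```python
def has_utf8_bom(raw: bytes) -> bool:
    # The UTF-8 BOM is the byte sequence EF BB BF at the start of the data.
    return raw.startswith(b"\xef\xbb\xbf")

def strip_utf8_bom(raw: bytes) -> bytes:
    # Drop the 3 BOM bytes if present; otherwise return the data unchanged.
    return raw[3:] if has_utf8_bom(raw) else raw

# Example: a BOM-prefixed .ass header, as some editors save it.
data = "\ufeff[Script Info]".encode("utf-8")
print(has_utf8_bom(data))                     # → True
print(strip_utf8_bom(data).decode("utf-8"))   # → [Script Info]
```

To apply this to the real file, read it with open('end.ass', 'rb'), run strip_utf8_bom on the bytes, and write the result back.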
2. Test the FFmpeg Command Directly:
Open Terminal and navigate to the cache folder containing novoice.mp4 and end.ass.
Run the exact FFmpeg command from the log, but simplify it to test the subtitle filter:
ffmpeg -i novoice.mp4 -vf "subtitles=end.ass" -frames:v 1 test_output.png

This command tries to overlay the subtitles on a single frame. If it fails with the same error, the issue is confirmed to be with your FFmpeg/subtitle-file combination.
3. Use an Absolute Path (Workaround):
In the Terminal, get the absolute path of end.ass by running pwd in its directory and then appending /end.ass.
Test the filter with the absolute path (replace /full/path/to/end.ass):
ffmpeg -i novoice.mp4 -vf "subtitles=/full/path/to/end.ass" -frames:v 1 test_output.png

If this works, the issue is with relative path resolution in the software's generated command.
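The same single-frame test can be scripted, which avoids shell-quoting mistakes by passing FFmpeg an argument list. A sketch using the thread's placeholder paths (running it requires FFmpeg on PATH):

```python
import shlex
import subprocess

def build_test_cmd(video: str, ass_path: str, out_png: str) -> list[str]:
    """Build the single-frame subtitle-overlay test as an argv list,
    so the shell never re-splits the path."""
    return ["ffmpeg", "-hide_banner", "-y",
            "-i", video,
            "-vf", f"subtitles={ass_path}",
            "-frames:v", "1",
            out_png]

cmd = build_test_cmd("novoice.mp4", "/full/path/to/end.ass", "test_output.png")
print(" ".join(shlex.quote(c) for c in cmd))
# To actually run it and inspect FFmpeg's stderr:
# res = subprocess.run(cmd, capture_output=True, text=True)
# print(res.returncode, res.stderr[-2000:])
```

If the scripted run reproduces the "No option name near ..." error, that confirms it is not a shell-quoting issue.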
4. Final Workaround - Disable Hardcoded Subtitles:
In the pyVideoTrans GUI, before processing, uncheck "Embed subtitles into video".
This will generate a video file (likely .mkv) with separate, soft subtitle tracks instead of burning them in. You can use the standalone "Merge video and subtitles" tool later if hardcoded subtitles are essential.
If step 2 or 3 provides a specific error message, please share it.
I can confirm that the issue is with the FFmpeg/subtitle combination.
I unchecked (disabled) the hardcoded subtitles for the video, and there is no more error.
However, my final video output was not translated; it is in the same language as the source file. Do you know the issue?
The issue is that disabling "Embed subtitles into video" only prevents burning subtitles into the video frames. The core translation process (speech recognition, translation, and audio dubbing) should still occur. If your final video has the original audio, it indicates a failure in one of the earlier stages: the speech recognition (ASR), translation, or text-to-speech (TTS/dubbing) step was skipped or failed.

Please follow these steps to diagnose the issue:

1. **Check the Processing Logs**: Look at the detailed logs from your recent successful run (without the embedding error). Verify that these key stages completed:
* **ASR Stage**: Look for lines containing `[DEBUG] [字幕识别]` (subtitle recognition) or `[DEBUG] [语音识别]` (speech recognition). Check that the line count is greater than 0 (e.g., `原始行数:11`, i.e., source lines: 11).
* **Translation Stage**: Look for lines containing `[DEBUG] [字幕翻译]` (subtitle translation). It should also show a line count.
* **TTS/Dubbing Stage**: Look for lines containing `[DEBUG] [文本转语音]` (text-to-speech) or mentions of your TTS channel (e.g., `Edge-TTS`, `F5-TTS`).

2. **Verify Output Files**: In the output folder (`_video_out`), for your processed video, there should be intermediate files. Look for:
* A `.srt` file with the **translated** subtitles.
* A separate audio file (e.g., `.m4a`, `.wav`) that contains the **newly generated dubbing**.
If these files exist and contain the correct translation/dubbing, then the software successfully created them but failed to replace the original audio in the final video.

3. **Common Causes**:
* **TTS Channel Failure**: If the TTS step failed silently (e.g., Edge-TTS returned empty audio), the software might fall back to using the original audio. Check the logs for any TTS errors.
* **Configuration Error**: Ensure that in the main interface, the **"Target Language"** is set correctly (e.g., `Filipino`/`tl`), and a valid **TTS (dubbing, 配音)** channel is selected and configured (e.g., `Edge-TTS` with a proper voice).

4. **Enable Single Video Interactive Mode**: For your next test, process **only one video** and ensure the **"Single Video Interactive Mode"** checkbox is enabled. This will pause at each critical stage (after ASR, after translation, after dubbing) so you can see and confirm the intermediate results.

**Action Required**: Please provide the relevant sections of the log file from the run where the video was processed without hardcoded subtitles, focusing on the parts showing the ASR, Translation, and TTS stages.

I need to do further tests on these stages...
what is the best ASR channel to use?
what is the best Trans channel to use?
what is the best TTS/Dubb channel to use?
Based on the core knowledge base, here are the recommended channels for the best balance of quality, speed, and cost.

ASR (Speech Recognition):
- Primary Recommendation: faster-whisper (local). Use the large-v2 or large-v3 model for the best Chinese/English accuracy.
- Alternative (Online/Paid): OpenAI Whisper API or Alibaba Bailian (阿里百炼) Qwen3-ASR.

Translation:
- Primary Recommendation: DeepSeek (via OpenAI-compatible API).
- Configuration: API endpoint https://api.deepseek.com, model deepseek-chat.
- Alternative (Local/Free): Ollama with qwen:7b or deepseek-coder:7b.

TTS/Dubbing:
- Primary Recommendation for Free/Preview: Edge-TTS.
- Primary Recommendation for High-Quality/Cloning: F5-TTS (local).

For a user with an NVIDIA GPU seeking a free, high-quality pipeline:
- ASR: faster-whisper (large-v3) with CUDA.
- Translation: DeepSeek (API).
- TTS: Edge-TTS (for ease) or F5-TTS (for cloning/quality).

For a user without a GPU or seeking maximum simplicity:
- ASR: OpenAI Whisper API (paid) or Google Speech (free, may require a proxy).
- Translation: DeepSeek (API).
- TTS: Edge-TTS.

Configure these channels in the software settings and run a short test video with "Single Video Interactive Mode" enabled. This lets you confirm the output at each stage (ASR, Translation, Dubbing) and isolate where any failure occurs.
so I can better test it based on your suggestion, how do I enable the "single video interactive mode"?
To enable Single Video Interactive Mode, follow these steps:

1. Select a Single Video: In the main interface, add your video to the task list, but ensure only one video is selected (highlighted).
2. Locate the Checkbox: Look for a checkbox labeled "Single Video Interactive Mode" or similar wording (e.g., "交互模式"). Its exact location may vary slightly by version, but it is typically found in one of these places:
- Main Button Area: On the right side of the main window, near the "Start" button.
- Advanced Options Panel: Inside the "Advanced Options" (高级选项) panel, often under a "System" (系统) tab.
3. Enable and Start: Check the box, then click the "Start" button to begin processing.

What it does: When enabled, the software will pause and open an editing window at three critical stages:
- After Speech Recognition (ASR): You can view and edit the source-language subtitles.
- After Translation: You can view and edit the translated subtitles.
- After Dubbing (TTS): You can listen to each line of the generated audio and adjust timings or regenerate specific lines.

This mode is the best way to verify exactly where in the pipeline your translation or dubbing is failing, as you will see the intermediate results directly.
I was able to identify, fix, and successfully run the ASR, translation, and TTS/dubbing stages.
Do you have voice cloning capability on TTS dubbing?
Reselect "Embedded Hard Subtitles". If it fails, do not close the software. Open the folder shown in the log, similar to cache_folder='/Users/name/pyvideotrans/tmp/1299/98ecd14c77' in self.cfg=TaskCfg(...); note that the last two folder levels differ from this example and change each time.
Then check if the end.ass file exists.
Yes, the end.ass file exists. I can confirm, based on earlier responses, that the issue is with the FFmpeg/subtitle combination. I have not resolved this yet; I want to try reinstalling FFmpeg, etc., whether it is related to my machine or not, but I can make it work with soft subtitles for now.
does pyVideoTrans have voice cloning feature on dubbing?
Open the /videotrans/task/trans_create.py file, find all occurrences of subtitles={subtitles_file}, and edit them to subtitles=filename='{subtitles_file}'.
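The effect of that one-line edit can be sketched as follows (a hypothetical illustration of just the filter string before and after; the actual code in trans_create.py builds a full FFmpeg command around it):

```python
def vf_before(subtitles_file: str) -> str:
    # Original form: the filtergraph parser may split a bare value
    # at special characters in the path.
    return f"subtitles={subtitles_file}"

def vf_after(subtitles_file: str) -> str:
    # Edited form per the instruction above: filename='...' makes the
    # whole quoted value the filename option, so the parser no longer
    # misreads parts of the path as further options.
    return f"subtitles=filename='{subtitles_file}'"

print(vf_before("end.ass"))   # → subtitles=end.ass
print(vf_after("end.ass"))    # → subtitles=filename='end.ass'
```

Note that a path containing a single quote would still need escaping even in the edited form.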
For voice cloning, use chatterbox or qwen-tts (local).
Thank you, I will update trans_create.py based on your instruction.