Yes, you can use Ollama with pyVideoTrans via the "LocalLLM" translation channel. This is the recommended method for using a local LLM with custom prompts.
Here is how to set it up and add your summarization prompt:
Step 1: Install and Run Ollama
- Download and install Ollama from ollama.com.
- Open a command prompt (Terminal) and pull a model suitable for translation. For example:
ollama pull llama3.2:3b # A smaller, faster model
# or for better quality (requires more RAM/VRAM):
# ollama pull qwen2.5:7b
- Ensure the Ollama service is running. It usually starts automatically.
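Before moving on, you can confirm the server is actually listening. The sketch below queries Ollama's model-listing endpoint (`/api/tags`); the default port 11434 assumes a stock install with no `OLLAMA_HOST` override.

```python
import json
import urllib.request
import urllib.error

def ollama_is_running(base_url: str = "http://127.0.0.1:11434") -> bool:
    """Return True if the Ollama server answers on its model-listing endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            data = json.load(resp)
            # A healthy server returns a JSON object with a "models" list.
            return isinstance(data.get("models"), list)
    except (urllib.error.URLError, OSError, ValueError):
        return False
```

If this returns False, start the server with `ollama serve` (or launch the desktop app) and check again.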
Step 2: Configure pyVideoTrans
- Open pyVideoTrans.
- Go to Menu -> Tools -> Options -> Translation Settings.
- Find the section for "LocalLLM (Ollama/LMStudio)".
- API Address: Enter http://127.0.0.1:11434 (Ollama's default address).
- Model Name: Enter the exact name of the model you pulled (e.g., llama3.2:3b or qwen2.5:7b).
- API Key: Leave this blank (not needed for local Ollama).
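Under the hood, a local-LLM channel like this sends requests to Ollama's /api/generate endpoint. The field names below ("model", "prompt", "stream") are Ollama's documented API; how pyVideoTrans assembles the payload internally is an assumption, shown here only to clarify why the Model Name must match exactly.

```python
def build_generate_payload(model: str, prompt: str) -> dict:
    """Sketch of a minimal Ollama /api/generate request body (assumed shape)."""
    return {
        "model": model,    # must match the pulled model exactly, e.g. "llama3.2:3b"
        "prompt": prompt,  # the fully filled-in translation prompt
        "stream": False,   # request one complete JSON response instead of a stream
    }

payload = build_generate_payload("llama3.2:3b", "Translate: Hello")
```

A typo in the model name (e.g. `llama3.2` instead of `llama3.2:3b`) makes Ollama return a "model not found" error, which is the most common setup mistake.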
Step 3: Add Your Custom Prompt (Key Step)
In the same "Translation Settings" window, find the large text box labeled "AI Translation Prompt" or "LLM Prompt Template".
Replace the existing content with your custom prompt that instructs the model to translate and summarize. For example:
Please translate the following subtitles from {source_language} to {target_language}.
IMPORTANT: Also summarize the content. Make the translated text shorter, using fewer words while preserving the core meaning and key information.
Output only the translated and summarized text, nothing else.
Subtitles to translate:
{text}
Variables explained:
- {source_language}: Will be auto-filled with the source language name.
- {target_language}: Will be auto-filled with the target language name.
- {text}: Will be auto-filled with the subtitle text to translate.
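To illustrate how those placeholders get filled, here is the substitution in plain Python. `fill_prompt` is a hypothetical helper, not a pyVideoTrans function; only the placeholder names come from the template above.

```python
# The template from Step 3, with the three documented placeholders.
PROMPT_TEMPLATE = (
    "Please translate the following subtitles from {source_language} "
    "to {target_language}.\n"
    "IMPORTANT: Also summarize the content. Make the translated text shorter, "
    "using fewer words while preserving the core meaning and key information.\n"
    "Output only the translated and summarized text, nothing else.\n\n"
    "Subtitles to translate:\n{text}"
)

def fill_prompt(source_language: str, target_language: str, text: str) -> str:
    """Substitute the three placeholders, as the app does before each request."""
    return PROMPT_TEMPLATE.format(
        source_language=source_language,
        target_language=target_language,
        text=text,
    )

prompt = fill_prompt("English", "Portuguese", "Hello, world.")
```

Because substitution uses the literal placeholder names, keep `{source_language}`, `{target_language}`, and `{text}` spelled exactly as shown in your custom prompt.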
Step 4: Use It
- Save the settings.
In the main window:
- Speech Recognition: Choose your method (e.g., Deepgram.com).
- Translation: Select "LocalLLM" from the dropdown list.
- Target Language: Choose your language (e.g., Portuguese).
- Process your video. The LLM will now translate and summarize according to your prompt.
Important Notes:
- Performance: Local LLMs are slower than online APIs. Smaller models (3B, 7B) are faster but may have lower translation/summarization quality.
- Prompt Clarity: Be explicit in your prompt. Instruct the model to "output only the translated text"; otherwise it may prepend explanations or commentary that end up in your subtitles.
- Testing: It's a good idea to use the "Translate SRT" tool in the toolbox first to test your prompt and model on a small SRT file before processing a full video.
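For such a test run, it helps to pull just the text lines out of a small SRT file so you can eyeball what the model will receive. This is a deliberately simplified parser for illustration (no handling of multi-line styling tags or BOMs):

```python
def srt_texts(srt: str) -> list[str]:
    """Extract subtitle text lines from a simple SRT string."""
    texts = []
    for block in srt.strip().split("\n\n"):
        lines = block.splitlines()
        # Block layout: index, timestamp line, then one or more text lines.
        if len(lines) >= 3:
            texts.append(" ".join(lines[2:]))
    return texts

sample = """1
00:00:01,000 --> 00:00:03,000
Hello and welcome.

2
00:00:03,500 --> 00:00:06,000
Today we look at local models.
"""
```

Feeding two or three such cues through the "Translate SRT" tool is enough to reveal whether the model obeys the "output only" instruction.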
This method gives you full control over the translation style and instructions using your local, private LLM.
Please refer to the documentation:
Compatible AI/Local Model Translation