#2506 Is it possible to add a prompt in the free mode?


Is it possible to add a prompt in the free mode?


system:Windows-10-10.0.22631-SP0
version:v3.92
frozen:True
language:en
root_dir:D:/win-pyvideotrans-v3.92

7 Replies

Yes, but only with certain channels. pyVideoTrans supports custom prompts for AI-based translation channels (DeepSeek, Gemini, OpenAI, etc.) that you configure with your own API keys. The built-in, no-cost services that "free mode" usually refers to (Google Translate, Edge-TTS, and so on) do not support custom prompts.

To use a custom prompt for translation, you need to:

  1. Configure an AI translation channel (e.g., DeepSeek, OpenAI, Gemini) in the Settings by entering your API key.
  2. In the main interface, select that AI channel as your Translation method.
  3. Go to Menu -> Tools -> Options -> Translation Settings.
  4. Find the text box labeled "AI Translation Prompt" or similar. Here, you can edit the system prompt that instructs the AI how to translate.

For example, you could add a prompt like: "Translate accurately into Brazilian Portuguese, maintaining a formal tone."

Note: The built-in free services (Google, Microsoft, etc.) do not have a prompt customization option, as they use fixed, non-AI models.
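To make the difference concrete, here is a minimal sketch of what an AI translation channel typically does with such a prompt: it is sent as the system message of an OpenAI-compatible chat request. This is an illustration only, not pyVideoTrans's internal code; the endpoint, API key, and model name below are placeholders you would replace with your own.

    # Illustrative sketch only -- not pyVideoTrans's internal code.
    import requests

    API_URL = "https://api.deepseek.com/v1/chat/completions"  # any OpenAI-compatible endpoint (example)
    API_KEY = "sk-..."                                        # your own key

    custom_prompt = ("Translate accurately into Brazilian Portuguese, "
                     "maintaining a formal tone.")
    subtitle_text = "Hello everyone, welcome back to the channel."

    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "deepseek-chat",  # example model name
            "messages": [
                {"role": "system", "content": custom_prompt},  # <- your custom prompt
                {"role": "user", "content": subtitle_text},    # <- the subtitle text
            ],
        },
        timeout=60,
    )
    print(resp.json()["choices"][0]["message"]["content"])

Fixed-engine services like Google Translate have no equivalent of that system message, which is why they cannot take instructions.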

Please refer to the documentation for more details.

I want the text to be shorter, with fewer words. I use deepgram.com plus Google (free). With that setup, is it possible to add a prompt to make the text more summarized?

No, it is not possible to add a prompt to shorten or summarize the text when using the combination Deepgram.com (for speech recognition) + Google Translate (free, for translation).

Here's why and what you can do:

Why it's not possible:

  1. Deepgram.com is only used for Speech Recognition (ASR). It converts audio to text. It does not summarize or translate.
  2. Google Translate (free) is a traditional machine translation engine. It translates the text it receives as-is and does not accept custom prompts to change its behavior (such as summarizing).
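As a rough sketch of point 1, this is approximately what the Deepgram step looks like, based on Deepgram's public REST API (pyVideoTrans may call it differently internally). Note that the request has nowhere to put a prompt or a "summarize" option; the key and file name are placeholders.

    # Rough sketch based on Deepgram's public REST API; key and file are placeholders.
    import requests

    DEEPGRAM_KEY = "your-deepgram-key"

    with open("audio.wav", "rb") as audio:
        resp = requests.post(
            "https://api.deepgram.com/v1/listen",
            headers={"Authorization": f"Token {DEEPGRAM_KEY}",
                     "Content-Type": "audio/wav"},
            data=audio,
            timeout=300,
        )

    # The response is only a transcript -- there is no prompt or summarization
    # parameter anywhere in this request.
    result = resp.json()
    print(result["results"]["channels"][0]["alternatives"][0]["transcript"])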

Your solution: Use an AI Translation Channel

To achieve summarized translation, you must use an AI-based translation channel that supports custom prompts.

Step-by-step guide:

  1. Configure an AI Translator:

    • Go to Menu -> Tools -> Options -> Translation Settings.
    • Choose a service like DeepSeek, Gemini, or OpenAI ChatGPT.
    • Enter your API Key for that service.
    • In the "AI Translation Prompt" box, write your instruction. For example:
      Please translate the following subtitles into {target_language}. Also, summarize the content to make it shorter and more concise, using fewer words while keeping the core meaning.
    • Save the settings.
  2. Use it in your workflow:

    • In the main window, for "Speech Recognition", you can keep using Deepgram.com (or any other recognizer).
    • For "Translation", select the AI channel you just configured (e.g., DeepSeek, Gemini).
    • Process your video. The AI will now both translate and summarize the text according to your prompt.

Summary of your options:

  • Deepgram + Google Translate: Fast, free, but no summarization.
  • Deepgram + AI Translator (with custom prompt): Can summarize, but requires an API key (may have costs).

Please refer to the documentation.

What about LibreTranslate (local)?

Yes, you can use LibreTranslate (local) with pyVideoTrans. Adding a custom prompt to it, however, comes with a very important caveat.

How to use LibreTranslate:

  1. Run LibreTranslate Locally: You must first download and run the LibreTranslate server on your own computer. You cannot use a public LibreTranslate API endpoint from within pyVideoTrans's standard interface.
  2. Configure in pyVideoTrans:

    • Go to Menu -> Tools -> Options -> Translation Settings.
    • Find the LibreTranslate section.
    • In the "API Address" field, enter your local server address, for example: http://127.0.0.1:5000
    • Save the settings.
  3. Use it: In the main window's "Translation" dropdown, select LibreTranslate.
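If you want to confirm the local server is working before selecting it in pyVideoTrans, a quick test against LibreTranslate's documented /translate endpoint looks like this (adjust the host and port to your setup):

    # Quick sanity check for a local LibreTranslate server.
    import requests

    resp = requests.post(
        "http://127.0.0.1:5000/translate",
        json={
            "q": "Hello, how are you?",
            "source": "en",
            "target": "pt",
            "format": "text",
            # add an "api_key" field here if your server requires one
        },
        timeout=30,
    )
    print(resp.json()["translatedText"])

Note that the request body has no prompt field at all, which is exactly why standard LibreTranslate cannot be asked to summarize (see below).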

About Custom Prompts with LibreTranslate:

  • Standard LibreTranslate: The standard LibreTranslate model is a traditional Neural Machine Translation (NMT) model like Google Translate. It does not understand or accept custom prompts. It will ignore any prompt and just perform a direct translation.
  • LibreTranslate with Local LLM Backend: This is the key point. Recent versions of LibreTranslate allow you to configure it to use a local Large Language Model (LLM) as its translation engine (e.g., Llama, Mistral via Ollama or llama.cpp).
  • If you configure LibreTranslate to use a local LLM, then you can modify the prompt template that LibreTranslate sends to the LLM. You would need to edit LibreTranslate's configuration or its API request template to include instructions like "translate and summarize."
  • In pyVideoTrans, you would still just select "LibreTranslate." The custom prompt logic would be handled on the LibreTranslate server side, not within pyVideoTrans's settings.

Summary for your goal (shorter text):

  1. Not possible with standard LibreTranslate (NMT model).
  2. Possible if you set up a LibreTranslate server that uses a local LLM and modify its prompt template to ask for summarization.
  3. Easier Alternative: The most straightforward way within pyVideoTrans is to use a dedicated AI Translation channel (like DeepSeek, LocalLLM) where you can directly set the custom prompt in the software's settings.

Recommendation: If your goal is a summarized translation and you want to work locally, consider using pyVideoTrans's "LocalLLM" translation channel. This channel is designed to connect directly to a local LLM (like Ollama) and allows you to set the custom prompt directly in the software's interface.

Please refer to the documentation:
LibreTranslate Translation
Compatible AI/Local Model Translation

What about Ollama? And how do I add the prompt?

Yes, you can use Ollama with pyVideoTrans via the "LocalLLM" translation channel. This is the recommended method for using a local LLM with custom prompts.

Here is how to set it up and add your summarization prompt:

Step 1: Install and Run Ollama

  1. Download and install Ollama from ollama.com.
  2. Open a command prompt (Terminal) and pull a model suitable for translation. For example:

    ollama pull llama3.2:3b  # A smaller, faster model
    # or for better quality (requires more RAM/VRAM):
    # ollama pull qwen2.5:7b
  3. Ensure the Ollama service is running. It usually starts automatically.
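Optionally, you can verify from Python that Ollama is reachable and the model was pulled, using Ollama's documented /api/tags endpoint (the model name here matches the example above):

    # Check that Ollama is running and the model is available.
    import requests

    resp = requests.get("http://127.0.0.1:11434/api/tags", timeout=10)
    models = [m["name"] for m in resp.json().get("models", [])]
    print("Available models:", models)

    if not any(name.startswith("llama3.2") for name in models):
        print("Model not found -- run: ollama pull llama3.2:3b")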

Step 2: Configure pyVideoTrans

  1. Open pyVideoTrans.
  2. Go to Menu -> Tools -> Options -> Translation Settings.
  3. Find the section for "LocalLLM (Ollama/LMStudio)".
  4. API Address: Enter http://127.0.0.1:11434 (Ollama's default address).
  5. Model Name: Enter the exact name of the model you pulled (e.g., llama3.2:3b or qwen2.5:7b).
  6. API Key: Leave this blank (not needed for local Ollama).

Step 3: Add Your Custom Prompt (Key Step)

In the same "Translation Settings" window, find the large text box labeled "AI Translation Prompt" or "LLM Prompt Template".

Replace the existing content with your custom prompt that instructs the model to translate and summarize. For example:

Please translate the following subtitles from {source_language} to {target_language}. 
IMPORTANT: Also summarize the content. Make the translated text shorter, using fewer words while preserving the core meaning and key information. 
Output only the translated and summarized text, nothing else.

Subtitles to translate:
{text}

Variables explained:

  • {source_language}: Will be auto-filled with the source language name.
  • {target_language}: Will be auto-filled with the target language name.
  • {text}: Will be auto-filled with the subtitle text to translate.
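To illustrate what happens with this template (pyVideoTrans handles it internally, so this is only a sketch with example values): the placeholders are filled in and the finished prompt is sent to the local model, here via Ollama's documented /api/chat endpoint.

    # Sketch of template filling plus a request to Ollama; example values only.
    import requests

    template = (
        "Please translate the following subtitles from {source_language} to {target_language}.\n"
        "IMPORTANT: Also summarize the content. Make the translated text shorter, "
        "using fewer words while preserving the core meaning and key information.\n"
        "Output only the translated and summarized text, nothing else.\n\n"
        "Subtitles to translate:\n{text}"
    )

    prompt = template.format(
        source_language="English",
        target_language="Brazilian Portuguese",
        text="Hello everyone, today I will show you step by step how to install the program.",
    )

    resp = requests.post(
        "http://127.0.0.1:11434/api/chat",
        json={
            "model": "llama3.2:3b",  # the model you pulled
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=300,
    )
    print(resp.json()["message"]["content"])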

Step 4: Use It

  1. Save the settings.
  2. In the main window:

    • Speech Recognition: Choose your method (e.g., Deepgram.com).
    • Translation: Select "LocalLLM" from the dropdown list.
    • Target Language: Choose your language (e.g., Portuguese).
  3. Process your video. The LLM will now translate and summarize according to your prompt.

Important Notes:

  • Performance: Local LLMs are slower than online APIs. Smaller models (3B, 7B) are faster but may have lower translation/summarization quality.
  • Prompt Clarity: Be very clear in your prompt. The model must be instructed to "output only the translated text", otherwise it might add explanations.
  • Testing: It's a good idea to use the "Translate SRT" tool in the toolbox first to test your prompt and model on a small SRT file before processing a full video.
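For that last point, a small hypothetical helper can tell you whether the prompt is actually shortening the text: count the words in the original SRT and in the summarized SRT produced by the "Translate SRT" tool and compare. The file names are just examples.

    # Compare word counts between an original SRT and the summarized output.
    def srt_word_count(path: str) -> int:
        words = 0
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                # Skip blank lines, cue numbers and "00:00:01,000 --> ..." timing lines.
                if not line or line.isdigit() or "-->" in line:
                    continue
                words += len(line.split())
        return words

    print("original  :", srt_word_count("original.srt"))
    print("summarized:", srt_word_count("translated_summarized.srt"))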

This method gives you full control over the translation style and instructions using your local, private LLM.

Please refer to the documentation:
Compatible AI/Local Model Translation
