#3928 TaskCfgSTS(is_cuda=False, uuid='85c5f4e549', cache_folder=None, target_dir='E:/PH/krenz', source_language=None, source_l


Trans Error [Local/Compatible AI] 'NoneType' object is not subscriptable
Traceback (most recent call last):
File "videotrans\translator\_base.py", line 102, in run
File "videotrans\translator\_base.py", line 165, in _run_srt
File "tenacity\__init__.py", line 338, in wrapped_f
File "tenacity\__init__.py", line 477, in call
File "tenacity\__init__.py", line 378, in iter
File "tenacity\__init__.py", line 421, in exc_check
tenacity.RetryError: RetryError[]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "videotrans\task\job.py", line 174, in run
File "videotrans\task\_translate_srt.py", line 45, in trans
File "videotrans\translator\__init__.py", line 970, in run
File "videotrans\translator\_base.py", line 105, in run
File "tenacity\__init__.py", line 480, in call
File "videotrans\translator\_localllm.py", line 71, in _item_task
TypeError: 'NoneType' object is not subscriptable
TaskCfgSTS(is_cuda=False, uuid='85c5f4e549', cache_folder=None, target_dir='E:/PH/krenz', source_language=None, source_language_code='Simplified Chinese', source_sub='E:/PH/krenz/4.srt', source_wav=None, source_wav_output=None, target_language=None, target_language_code='English', target_sub='E:/PH/krenz/4.English.srt', target_wav=None, target_wav_output=None, name='E:/PH/krenz/4.srt', noextname='4', basename='4.srt', ext='srt', dirname='E:/PH/krenz', shound_del_name=None, translate_type=8)


system:Windows-10-10.0.26200-SP0
version:v3.97
frozen:True
language:en
root_dir:E:/ytrans

1 Reply

This error occurs when using the "Compatible AI/Local Model" translation channel (translate_type=8) and the local LLM service (e.g., Ollama) is misconfigured or unreachable. The API call then returns None instead of a response object, and indexing that None value raises the TypeError shown in the traceback.
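The failure mode is easy to reproduce in isolation: indexing a None "response" raises exactly this TypeError. A minimal sketch (the dict shape mirrors an OpenAI-style chat-completions reply; this is illustrative, not the project's actual code):

```python
# Minimal reproduction: indexing None raises the same TypeError as the report.
# The dict layout is an assumed OpenAI-compatible /chat/completions reply.

def extract_text(resp):
    # videotrans fails the same way when the service returned None
    return resp["choices"][0]["message"]["content"]

ok = extract_text({"choices": [{"message": {"content": "hello"}}]})
print(ok)  # hello

try:
    extract_text(None)
except TypeError as e:
    print(e)  # 'NoneType' object is not subscriptable
```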

Solution Steps:

  1. Check Local LLM Service: Ensure your local LLM service (like Ollama, LM Studio) is running. The service URL (e.g., http://127.0.0.1:11434/v1) must be accessible.
  2. Verify Configuration: Go to Menu -> Translation Settings -> Local Large Model Settings. Confirm the API URL is correct (e.g., http://127.0.0.1:11434/v1 for Ollama) and the Model Name is exactly as required by your service (e.g., qwen:7b).
  3. Test the Connection: Open your browser and visit the API URL (e.g., http://127.0.0.1:11434/v1/models for Ollama) to see if it returns a valid JSON response. If not, your LLM service is not running correctly.
  4. Adjust Translation Settings: In Menu -> Tools -> Advanced Options -> Subtitle Translation Adjustment, try setting AI Translation Channel Batch Lines to a lower value (e.g., 5) and increase Pause Seconds After Translation (e.g., 3) to reduce request pressure.
  5. Check Logs: If the issue persists, examine the logs/ folder for more detailed error messages related to the API call.
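Step 3 can also be scripted. A small health check for an OpenAI-compatible endpoint, using only the standard library (the URL and model name below are examples, not guaranteed values for your setup):

```python
import json
import urllib.error
import urllib.request

def parse_models(payload):
    # OpenAI-style payload: {"object": "list", "data": [{"id": "..."}, ...]}
    return [m.get("id") for m in payload.get("data", [])]

def check_llm_endpoint(base_url="http://127.0.0.1:11434/v1", timeout=5):
    """Fetch <base_url>/models and return the model ids, or None if unreachable."""
    url = base_url.rstrip("/") + "/models"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as r:
            return parse_models(json.load(r))
    except (urllib.error.URLError, OSError, ValueError):
        return None  # service down, wrong URL, or non-JSON reply

# Example usage (needs a running service):
# print(check_llm_endpoint())  # e.g. ['qwen:7b'] if Ollama serves that model
```

If this returns None, the software cannot reach the service either, and fixing that is the first priority.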

Common Causes:

  • The local LLM service is not started.
  • The API URL or model name in the software settings is incorrect.
  • The LLM service does not support the OpenAI-compatible API format.
  • The request to the LLM service timed out or returned an invalid response.
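All four causes surface as the same subscript error because the response is indexed without validation. A defensive pattern that turns a dead service into an actionable message (hypothetical helper, not videotrans' actual code):

```python
# Validate the reply before indexing, so a dead or misconfigured service
# produces a clear error instead of "'NoneType' object is not subscriptable".
# (Hypothetical helper; names are not from the videotrans codebase.)

class LLMServiceError(RuntimeError):
    pass

def safe_extract(resp):
    if not resp or not resp.get("choices"):
        raise LLMServiceError(
            "Empty reply from local LLM - check that the service is running "
            "and that the API URL and model name are correct."
        )
    return resp["choices"][0]["message"]["content"]
```

Raising a dedicated exception here also gives a retry wrapper like tenacity something meaningful to retry on, instead of wrapping a bare TypeError in a RetryError as seen in the traceback.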

