#1509 Error when enabling vocal/background-sound separation: UserWarning from torch\cuda\__init__.py:283 (Found GPU0 NVIDIA GeForce RTX 5060 Ti ...)


Error when enabling vocal/background-sound separation:
D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\cuda\__init__.py:283: UserWarning:

Found GPU0 NVIDIA GeForce RTX 5060 Ti which is of cuda capability 12.0.
Minimum and Maximum cuda capability supported by this version of PyTorch is
(5.0) - (9.0)

warnings.warn(
D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\cuda\__init__.py:304: UserWarning:

Please install PyTorch with a following CUDA
configurations:  12.8 13.0 following instructions at
https://pytorch.org/get-started/locally/

warnings.warn(matched_cuda_warn.format(matched_arches))
D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\cuda\__init__.py:326: UserWarning:
NVIDIA GeForce RTX 5060 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5060 Ti GPU with PyTorch,
......
, in forward
    input = module(input)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\conv.py", line 548, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\conv.py", line 543, in _conv_forward
    return F.conv2d(
torch.AcceleratorError: CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Traceback (most recent call last):
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\st.py", line 37, in uvr
    pre_fun._path_audio_(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\vr.py", line 108, in _path_audio_
    pred, X_mag, X_phase = inference(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\utils.py", line 94, in inference
    pred = _execute(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\utils.py", line 65, in _execute
    pred = model.predict(X_mag_window, aggressiveness)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 116, in predict
    h = self.forward(x_mag, aggressiveness)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 69, in forward
    self.stg1_low_band_net(x[:, :, :bandw]),
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 24, in __call__
    h, e1 = self.enc1(x)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\layers_123821KB.py", line 59, in __call__
    skip = self.conv1(x)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\layers_123821KB.py", line 26, in __call__
    return self.conv(x)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
    input = module(input)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\conv.py", line 548, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\conv.py", line 543, in _conv_forward
    return F.conv2d(
torch.AcceleratorError: CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

9 Replies

The problem is that your PyTorch build is not compatible with the RTX 5060 Ti. The warning shows the card has CUDA compute capability 12.0, while the installed PyTorch only ships kernels for compute capabilities 5.0 through 9.0.

Solutions:

  1. Force CPU processing (recommended as a temporary workaround)

    • Uncheck the "CUDA加速" (CUDA acceleration) option in the main window
    • Vocal separation will then run on the CPU; slower, but it works
  2. Update PyTorch (permanent fix)

    • Since you are running from source, PyTorch has to be updated to a build that supports the new GPU
    • On the command line, run: pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu121
    • This installs a CUDA 12.1 build of PyTorch for your RTX 5060 Ti
  3. Verify the installation (a small diagnostic script is sketched after this reply)

    • Restart the app after updating and try vocal separation again
    • Make sure the "CUDA加速" option is checked so GPU acceleration is used

Start with option 1 as a stopgap, then apply option 2 to fix the compatibility issue permanently.
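A quick way to confirm whether the installed wheel actually ships kernels for this card is a short diagnostic script. The sketch below is not part of pyVideoTrans and only prints what PyTorch itself reports; the expected values in the comments are assumptions based on the warnings above.

    # diagnostic sketch: run it inside the project's virtual environment
    import torch

    print("PyTorch:", torch.__version__)           # installed wheel version
    print("Built for CUDA:", torch.version.cuda)   # CUDA toolkit the wheel targets
    print("CUDA available:", torch.cuda.is_available())

    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))
        # the RTX 5060 Ti should report (12, 0) here, matching the sm_120 warning
        print("Capability:", torch.cuda.get_device_capability(0))
        # the GPU path only works if this list contains an sm_120 entry
        print("Compiled arch list:", torch.cuda.get_arch_list())

If the arch list at the end does not include sm_120, the wheel cannot execute kernels on this GPU no matter what torch.cuda.is_available() reports.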

PS D:\MyGithubClone\pyvideotrans> uv add torch torchaudio --index-url https://download.pytorch.org/whl/cu121
warning: Indexes specified via --index-url will not be persisted to the pyproject.toml file; use --default-index instead.
  × No solution found when resolving dependencies for split (markers: (python_full_version == '3.10.4' and
  │ platform_machine != 'aarch64' and sys_platform == 'linux') or (python_full_version == '3.10.4' and sys_platform == 'win32')):
  ╰─▶ Because absl-py was not found in the package registry and your project depends on absl-py==2.0.0, we can
      conclude that your project's requirements are unsatisfiable.
  help: If you want to add the package regardless of the failed resolution, provide the --frozen flag to skip locking and syncing.

The uv package manager hit a conflict while resolving dependencies; the error says it cannot find the absl-py==2.0.0 package. Proceed as follows:

Option 1: install with pip (recommended)
Run directly on the command line:

pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu121

Option 2: force the install with uv
If you want to stay with uv, add the --frozen flag to skip dependency resolution:

uv add torch torchaudio --index-url https://download.pytorch.org/whl/cu121 --frozen

Option 3: check the environment
Make sure your virtual environment contains all of pyVideoTrans's required dependencies (an interpreter check is sketched after this reply):

pip install -r requirements.txt

Then run option 1 again.

Option 1 is recommended, because pip is more reliable when handling a complex dependency like PyTorch. Restart the software once the install finishes.
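A common reason a pip install "does not take" is that pip and the application resolve to different Python environments. The sketch below is only an illustration (not project code), and the expected paths in the comments are assumptions about this particular setup.

    # sanity check: run it with the same interpreter that launches pyVideoTrans
    import sys
    import torch

    print("Interpreter:", sys.executable)        # expected somewhere under the project's venv
    print("torch loaded from:", torch.__file__)  # expected under ...\site-packages\torch\

If either path points outside the project's virtual environment, the new PyTorch went into a different Python and the app keeps loading the old build.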

D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\cuda\__init__.py:283: UserWarning:

Found GPU0 NVIDIA GeForce RTX 5060 Ti which is of cuda capability 12.0.
Minimum and Maximum cuda capability supported by this version of PyTorch is
(5.0) - (9.0)

warnings.warn(
D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\cuda\__init__.py:304: UserWarning:

Please install PyTorch with a following CUDA
configurations:  12.8 13.0 following instructions at
https://pytorch.org/get-started/locally/

warnings.warn(matched_cuda_warn.format(matched_arches))
D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\cuda\__init__.py:326: UserWarning:
NVIDIA GeForce RTX 5060 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5060 Ti GPU with PyTorch, please check t
......
stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Traceback (most recent call last):
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\st.py", line 37, in uvr
    pre_fun._path_audio_(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\vr.py", line 108, in _path_audio_
    pred, X_mag, X_phase = inference(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\utils.py", line 94, in inference
    pred = _execute(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\utils.py", line 65, in _execute
    pred = model.predict(X_mag_window, aggressiveness)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 116, in predict
    h = self.forward(x_mag, aggressiveness)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 69, in forward
    self.stg1_low_band_net(x[:, :, :bandw]),
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 24, in __call__
    h, e1 = self.enc1(x)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\layers_123821KB.py", line 59, in __call__
    skip = self.conv1(x)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\layers_123821KB.py", line 26, in __call__
    return self.conv(x)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
    input = module(input)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\conv.py", line 548, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\conv.py", line 543, in _conv_forward
    return F.conv2d(
torch.AcceleratorError: CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

The PyTorch update did not take effect; the old build is still in use. Fully uninstall the existing version and reinstall:

  1. Completely uninstall the existing PyTorch

    pip uninstall torch torchaudio torchvision
  2. Clean-install the new version

    pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu121
  3. Verify the installation (a standalone repro of the failing conv2d call is sketched after this reply)

    python -c "import torch; print(torch.__version__); print(torch.cuda.is_available())"

    It should print the new version number followed by True.

Temporary workaround
While sorting out the PyTorch update, you can uncheck the "CUDA加速" option in the main window so vocal separation runs on the CPU.

The RTX 5060 Ti is a very new GPU and needs a sufficiently new PyTorch build to be fully supported. If the steps above still fail, check whether your pip points at the correct virtual environment.
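Because every traceback above ends in F.conv2d, the failure can also be reproduced outside the app. The snippet below is a minimal, hypothetical repro (not pyVideoTrans code): on a broken install it raises the same "no kernel image is available" error, and after a successful reinstall it prints a tensor shape instead.

    # minimal repro of the failing operation from the traceback
    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 1, 8, 8, device="cuda")   # dummy input
    w = torch.randn(1, 1, 3, 3, device="cuda")   # dummy 3x3 kernel
    print(F.conv2d(x, w).shape)                  # torch.Size([1, 1, 6, 6]) when the GPU build works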

D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\cuda\__init__.py:235: UserWarning:
NVIDIA GeForce RTX 5060 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_61 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5060 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

warnings.warn(
self.proxy_str=None
D:\MyGithubClone\pyvideotrans\videotrans\separate\vr.py:32: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits
......
CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Traceback (most recent call last):
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\st.py", line 37, in uvr
    pre_fun._path_audio_(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\vr.py", line 108, in _path_audio_
    pred, X_mag, X_phase = inference(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\utils.py", line 94, in inference
    pred = _execute(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\utils.py", line 65, in _execute
    pred = model.predict(X_mag_window, aggressiveness)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 116, in predict
    h = self.forward(x_mag, aggressiveness)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 69, in forward
    self.stg1_low_band_net(x[:, :, :bandw]),
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 24, in __call__
    h, e1 = self.enc1(x)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\layers_123821KB.py", line 59, in __call__
    skip = self.conv1(x)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\layers_123821KB.py", line 26, in __call__
    return self.conv(x)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
    input = module(input)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\conv.py", line 554, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\conv.py", line 549, in _conv_forward
    return F.conv2d(
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

PyTorch still has not been updated to a build that supports your GPU. Please proceed as follows:

Immediate workaround: disable CUDA acceleration
Uncheck the "CUDA加速" option in the pyVideoTrans main window; vocal separation will then be forced onto the CPU (the idea is illustrated after this reply).

Full fix:

  1. Check the current PyTorch version

    python -c "import torch; print(torch.__version__)"

    If it still shows the old version, the update did not take effect.
  2. Completely uninstall and reinstall

    pip uninstall torch torchaudio torchvision -y
    pip cache purge
    pip install torch torchaudio --index-url https://download.pytorch.org/whl/cu121
  3. Verify the installation

    python -c "import torch; print(torch.cuda.is_available())"

    It should print True.

Important note: the RTX 5060 Ti is a very recently released GPU and needs a sufficiently new PyTorch build to be fully supported. If the steps above still fail, use CPU mode for the time being and wait for an officially compatible PyTorch release.
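For reference, the CPU workaround amounts to keeping the model and its inputs on the CPU device, roughly as in the sketch below. This is only an illustration of the idea, not how pyVideoTrans actually implements the "CUDA加速" checkbox, and the Conv2d layer merely stands in for the separation model.

    # illustration: force the work onto the CPU regardless of the installed CUDA build
    import torch

    device = torch.device("cpu")                              # the "CUDA加速 unchecked" case
    model = torch.nn.Conv2d(1, 1, kernel_size=3).to(device)   # stand-in for the separation model
    x = torch.randn(1, 1, 64, 64, device=device)
    print(model(x).shape)                                     # runs even when GPU kernels are missing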

D:\MyGithubClone\pyvideotrans\videotrans\separate\vr.py:32: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
cpk = torch.load(model_path, map_location="cpu")
D:\MyGithubClone
......
: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Traceback (most recent call last):
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\st.py", line 37, in uvr
    pre_fun._path_audio_(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\vr.py", line 108, in _path_audio_
    pred, X_mag, X_phase = inference(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\utils.py", line 94, in inference
    pred = _execute(
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\utils.py", line 65, in _execute
    pred = model.predict(X_mag_window, aggressiveness)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 116, in predict
    h = self.forward(x_mag, aggressiveness)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 69, in forward
    self.stg1_low_band_net(x[:, :, :bandw]),
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\nets_61968KB.py", line 24, in __call__
    h, e1 = self.enc1(x)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\layers_123821KB.py", line 59, in __call__
    skip = self.conv1(x)
  File "D:\MyGithubClone\pyvideotrans\videotrans\separate\lib_v5\layers_123821KB.py", line 26, in __call__
    return self.conv(x)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
    input = module(input)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\conv.py", line 554, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "D:\MyGithubClone\pyvideotrans.venv\lib\site-packages\torch\nn\modules\conv.py", line 549, in _conv_forward
    return F.conv2d(
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Your PyTorch build still does not support the RTX 5060 Ti. Please apply the following:

Immediate workaround:
Uncheck the "CUDA加速" option in the pyVideoTrans main window so vocal separation is forced onto the CPU. This is the most reliable fix for now (an alternative way to force CPU without the UI is sketched after this reply).

Root cause:
The RTX 5060 Ti is a very new GPU (compute capability 12.0) and requires a PyTorch build compiled for it; your current PyTorch install is most likely still too old.

Verify the current state:
Run on the command line:

python -c "import torch; print(f'PyTorch version: {torch.__version__}'); print(f'CUDA available: {torch.cuda.is_available()}')"

If the output shows an old version or that CUDA is unavailable, the PyTorch update failed.

Suggestion:
Since the RTX 5060 Ti is brand-new hardware, it may take some time for software compatibility to catch up. Using CPU mode for now is the safest option; processing is slower, but the feature works normally.
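If changing the setting in the UI is inconvenient, another generic way to force CPU processing is to hide the GPU from the process before PyTorch initializes. This is a standard CUDA environment-variable trick, not an official pyVideoTrans switch, and it assumes the app decides between GPU and CPU via torch.cuda.is_available().

    # hide all CUDA devices from this process; must run before torch is imported
    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = ""

    import torch
    print(torch.cuda.is_available())   # False, so torch code paths fall back to the CPU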
