#922 ValueError: Requested int8_bfloat16 compute type, but the target device or backend do not support efficient int8_bfloat16


Traceback (most recent call last):

File "videotrans\process\_overall.py", line 44, in run

File "faster_whisper\transcribe.py", line 647, in __init__

ValueError: Requested int8_bfloat16 compute type, but the target device or backend do not support efficient int8_bfloat16 computation.

=====

Windows-10-10.0.22631-SP0

version:v3.81

frozen:True

language:en


This reply was generated by DeepSeek
The error is caused by an incompatible CUDA compute type setting in the faster-whisper speech recognition model: your GPU does not support the int8_bfloat16 compute type that is currently selected.
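For context, the selected value is ultimately passed as the `compute_type` argument of faster-whisper's `WhisperModel`. Below is a minimal sketch of a defensive fallback; the preference order, the model name, and the `pick_compute_type` helper are illustrative, not pyVideoTrans's actual code:

```python
def pick_compute_type(preferred, supported):
    """Return the first compute type from `preferred` that the device
    supports, falling back to float32, which every backend handles."""
    for ct in preferred:
        if ct in supported:
            return ct
    return "float32"

# With faster-whisper / ctranslate2 installed, usage would look like:
#   import ctranslate2
#   from faster_whisper import WhisperModel
#   supported = ctranslate2.get_supported_compute_types("cuda")
#   model = WhisperModel("base", device="cuda",
#                        compute_type=pick_compute_type(
#                            ["int8_bfloat16", "int8", "float32"], supported))

# On a GPU without bfloat16 support, the helper skips int8_bfloat16:
print(pick_compute_type(["int8_bfloat16", "int8", "float32"],
                        {"int8", "float16", "float32"}))  # int8
```

This is essentially what the manual fix below does by hand: it replaces an unsupported compute type with one the device can actually run.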

Here's how to fix this issue:

1. Open the software and go to the top menu bar.

2. Select "Tools/Options" → "Advanced Options".

3. In the advanced settings window, scroll down to find the "faster/openai语音识别调整" (faster/openai speech recognition settings) section.

4. Look for the "CUDA数据类型" (CUDA data type) setting.

5. Change this setting from "int8_bfloat16" to either "int8" or "float32".

  • int8: Uses less resources, faster speed, but lower accuracy
  • float32: Uses more resources, slower speed, but higher accuracy

6. Click "OK" to save the changes.

7. Try running your video translation task again.

This should resolve the computation type compatibility issue with your GPU. If you continue to experience problems, please ensure you have the latest NVIDIA drivers installed and that CUDA is properly configured on your system.
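If you want to confirm what your hardware actually supports before changing the setting, ctranslate2 (the inference backend faster-whisper is built on) can report the supported compute types per device. A hedged sketch, assuming ctranslate2 is importable in your environment:

```python
def supported_compute_types():
    """Return {device: set of compute types} as reported by ctranslate2,
    or an empty dict if ctranslate2 is not installed."""
    try:
        import ctranslate2
    except ImportError:
        return {}
    result = {}
    for device in ("cpu", "cuda"):
        try:
            result[device] = set(ctranslate2.get_supported_compute_types(device))
        except (RuntimeError, ValueError):
            result[device] = set()  # e.g. no CUDA device present
    return result

print(supported_compute_types())
```

If "int8_bfloat16" is missing from the "cuda" entry, the ValueError above is expected, and switching to "int8" or "float32" is the right fix.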

If the issue persists, you can also try temporarily disabling "CUDA加速" (CUDA acceleration) as a workaround, though this will significantly slow down the speech recognition process.

Can I use float16? What's the difference?

This reply was generated by Gemini
Thank you for your question!

In pyVideoTrans's Advanced Options, the "CUDA数据类型" (CUDA data type) setting under the "faster/openai语音识别调整" (faster/openai speech recognition settings) section currently offers the following types, according to the core knowledge base:

  • int8: uses fewer resources and runs faster, but with relatively lower accuracy.
  • float32: uses more resources and runs slower, but with the highest accuracy.
  • int8_bfloat16: a mixed-precision type; the error you encountered occurred precisely because your device does not support, or cannot efficiently handle, this type.

About float16:

The core knowledge base does not list float16 as a standalone CUDA data type option. In general, float16 (half-precision floating point) is a data type commonly used in deep learning: it uses less memory and compute than float32 while offering higher precision than int8, but it requires hardware support.
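As a quick illustration of the size/precision trade-off described above, Python's standard `struct` module can pack IEEE 754 half-precision (`"e"`) and single-precision (`"f"`) values:

```python
import struct

# Element sizes: float16 uses half the memory of float32.
print(struct.calcsize("e"))  # 2 bytes (float16, IEEE 754 half precision)
print(struct.calcsize("f"))  # 4 bytes (float32)

# Round-tripping 0.1 shows the precision gap: float16 keeps roughly
# 3 significant decimal digits, float32 about 7.
f16 = struct.unpack("e", struct.pack("e", 0.1))[0]
f32 = struct.unpack("f", struct.pack("f", 0.1))[0]
print(abs(f32 - 0.1) < abs(f16 - 0.1))  # True: float32 is closer to 0.1
```

The same trade-off applies inside the model: half-precision weights halve GPU memory use at the cost of precision, which is why float16 only helps on hardware that supports it natively.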

Since the error you hit concerns int8_bfloat16 not being supported by your device, your hardware may handle certain half-precision or mixed-precision types poorly. Even if the software offered a float16 option, you could run into a similar compatibility problem.

Recommendation:

To ensure stable operation, follow the earlier advice and set the "CUDA数据类型" (CUDA data type) to:

  • int8 (recommended): if your priority is speed and low resource usage, and you can tolerate somewhat lower recognition accuracy.
  • float32: if your priority is the highest recognition accuracy and you don't mind longer processing times.

Both types are compatible with a much wider range of hardware and avoid the ValueError caused by int8_bfloat16.

If you'd like to learn more about how these data types behave during model inference, consult the NVIDIA CUDA documentation or the documentation of the relevant deep-learning framework.
