The 0528 build works, but this one doesn't. I'm on a 16GB P5200, and it errors out as follows:
INFO:logger:warmup model...
Traceback (most recent call last):
File "K:\LiveTalking_251127\app.py", line 375, in <module>
warm_up(opt.batch_size,model,256)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\utils\_contextlib.py", line 120, in decorate_context
return func(*args, **kwargs)
File "K:\LiveTalking_251127\lipreal.py", line 95, in warm_up
model(mel_batch, img_batch)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
File "K:\LiveTalking_251127\wav2lip\models\wav2lip_v2.py", line 132, in forward
audio_embedding = self.audio_encoder(audio_sequences) # [bz*5, 1, 80, 16]->[bz*5, 512, 1, 1]
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
input = module(input)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
File "K:\LiveTalking_251127\wav2lip\models\conv.py", line 16, in forward
out = self.conv_block(x)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
input = module(input)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\module.py", line 1775, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\module.py", line 1786, in _call_impl
return forward_call(*args, **kwargs)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\conv.py", line 548, in forward
return self._conv_forward(input, self.weight, self.bias)
File "K:\LiveTalking_251127\env\lib\site-packages\torch\nn\modules\conv.py", line 543, in _conv_forward
return F.conv2d(
torch.AcceleratorError: CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice' in https://docs.nvidia.com/cuda/cud ... _CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Press any key to continue . . .
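In case it helps narrow this down: "no kernel image is available for execution on the device" usually means the installed PyTorch wheel was not compiled with kernels for the GPU's architecture, and the P5200 is a Pascal card (compute capability 6.1, i.e. sm_61), which newer official wheels may no longer include. A quick check, just a sketch to run inside the same K:\LiveTalking_251127\env, would be:

import torch

# Print the PyTorch version, the GPU's compute capability, and the list of
# CUDA architectures this wheel was built with. If the GPU's sm_XX is
# missing from that arch list, F.conv2d fails exactly like the log above.
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())

if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print("GPU:", torch.cuda.get_device_name(0))
    print("compute capability: sm_%d%d" % (major, minor))
    print("built-in arch list:", torch.cuda.get_arch_list())

If sm_61 does not show up in the arch list, that would also explain why the 0528 build still works: it presumably shipped with an older PyTorch that still carried Pascal kernels. In that case the likely fix is installing a PyTorch build that includes sm_61, rather than changing anything in LiveTalking itself.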