To create a public link, set `share=True` in `launch()`.
Unloaded DynamicSwap_LlamaModel as complete.
Unloaded CLIPTextModel as complete.
Unloaded SiglipVisionModel as complete.
Unloaded AutoencoderKLHunyuanVideo as complete.
Unloaded DynamicSwap_HunyuanVideoTransformer3DModelPacked as complete.
Loaded CLIPTextModel to cuda:0 as complete.
Traceback (most recent call last):
File "D:\ProgramFiles\OtherProgram\AI\FramePack_F1_250520\FramePack_250520\demo_gradio_f1.py", line 127, in worker
llama_vec, clip_l_pooler = encode_prompt_conds(prompt, text_encoder, text_encoder_2, tokenizer, tokenizer_2)
File "D:\ProgramFiles\OtherProgram\AI\FramePack_F1_250520\FramePack_250520\env\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "D:\ProgramFiles\OtherProgram\AI\FramePack_F1_250520\FramePack_250520\diffusers_helper\hunyuan.py", line 31, in encode_prompt_conds
llama_attention_length = int(llama_attention_mask.sum())
RuntimeError: CUDA error: no kernel image is available for execution on the device
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
The error output is shown above. Is this because my driver or PyTorch build doesn't support the card? The GPU is an RTX 5070 and the driver is up to date.
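
For context, here is a minimal diagnostic sketch (not from the original log; the assumption that an RTX 5070 is a Blackwell-class sm_120 card needing a PyTorch build compiled against CUDA 12.8 or newer is my understanding, not something the traceback states). It prints which CUDA architectures the installed PyTorch wheel actually ships kernels for; if the GPU's compute capability is missing from that list, the "no kernel image is available" error above would be expected even with an up-to-date driver:

```python
import torch

# Show the PyTorch build's CUDA toolkit version and the GPU architectures
# (sm_XX) that the installed wheel was compiled with kernels for.
print("torch version:", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("compiled arch list:", torch.cuda.get_arch_list())

if torch.cuda.is_available():
    # Report the detected GPU and its compute capability, e.g. (12, 0) for a
    # Blackwell card (assumption: RTX 5070 falls into this family).
    print("device:", torch.cuda.get_device_name(0))
    print("compute capability:", torch.cuda.get_device_capability(0))
```

If the reported compute capability (e.g. sm_120) does not appear in the compiled arch list, reinstalling PyTorch from a wheel built against a newer CUDA toolkit would likely be the fix, rather than updating the display driver.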