
IndexTTS throws an error, could someone please take a look?


Posted on 2025-5-28 23:37:52
The log is below. It had been working fine before, then it suddenly started erroring without any changes on my end.
wav shape: torch.Size([1, 205824]) min: >> start inference...
Traceback (most recent call last):
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\queueing.py", line 625, in process_events
    response = await route_utils.call_process_api(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\blocks.py", line 2137, in process_api
    result = await self.call_function(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\blocks.py", line 1663, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\anyio\_backends\_asyncio.py", line 2470, in run_sync_in_worker_thread
    return await future
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\anyio\_backends\_asyncio.py", line 967, in run
    result = context.run(func, *args)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\utils.py", line 890, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\webui.py", line 36, in gen_single
    output = tts.infer(prompt, text, output_path) # 普通推理
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\indextts\infer.py", line 457, in infer
    text_tokens = torch.tensor(text_tokens, dtype=torch.int32, device=self.device).unsqueeze(0)
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
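The error message itself suggests the first debugging step: setting `CUDA_LAUNCH_BLOCKING=1` makes kernel launches synchronous, so the traceback points at the kernel that actually faulted instead of a later, unrelated API call. A minimal sketch of relaunching the app that way (the `webui.py` entry point is taken from the traceback above; this illustrates the env-var mechanics, it is not a fix):

```python
import os
import subprocess
import sys

# Force synchronous CUDA kernel launches: the env var must be in the
# environment of the process BEFORE torch initializes CUDA.
env = dict(os.environ, CUDA_LAUNCH_BLOCKING="1")

# webui.py is the Gradio entry point seen in the traceback above.
# Uncomment to actually relaunch the app with blocking launches enabled:
# subprocess.run([sys.executable, "webui.py"], env=env)

print(env["CUDA_LAUNCH_BLOCKING"])
```

With blocking launches the run is slower, but the reported stack frame is the one that really triggered the illegal memory access.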


OP | Posted on 2025-5-29 00:03:28
Thinking back, this may be related to a recent NVIDIA driver update. I rolled the driver back to the previous version, but it still errors as follows:
>> start inference...
wav shape: torch.Size([1, 212992]) min: tensor(-10072., device='cuda:0', dtype=torch.float16) max: tensor(9928., device='cuda:0', dtype=torch.float16)
wav shape: torch.Size([1, 196608]) min: tensor(-8116., device='cuda:0', dtype=torch.float16) max: tensor(6700., device='cuda:0', dtype=torch.float16)
wav shape: torch.Size([1, 163840]) min: tensor(-10248., device='cuda:0', dtype=torch.float16) max: tensor(10384., device='cuda:0', dtype=torch.float16)
wav shape: torch.Size([1, 161792]) min: tensor(-4996., device='cuda:0', dtype=torch.float16) max: tensor(5824., device='cuda:0', dtype=torch.float16)
wav shape: torch.Size([1, 161792]) min: tensor(-8208., device='cuda:0', dtype=torch.float16) max: tensor(7012., device='cuda:0', dtype=torch.float16)
wav shape: torch.Size([1, 236544]) min: tensor(-4692., device='cuda:0', dtype=torch.float16) max: tensor(5668., device='cuda:0', dtype=torch.float16)
Traceback (most recent call last):
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\queueing.py", line 625, in process_events
    response = await route_utils.call_process_api(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\blocks.py", line 2137, in process_api
    result = await self.call_function(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\blocks.py", line 1663, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\anyio\_backends\_asyncio.py", line 2470, in run_sync_in_worker_thread
    return await future
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\anyio\_backends\_asyncio.py", line 967, in run
    result = context.run(func, *args)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\utils.py", line 890, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\webui.py", line 36, in gen_single
    output = tts.infer(prompt, text, output_path) # 普通推理
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\indextts\infer.py", line 515, in infer
    wav, _ = self.bigvgan(latent, auto_conditioning.transpose(1, 2))
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\indextts\BigVGAN\models.py", line 240, in forward
    xs = self.resblocks[i * self.num_kernels + j](x)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\indextts\BigVGAN\models.py", line 70, in forward
    xt = a2(xt)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\indextts\BigVGAN\alias_free_torch\act.py", line 25, in forward
    x = self.upsample(x)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\indextts\BigVGAN\alias_free_torch\resample.py", line 29, in forward
    x = self.ratio * F.conv_transpose1d(
RuntimeError: cuDNN error: CUDNN_STATUS_EXECUTION_FAILED
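Both failures land inside fp16 convolution kernels in BigVGAN's forward pass, so one way to check whether the driver/cuDNN path is broken outside of IndexTTS is to run the same kind of fp16 `conv1d` in isolation. A minimal smoke test, assuming a CUDA-enabled PyTorch install (the tensor shapes here are made up for illustration):

```python
import torch
import torch.nn.functional as F

def conv1d_smoke_test():
    """Run a small fp16 conv1d on the GPU and force any async error to surface."""
    if not torch.cuda.is_available():
        return "no CUDA device visible"
    x = torch.randn(1, 32, 1024, device="cuda", dtype=torch.float16)
    w = torch.randn(64, 32, 3, device="cuda", dtype=torch.float16)
    y = F.conv1d(x, w, padding=1)
    torch.cuda.synchronize()  # kernel errors are async; synchronize to catch them here
    return tuple(y.shape)

print(conv1d_smoke_test())
```

If this tiny test also crashes with a cuDNN or illegal-memory-access error, the problem is in the driver/CUDA/cuDNN stack rather than in IndexTTS itself.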

OP | Posted on 2025-5-29 02:20:18
After more than three hours of troubleshooting, here is where things stand:
1. It worked fine half a month ago and only broke today; the only change I can recall in between is an NVIDIA driver update.
2. An AI assistant analyzed the launcher output and the traceback for me, and everything points to the driver.
3. I uninstalled the driver with DDU in safe mode, then reinstalled what I believe was the pre-update driver version.
4. I re-downloaded the whole bundled package.
It still fails as follows:
Startup log:
>> GPT weights restored from: checkpoints\gpt.pth
>> DeepSpeed failed to load, falling back to standard inference: No module named 'deepspeed'
>> Failed to load custom CUDA kernel for BigVGAN. Falling back to torch.
Removing weight norm...
>> bigvgan weights restored from: checkpoints\bigvgan_generator.pth
2025-05-29 02:07:42,250 WETEXT INFO found existing fst: C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\zh_tn_tagger.fst
2025-05-29 02:07:42,251 WETEXT INFO                     C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\zh_tn_verbalizer.fst
2025-05-29 02:07:42,251 WETEXT INFO skip building fst for zh_normalizer ...
2025-05-29 02:07:42,716 WETEXT INFO found existing fst: C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\en_tn_tagger.fst
2025-05-29 02:07:42,717 WETEXT INFO                     C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\en_tn_verbalizer.fst
2025-05-29 02:07:42,717 WETEXT INFO skip building fst for en_normalizer ...
>> TextNormalizer loaded
2025-05-29 02:07:43,659 WETEXT INFO found existing fst: C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\zh_tn_tagger.fst
2025-05-29 02:07:43,659 WETEXT INFO found existing fst: C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\zh_tn_tagger.fst
2025-05-29 02:07:43,659 WETEXT INFO                     C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\zh_tn_verbalizer.fst
2025-05-29 02:07:43,659 WETEXT INFO                     C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\zh_tn_verbalizer.fst
2025-05-29 02:07:43,659 WETEXT INFO skip building fst for zh_normalizer ...
2025-05-29 02:07:43,659 WETEXT INFO skip building fst for zh_normalizer ...
2025-05-29 02:07:44,225 WETEXT INFO found existing fst: C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\en_tn_tagger.fst
2025-05-29 02:07:44,225 WETEXT INFO found existing fst: C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\en_tn_tagger.fst
2025-05-29 02:07:44,226 WETEXT INFO                     C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\en_tn_verbalizer.fst
2025-05-29 02:07:44,226 WETEXT INFO                     C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\tn\en_tn_verbalizer.fst
2025-05-29 02:07:44,226 WETEXT INFO skip building fst for en_normalizer ...
2025-05-29 02:07:44,226 WETEXT INFO skip building fst for en_normalizer ...
>> bpe model loaded from: checkpoints\bpe.model
* Running on local URL:  http://127.0.0.1:7860

Inference log:
To create a public link, set `share=True` in `launch()`.
>> start inference...
wav shape: torch.Size([1, 195584]) min: tensor(-10504., device='cuda:0', dtype=torch.float16) max: tensor(10872., device='cuda:0', dtype=torch.float16)
wav shape: torch.Size([1, 223232]) min: tensor(-8536., device='cuda:0', dtype=torch.float16) max: tensor(9304., device='cuda:0', dtype=torch.float16)
wav shape: torch.Size([1, 173056]) min: tensor(-10312., device='cuda:0', dtype=torch.float16) max: tensor(12328., device='cuda:0', dtype=torch.float16)
Traceback (most recent call last):
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\queueing.py", line 625, in process_events
    response = await route_utils.call_process_api(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\route_utils.py", line 322, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\blocks.py", line 2137, in process_api
    result = await self.call_function(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\blocks.py", line 1663, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\anyio\_backends\_asyncio.py", line 2470, in run_sync_in_worker_thread
    return await future
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\anyio\_backends\_asyncio.py", line 967, in run
    result = context.run(func, *args)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\gradio\utils.py", line 890, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\webui.py", line 36, in gen_single
    output = tts.infer(prompt, text, output_path) # 普通推理
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\indextts\infer.py", line 515, in infer
    wav, _ = self.bigvgan(latent, auto_conditioning.transpose(1, 2))
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\indextts\BigVGAN\models.py", line 242, in forward
    xs += self.resblocks[i * self.num_kernels + j](x)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\indextts\BigVGAN\models.py", line 71, in forward
    xt = c2(xt)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\conv.py", line 375, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\cutyf\Desktop\IndexTTS_1.5_250517\env\lib\site-packages\torch\nn\modules\conv.py", line 370, in _conv_forward
    return F.conv1d(
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Could someone please help me work out how to fix this? I'm a hobbyist with no technical background and I'm out of ideas.
Desktop PC, Windows 11, build 26100.4188.
The GPU is a Founders Edition RTX 4090. The current driver is 576.28 from April 30, which should be the version I was on before updating to the latest driver (576.52).
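When filing an issue like this (with the IndexTTS project or NVIDIA), it helps to capture the whole software stack in one report. A short sketch, assuming a standard PyTorch install:

```python
import platform
import torch

# One-shot environment report: handy to attach to driver-related bug reports.
print("OS:", platform.platform())
print("Python:", platform.python_version())
print("torch:", torch.__version__)
print("CUDA runtime:", torch.version.cuda)          # None on CPU-only builds
print("cuDNN:", torch.backends.cudnn.version())     # None on CPU-only builds
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("Compute capability:", torch.cuda.get_device_capability(0))
```

Comparing the CUDA runtime version PyTorch was built against with the installed driver version is often the quickest way to spot a driver/toolkit mismatch after a driver update.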
