musetalk OOM after startup #192

Closed
licon opened this issue Aug 7, 2024 · 3 comments
Labels
good first issue Good for newcomers

Comments

licon commented Aug 7, 2024

The webrtcapi page loads normally and the digital-human video appears after clicking start, but it freezes after I send text. The server reports the following error:
RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 23.65 GiB total capacity; 17.77 GiB already allocated; 45.19 MiB free; 18.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Environment:
OS: Ubuntu 22
GPU: RTX 4090
CUDA driver: Driver Version: 535.183.01, CUDA Version: 12.2

How should the program be configured to avoid this?
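For reference, the allocator setting mentioned in the traceback (PYTORCH_CUDA_ALLOC_CONF with max_split_size_mb) can be applied through an environment variable before PyTorch makes its first CUDA allocation; a minimal sketch, where the 128 MiB value is only an example and may not be the right choice for this workload:

```python
import os

# Must be set before the first CUDA allocation, e.g. at the very top of app.py.
# 128 (MiB) is only an example value; tune it for your workload.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported after the env var so the CUDA caching allocator sees it
```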

licon changed the title from "musetalk 启动后 oop" to "musetalk 启动后 oom" Aug 7, 2024
@Ivyforever01

A 4090 can't run it; you need a V100.

@leos-code

@licon

  1. Add the @torch.no_grad() decorator to the inference method in musereal.py (see the sketch below).
  2. When starting app.py, set batch_size to 8 or smaller; with 8 it runs normally for me.
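A minimal sketch of what step 1 could look like; the method signature below is a placeholder, only the decorator matters:

```python
import torch

class MuseReal:
    # torch.no_grad() disables autograd tracking inside inference(), so
    # intermediate activations are released immediately instead of being
    # kept for a backward pass that never happens during inference.
    @torch.no_grad()
    def inference(self, *args, **kwargs):  # placeholder signature
        ...  # existing inference body, unchanged
```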

licon commented Aug 12, 2024

@leos-code Your suggestions solved the problem. Thank you very much!

licon closed this as completed Aug 12, 2024
lipku pinned this issue Aug 12, 2024
lipku added the good first issue label Aug 12, 2024