
CUDA out of memory. Tried to allocate 14.00 MiB. GPU 0 has a total capacity of 4.00 GiB of which 0 bytes is free. Of the allocated memory 3.41 GiB is allocated by PyTorch, and 65.23 MiB is reserved by PyTorch but unallocated #72

Open
GOATGAMER07 opened this issue Jan 25, 2024 · 14 comments


@GOATGAMER07

The config attributes {'controlnet_list': ['controlnet', 'RPMultiControlNetModel'], 'requires_aesthetics_score': False} were passed to StableDiffusionXLInstantIDPipeline, but are not expected and will be ignored. Please verify your model_index.json configuration file.
Keyword arguments {'controlnet_list': ['controlnet', 'RPMultiControlNetModel'], 'requires_aesthetics_score': False, 'safety_checker': None} are not expected by StableDiffusionXLInstantIDPipeline and will be ignored.
Loading pipeline components...: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:03<00:00, 2.03it/s]
LCM
The config attributes {'skip_prk_steps': True} were passed to LCMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
default: num_inference_steps=2, guidance_scale=2
Traceback (most recent call last):
File "C:\Users\gibso\pinokio\api\instantid.git\app\app.py", line 91, in
pipe.cuda()
File "C:\Users\gibso\pinokio\api\instantid.git\app\pipeline_stable_diffusion_xl_instantid.py", line 489, in cuda
self.to('cuda', dtype)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 869, in to
module.to(device, dtype)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
return self._apply(convert)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
module._apply(fn)
[Previous line repeated 7 more times]
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
param_applied = fn(param)
File "C:\Users\gibso\pinokio\api\instantid.git\app\env\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB. GPU 0 has a total capacity of 4.00 GiB of which 0 bytes is free. Of the allocated memory 3.41 GiB is allocated by PyTorch, and 65.23 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

(env) C:\Users\gibso\pinokio\api\instantid.git\app>


I have enough resources but am still getting this error.


@haofanwang
Member

What's your VRAM? 16GB may not be enough; you have to do some optimizations.

@GOATGAMER07
Author

> What's your VRAM? 16GB may not be enough; you have to do some optimizations.

Understood, I will go back to Automatic1111 itself, which allows rendering 350x350 photos. ☹️🙁😖😞😟

@tw9mini

tw9mini commented Jan 25, 2024

I'm getting a similar error with a 12 GB 3060. I tried setting max_split_size_mb to different values, to no avail.

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 11.76 GiB total capacity; 9.33 GiB already allocated; 37.69 MiB free; 9.62 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
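For anyone hitting this: max_split_size_mb is read from the PYTORCH_CUDA_ALLOC_CONF environment variable, which must be set before CUDA is first initialized. A minimal sketch of one way to do that (the 128 MB value is purely illustrative, not a recommendation):

```python
import os

# Must run before torch initializes CUDA, e.g. at the very top of app.py.
# Caps the size of cached allocator blocks that may be split, which can
# reduce fragmentation when reserved memory far exceeds allocated memory.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # noqa: E402 -- import torch only after the variable is set
```

Equivalently, on Windows, run `set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128` in the console before launching the script.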

@astragartist

> I'm getting a similar error with a 12 GB 3060. I tried setting max_split_size_mb to different values, to no avail.
>
> torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 11.76 GiB total capacity; 9.33 GiB already allocated; 37.69 MiB free; 9.62 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Same error here!

@hortom

hortom commented Jan 26, 2024

Is it possible to use pruned models for controlnet and IP-Adapter to reduce memory usage?

@boehm-e

boehm-e commented Jan 27, 2024

I am also very interested to know whether it is possible to reduce the amount of VRAM needed, e.g. by pruning or quantizing.

@dimtoneff

dimtoneff commented Jan 28, 2024

I was able to run it on a 12GB 3060. A single generation runs for approx. 1 min with 35-40 steps in ComfyUI, with the insightface model running on the CPU.

pipe.enable_xformers_memory_efficient_attention() on the StableDiffusionXLInstantIDPipeline

And before the vae decoding:
self.vae.enable_slicing()
self.vae.enable_tiling()
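Putting those together, here is a minimal sketch of where the calls would go; the pipeline construction (checkpoint names, controlnet wiring) is an assumption for illustration, not the exact app.py code:

```python
import torch
from diffusers.models import ControlNetModel
from pipeline_stable_diffusion_xl_instantid import StableDiffusionXLInstantIDPipeline

# Illustrative checkpoints; the real setup in app.py may differ.
controlnet = ControlNetModel.from_pretrained(
    "InstantX/InstantID", subfolder="ControlNetModel", torch_dtype=torch.float16
)
pipe = StableDiffusionXLInstantIDPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,  # fp16 halves weight memory vs fp32
)

# Memory-efficient attention for the UNet/ControlNet forward passes
# (requires the xformers package to be installed).
pipe.enable_xformers_memory_efficient_attention()

# Decode latents in slices and tiles so the VAE never materializes the
# full-resolution activations in a single allocation.
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

pipe.to("cuda")
```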

@apukale

apukale commented Jan 28, 2024

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.53 GiB. GPU 0 has a total capacity of 6.00 GiB of which 0 bytes is free. Of the allocated memory 12.03 GiB is allocated by PyTorch, and 355.03 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Same error; I have a 3060 with 6GB VRAM.

Will this never work on 6GB VRAM?

@hortom

hortom commented Jan 28, 2024

Is ControlNet mandatory, or is it only for the optional pose image? If it is not mandatory, could it save VRAM if the ControlNet model is not loaded and applied?

@camoody1

The VRAM requirements on this seem to be about 14GB. I have 12GB on my 3060, and whether or not my workflow completes is totally hit or miss. Sometimes it runs to completion (very slowly, as it has been offloaded to system RAM), and other times it blows up with an out-of-memory error after the next-to-last step hits 100% but before the final image is created. This really needs to be optimized better, if at all possible.
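Related to the implicit offload described above: diffusers pipelines also expose explicit CPU offloading, which keeps only the component currently executing on the GPU. Nobody in this thread has confirmed it for InstantID, so treat it as a hedged side note rather than a known fix (assumes `pipe` from the earlier sketch and the accelerate package installed):

```python
# Call this instead of pipe.to("cuda"): each component (text encoders,
# UNet, ControlNet, VAE) is moved to the GPU only while it runs, trading
# speed for a much lower peak VRAM footprint.
pipe.enable_model_cpu_offload()
```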

@FurkanGozukara

> I was able to run it on a 12GB 3060. A single generation runs for approx. 1 min with 35-40 steps in ComfyUI, with the insightface model running on the CPU.
>
> pipe.enable_xformers_memory_efficient_attention() on the StableDiffusionXLInstantIDPipeline
>
> And before the vae decoding: self.vae.enable_slicing() self.vae.enable_tiling()

I enabled xformers but it made no difference.

And I don't see where to enable the VAE options.

#104

@otukj52

otukj52 commented Jan 31, 2024

return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB. GPU 0 has a total capacity of 4.00 GiB of which 0 bytes is free. Of the allocated memory 6.97 GiB is allocated by PyTorch, and 184.73 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)


I don't know what to do. I tried many things; even in ComfyUI I am getting the same issue.
I have a 1050 with 4GB VRAM.
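The message above points at the newer expandable_segments allocator mode. It goes in the same PYTORCH_CUDA_ALLOC_CONF environment variable as max_split_size_mb, again before CUDA initializes; a sketch:

```python
import os

# Must be set before torch touches CUDA (e.g. at the top of app.py).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # noqa: E402
```

Bear in mind that fragmentation tuning cannot help when the model simply does not fit in 4 GB, as the next reply points out.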

@FurkanGozukara

> return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
> torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 14.00 MiB. GPU 0 has a total capacity of 4.00 GiB of which 0 bytes is free. Of the allocated memory 6.97 GiB is allocated by PyTorch, and 184.73 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
>
> I don't know what to do. I tried many things; even in ComfyUI I am getting the same issue. I have a 1050 with 4GB VRAM.

Forget about running it with a 4 GB GPU; you may be able to run it on the CPU.

Hopefully I will make a Kaggle notebook and an advanced UI. Follow me on YouTube:

https://www.youtube.com/SECourses
