"RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float" when trying to upscale using LDSR #2897

Open · thomaslee101 opened this issue Jul 30, 2024 · 1 comment


@thomaslee101

Hi,

I encountered this error when trying to upscale with LDSR: "RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float". I am running the latest Google Colab notebook with a fresh setup (I renamed my previous "SD" folder to "SD_older"). The error persists even with different models (SD 1.5 and SDXL). The error log is below; hope someone can help. Thanks.

Startup time: 123.7s (launcher: 1.2s, import torch: 29.8s, import gradio: 2.4s, setup paths: 24.9s, import ldm: 0.7s, initialize shared: 4.5s, other imports: 31.9s, setup codeformer: 0.5s, setup gfpgan: 0.3s, list SD models: 1.6s, load scripts: 17.8s, load upscalers: 1.3s, reload hypernetworks: 0.1s, initialize extra networks: 1.7s, create ui: 3.2s, gradio launch: 0.5s, add APIs: 1.3s).
6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /content/gdrive/MyDrive/sd/stable-diffusion-webui/configs/v1-inference.yaml
Applying attention optimization: xformers... done.
Model loaded in 36.7s (calculate hash: 21.9s, load weights from disk: 1.7s, create model: 4.7s, apply weights to model: 6.5s, hijack: 0.3s, load textual inversion embeddings: 0.6s, calculate empty prompt: 0.8s).
Loading model from /content/gdrive/MyDrive/sd/stable-diffusion-webui/models/LDSR/model.ckpt
LatentDiffusionV1: Running in eps-prediction mode
Keeping EMAs of 308.
Applying attention optimization: xformers... done.
Downsampling from [494, 1000] to [618, 1250]
Plotting: Switched to EMA weights
Sampling with eta = 1.0; steps: 100
Data shape for DDIM sampling is (1, 3, 1280, 640), eta 1.0
Running DDIM Sampling with 100 timesteps
DDIM Sampler 0% 0/100 [00:00<?, ?it/s]
Plotting: Restored training weights
*** Error completing request
*** Arguments: ('task(sh1w0zc1r6yize2)', 0.0, <PIL.Image.Image image mode=RGBA size=494x1000 at 0x7A864DFF7580>, None, '', '', True, True, 0.0, 5, 0.0, 512, 512, True, 'LDSR', 'None', 0, False, 1, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {}
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/postprocessing.py", line 133, in run_postprocessing_webui
return run_postprocessing(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/postprocessing.py", line 73, in run_postprocessing
scripts.scripts_postproc.run(initial_pp, args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts_postprocessing.py", line 198, in run
script.process(single_image, **process_args)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/scripts/postprocessing_upscale.py", line 152, in process
upscaled_image = self.upscale(pp.image, pp.info, upscaler1, upscale_mode, upscale_by, max_side_length, upscale_to_width, upscale_to_height, upscale_crop)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/scripts/postprocessing_upscale.py", line 107, in upscale
image = upscaler.scaler.upscale(image, upscale_by, upscaler.data_path)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/upscaler.py", line 68, in upscale
img = self.do_upscale(img, selected_model)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/LDSR/scripts/ldsr_model.py", line 58, in do_upscale
return ldsr.super_resolution(img, ddim_steps, self.scale)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/LDSR/ldsr_model_arch.py", line 137, in super_resolution
logs = self.run(model["model"], im_padded, diffusion_steps, eta)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/LDSR/ldsr_model_arch.py", line 96, in run
logs = make_convolutional_sample(example, model,
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/LDSR/ldsr_model_arch.py", line 228, in make_convolutional_sample
sample, intermediates = convsample_ddim(model, c, steps=custom_steps, shape=z.shape,
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/LDSR/ldsr_model_arch.py", line 184, in convsample_ddim
samples, intermediates = ddim.sample(steps, batch_size=bs, shape=shape, conditioning=cond, callback=callback,
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddim.py", line 103, in sample
samples, intermediates = self.ddim_sampling(conditioning, size,
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddim.py", line 163, in ddim_sampling
outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/models/diffusion/ddim.py", line 188, in p_sample_ddim
model_output = self.model.apply_model(x, t, c)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/LDSR/sd_hijack_ddpm_v1.py", line 964, in apply_model
output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/LDSR/sd_hijack_ddpm_v1.py", line 964, in
output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/LDSR/sd_hijack_ddpm_v1.py", line 1400, in forward
out = self.diffusion_model(xc, t)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "/content/gdrive/MyDrive/sd/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 768, in forward
emb = self.time_embed(t_emb)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/container.py", line 217, in forward
input = module(input)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions-builtin/Lora/networks.py", line 584, in network_Linear_forward
return originals.Linear_forward(self, input)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/linear.py", line 116, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float
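For context, the failure at the bottom of the trace is a plain PyTorch dtype check: `F.linear` cannot multiply a half-precision input (mat1) by full-precision weights (mat2). A minimal sketch that reproduces the same error outside the webui (the layer sizes here are illustrative, not the actual LDSR dimensions):

```python
import torch
import torch.nn.functional as F

linear = torch.nn.Linear(320, 1280)            # weights default to float32
t_emb = torch.randn(1, 320, dtype=torch.half)  # half-precision input

# Raises: RuntimeError: mat1 and mat2 must have the same dtype,
# but got Half and Float
out = F.linear(t_emb, linear.weight, linear.bias)

# Aligning dtypes on either side avoids the error:
out = F.linear(t_emb.float(), linear.weight, linear.bias)  # upcast input
# or convert the layer instead: linear.half()
```

This suggests the LDSR model's timestep-embedding layers stayed in float32 while the input tensor arrived as float16 (or the reverse), i.e. the half-precision conversion was applied to one side but not the other.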


@thomaslee101
Author

As an interim solution, I added the line below just before starting the server. It seems to work for upscaling with LDSR, but I'm not sure whether it affects the txt2img and img2img functions:
os.environ["COMMANDLINE_ARGS"] = '--precision full --no-half'
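For reference, `--no-half` stops the webui from converting model weights to float16, and `--precision full` disables autocast, so everything runs in float32; txt2img and img2img should still work, at the cost of extra VRAM and slower sampling. A narrower alternative (an untested sketch, not the extension's actual code) would be to align dtypes only inside the LDSR model, for example with a forward pre-hook on its Linear layers:

```python
import torch

def cast_input_to_weight_dtype(module, args):
    # Forward pre-hook: cast the incoming tensor to the layer's weight
    # dtype so F.linear never sees mixed Half/Float operands.
    (x,) = args
    return (x.to(module.weight.dtype),)

# Hypothetical usage: `ldsr_unet` is a placeholder for the loaded LDSR
# diffusion model, not an actual webui object.
# for m in ldsr_unet.modules():
#     if isinstance(m, torch.nn.Linear):
#         m.register_forward_pre_hook(cast_input_to_weight_dtype)
```

This would leave the main SD checkpoint in half precision untouched.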

[Screenshot attached: Screenshot 2024-07-30 233728]
