
ERROR: Could not detect model type of: /Volumes/T7/ComfyUI/models/checkpoints/stable_audio_open_1.0.safetensors #4952

Open · Molnfront opened this issue Sep 17, 2024 · 3 comments
Labels: User Support (a user needs help with something, probably not a bug)
### Your question

I am trying to run the audio example.

### Logs

# ComfyUI Error Report
## Error Details
- **Node Type:** CheckpointLoaderSimple
- **Exception Type:** RuntimeError
- **Exception Message:** ERROR: Could not detect model type of: /Volumes/T7/ComfyUI/models/checkpoints/stable_audio_open_1.0.safetensors
## Stack Trace

  File "/Volumes/T7/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Volumes/T7/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Volumes/T7/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/Volumes/T7/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Volumes/T7/ComfyUI/nodes.py", line 540, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Volumes/T7/ComfyUI/comfy/sd.py", line 527, in load_checkpoint_guess_config
    raise RuntimeError("ERROR: Could not detect model type of: {}".format(ckpt_path))

## System Information

  • ComfyUI Version: v0.2.2-42-g5e68a4c
  • Arguments: main.py
  • OS: posix
  • Python Version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:54:21) [Clang 16.0.6 ]
  • Embedded Python: false
  • PyTorch Version: 2.6.0.dev20240916

## Devices

  • Name: mps
    • Type: mps
    • VRAM Total: 8589934592
    • VRAM Free: 1662222336
    • Torch VRAM Total: 8589934592
    • Torch VRAM Free: 1662222336

## Logs

2024-09-17 08:19:48,482 - root - INFO - Total VRAM 8192 MB, total RAM 8192 MB
2024-09-17 08:19:48,482 - root - INFO - pytorch version: 2.6.0.dev20240916
2024-09-17 08:19:48,482 - root - INFO - Set vram state to: SHARED
2024-09-17 08:19:48,482 - root - INFO - Device: mps
2024-09-17 08:19:49,235 - root - INFO - Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
2024-09-17 08:19:50,599 - root - INFO - [Prompt Server] web root: /Volumes/T7/ComfyUI/web
2024-09-17 08:19:52,772 - root - INFO - 
Import times for custom nodes:
2024-09-17 08:19:52,772 - root - INFO -    0.0 seconds: /Volumes/T7/ComfyUI/custom_nodes/websocket_image_save.py
2024-09-17 08:19:52,772 - root - INFO - 
2024-09-17 08:19:52,776 - root - INFO - Starting server

2024-09-17 08:19:52,776 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-09-17 08:22:39,692 - root - INFO - got prompt
2024-09-17 08:22:39,870 - root - ERROR - !!! Exception during processing !!! ERROR: Could not detect model type of: /Volumes/T7/ComfyUI/models/checkpoints/stable_audio_open_1.0.safetensors
2024-09-17 08:22:39,903 - root - ERROR - Traceback (most recent call last):
  File "/Volumes/T7/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/T7/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/T7/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Volumes/T7/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/T7/ComfyUI/nodes.py", line 540, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Volumes/T7/ComfyUI/comfy/sd.py", line 527, in load_checkpoint_guess_config
    raise RuntimeError("ERROR: Could not detect model type of: {}".format(ckpt_path))
RuntimeError: ERROR: Could not detect model type of: /Volumes/T7/ComfyUI/models/checkpoints/stable_audio_open_1.0.safetensors

2024-09-17 08:22:39,904 - root - INFO - Prompt executed in 0.19 seconds

## Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":17,"last_link_id":26,"nodes":[{"id":4,"type":"CheckpointLoaderSimple","pos":{"0":0,"1":240},"size":{"0":336,"1":98},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[18],"slot_index":0},{"name":"CLIP","type":"CLIP","links":[],"slot_index":1},{"name":"VAE","type":"VAE","links":[14],"slot_index":2}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["stable_audio_open_1.0.safetensors"]},{"id":10,"type":"CLIPLoader","pos":{"0":0,"1":96},"size":{"0":335.6534118652344,"1":82},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[25,26],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"CLIPLoader"},"widgets_values":["t5_base.safetensors","stable_audio"]},{"id":12,"type":"VAEDecodeAudio","pos":{"0":1200,"1":96},"size":{"0":210,"1":46},"flags":{},"order":6,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":13},{"name":"vae","type":"VAE","link":14,"slot_index":1}],"outputs":[{"name":"AUDIO","type":"AUDIO","links":[15],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAEDecodeAudio"}},{"id":11,"type":"EmptyLatentAudio","pos":{"0":576,"1":480},"size":{"0":240,"1":58},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[12],"shape":3}],"properties":{"Node name for S&R":"EmptyLatentAudio"},"widgets_values":[47.6]},{"id":7,"type":"CLIPTextEncode","pos":{"0":384,"1":288},"size":{"0":432,"1":144},"flags":{},"order":4,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":26}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":[""],"color":"#322","bgcolor":"#533"},{"id":13,"type":"SaveAudio","pos":{"0":1440,"1":96},"size":{"0":355.22216796875,"1":100},"flags":{},"order":7,"mode":0,"inputs":[{"name":"audio","type":"AUDIO","link":15}],"outputs":[],"properties":{"Node name for S&R":"SaveAudio"},"widgets_values":["audio/ComfyUI",null]},{"id":3,"type":"KSampler","pos":{"0":864,"1":96},"size":{"0":315,"1":262},"flags":{},"order":5,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":18},{"name":"positive","type":"CONDITIONING","link":4},{"name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","type":"LATENT","link":12,"slot_index":3}],"outputs":[{"name":"LATENT","type":"LATENT","links":[13],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[556430949064775,"randomize",50,4.98,"dpmpp_3m_sde_gpu","exponential",1]},{"id":6,"type":"CLIPTextEncode","pos":{"0":384,"1":96},"size":{"0":432,"1":144},"flags":{},"order":3,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":25}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[4],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["heaven church electronic dance music"],"color":"#232","bgcolor":"#353"}],"links":[[4,6,0,3,1,"CONDITIONING"],[6,7,0,3,2,"CONDITIONING"],[12,11,0,3,3,"LATENT"],[13,3,0,12,0,"LATENT"],[14,4,2,12,1,"VAE"],[15,12,0,13,0,"AUDIO"],[18,4,0,3,0,"MODEL"],[25,10,0,6,0,"CLIP"],[26,10,0,7,0,"CLIP"]],"groups":[],"config":{},"extra":{"ds":{"scale":1.4641000000000006,"offset":[413.1237928787707,304.6270199151615]}},"version":0.4}

## Additional Context

(Please add any additional context or steps to reproduce the error here)



### Other

The model stable_audio_open_1.0.safetensors is in the folder...
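
One quick sanity check worth running here (a minimal sketch, not ComfyUI code; it assumes the `safetensors` Python package is installed and reuses the checkpoint path from the report above): read the file's header to confirm the download is intact and see which tensor names it actually contains. A truncated or HTML-error-page download typically fails to parse at all.

```python
# Sketch: inspect a .safetensors header without loading the weights.
# Assumes `pip install safetensors`; the path is the one from the error report.
from safetensors import safe_open

path = "/Volumes/T7/ComfyUI/models/checkpoints/stable_audio_open_1.0.safetensors"
with safe_open(path, framework="pt") as f:
    keys = list(f.keys())
    print(f"{len(keys)} tensors; first few names: {keys[:5]}")
    print("metadata:", f.metadata())
```

If this raises an error or prints very few tensors, the file is likely corrupt and should be re-downloaded; if it parses cleanly, the more likely problem is that this particular .safetensors is not in a layout the checkpoint loader recognizes.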
Molnfront added the User Support label Sep 17, 2024

Molnfront (Author) commented:
Started ComfyUI with python main.py --disable-all-custom-nodes.

Molnfront (Author) commented:
Tried running it directly from my internal SSD instead of the external SSD; same problem:

# ComfyUI Error Report

## Error Details

  • Node Type: CheckpointLoaderSimple
  • Exception Type: RuntimeError
  • Exception Message: ERROR: Could not detect model type of: /Users/moset/ComfyUI/models/checkpoints/stable_audio_open_1.0.safetensors

## Stack Trace

  File "/Users/moset/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Users/moset/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Users/moset/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "/Users/moset/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Users/moset/ComfyUI/nodes.py", line 540, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "/Users/moset/ComfyUI/comfy/sd.py", line 527, in load_checkpoint_guess_config
    raise RuntimeError("ERROR: Could not detect model type of: {}".format(ckpt_path))

## System Information

  • ComfyUI Version: v0.2.2-42-g5e68a4c
  • Arguments: main.py
  • OS: posix
  • Python Version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:54:21) [Clang 16.0.6 ]
  • Embedded Python: false
  • PyTorch Version: 2.6.0.dev20240916

## Devices

  • Name: mps
    • Type: mps
    • VRAM Total: 8589934592
    • VRAM Free: 1995603968
    • Torch VRAM Total: 8589934592
    • Torch VRAM Free: 1995603968

## Logs

2024-09-17 17:58:18,767 - root - INFO - Total VRAM 8192 MB, total RAM 8192 MB
2024-09-17 17:58:18,767 - root - INFO - pytorch version: 2.6.0.dev20240916
2024-09-17 17:58:18,767 - root - INFO - Set vram state to: SHARED
2024-09-17 17:58:18,767 - root - INFO - Device: mps
2024-09-17 17:58:24,324 - root - INFO - Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
2024-09-17 17:58:36,644 - root - INFO - [Prompt Server] web root: /Users/moset/ComfyUI/web
2024-09-17 17:58:39,920 - root - INFO - 
Import times for custom nodes:
2024-09-17 17:58:39,920 - root - INFO -    0.0 seconds: /Users/moset/ComfyUI/custom_nodes/websocket_image_save.py
2024-09-17 17:58:39,920 - root - INFO - 
2024-09-17 17:58:39,927 - root - INFO - Starting server

2024-09-17 17:58:39,927 - root - INFO - To see the GUI go to: http://127.0.0.1:8188
2024-09-17 18:01:47,549 - root - INFO - got prompt
2024-09-17 18:01:47,651 - root - ERROR - !!! Exception during processing !!! ERROR: Could not detect model type of: /Users/moset/ComfyUI/models/checkpoints/stable_audio_open_1.0.safetensors
2024-09-17 18:01:47,675 - root - ERROR - Traceback (most recent call last):
  File "/Users/moset/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/moset/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/moset/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/moset/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/moset/ComfyUI/nodes.py", line 540, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/moset/ComfyUI/comfy/sd.py", line 527, in load_checkpoint_guess_config
    raise RuntimeError("ERROR: Could not detect model type of: {}".format(ckpt_path))
RuntimeError: ERROR: Could not detect model type of: /Users/moset/ComfyUI/models/checkpoints/stable_audio_open_1.0.safetensors

2024-09-17 18:01:47,676 - root - INFO - Prompt executed in 0.12 seconds

## Attached Workflow

Please make sure that workflow does not contain any sensitive information such as API keys or passwords.

{"last_node_id":17,"last_link_id":26,"nodes":[{"id":4,"type":"CheckpointLoaderSimple","pos":{"0":0,"1":240},"size":{"0":336,"1":98},"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[18],"slot_index":0},{"name":"CLIP","type":"CLIP","links":[],"slot_index":1},{"name":"VAE","type":"VAE","links":[14],"slot_index":2}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["stable_audio_open_1.0.safetensors"]},{"id":10,"type":"CLIPLoader","pos":{"0":0,"1":96},"size":{"0":335.6534118652344,"1":82},"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"CLIP","type":"CLIP","links":[25,26],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"CLIPLoader"},"widgets_values":["t5_base.safetensors","stable_audio"]},{"id":12,"type":"VAEDecodeAudio","pos":{"0":1200,"1":96},"size":{"0":210,"1":46},"flags":{},"order":6,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":13},{"name":"vae","type":"VAE","link":14,"slot_index":1}],"outputs":[{"name":"AUDIO","type":"AUDIO","links":[15],"slot_index":0,"shape":3}],"properties":{"Node name for S&R":"VAEDecodeAudio"}},{"id":11,"type":"EmptyLatentAudio","pos":{"0":576,"1":480},"size":{"0":240,"1":58},"flags":{},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[12],"shape":3}],"properties":{"Node name for S&R":"EmptyLatentAudio"},"widgets_values":[47.6]},{"id":7,"type":"CLIPTextEncode","pos":{"0":384,"1":288},"size":{"0":432,"1":144},"flags":{},"order":4,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":26}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":[""],"color":"#322","bgcolor":"#533"},{"id":13,"type":"SaveAudio","pos":{"0":1440,"1":96},"size":{"0":355.22216796875,"1":100},"flags":{},"order":7,"mode":0,"inputs":[{"name":"audio","type":"AUDIO","link":15}],"outputs":[],"properties":{"Node name for S&R":"SaveAudio"},"widgets_values":["audio/ComfyUI",null]},{"id":3,"type":"KSampler","pos":{"0":864,"1":96},"size":{"0":315,"1":262},"flags":{},"order":5,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":18},{"name":"positive","type":"CONDITIONING","link":4},{"name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","type":"LATENT","link":12,"slot_index":3}],"outputs":[{"name":"LATENT","type":"LATENT","links":[13],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[118679945870296,"randomize",50,4.98,"dpmpp_3m_sde_gpu","exponential",1]},{"id":6,"type":"CLIPTextEncode","pos":{"0":384,"1":96},"size":{"0":432,"1":144},"flags":{},"order":3,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":25}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[4],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["heaven church electronic dance music"],"color":"#232","bgcolor":"#353"}],"links":[[4,6,0,3,1,"CONDITIONING"],[6,7,0,3,2,"CONDITIONING"],[12,11,0,3,3,"LATENT"],[13,3,0,12,0,"LATENT"],[14,4,2,12,1,"VAE"],[15,12,0,13,0,"AUDIO"],[18,4,0,3,0,"MODEL"],[25,10,0,6,0,"CLIP"],[26,10,0,7,0,"CLIP"]],"groups":[],"config":{},"extra":{"ds":{"scale":1,"offset":[0,0]}},"version":0.4}

## Additional Context

(Please add any additional context or steps to reproduce the error here)

Molnfront (Author) commented:
I read similar issues, and it looks as if it's easy to put models in the wrong folder. Could it be that the instructions for the audio example are wrong?

I have too little experience to make a qualified guess about this.
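
For reference, the attached workflow loads two separate files, so the expected layout would be roughly the sketch below (the checkpoints location matches the paths in the error reports; the clip location is my assumption based on the CLIPLoader node, since the workflow itself does not state where that file lives):

```
ComfyUI/
└── models/
    ├── checkpoints/
    │   └── stable_audio_open_1.0.safetensors   # loaded by CheckpointLoaderSimple
    └── clip/
        └── t5_base.safetensors                 # loaded by CLIPLoader (type "stable_audio")
```

If both files are where their loaders look and the header check above passes, the remaining suspect is the checkpoint file itself not matching a layout this ComfyUI version can auto-detect.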
