Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

Bug?: Embedded and Malicious Probabilities (index 2 is out of bounds for dimension 1 with size 2) #27

Open
mburges-cvl opened this issue Jul 30, 2024 · 3 comments
Assignees

Comments

@mburges-cvl
Copy link

Hello, I get this error when running the model locally, as described by the repo. however, I get this error:

index 2 is out of bounds for dimension 1 with size 2

with this traceback:

/python3.10/site-packages/mesop/server/server.py:153 | generate_data
 for _ in result: 
/python3.10/site-packages/mesop/runtime/context.py:161 | run_event_handler
 yield from result 
./llama-agentic-system/app/utils/chat.py:228 | on_input_enter
         state = me.state(State) 
         state.input = e.value 
         yield from submit() 
     def submit(): 
         state = me.state(State) 
         if state.in_progress or not state.input: 
./llama-agentic-system/app/utils/chat.py:270 | submit
         cur_uuids = set(state.output) 
         for op_uuid, op in transform(content): 
             KEY_TO_OUTPUTS[op_uuid] = op 
             if op_uuid not in cur_uuids: 
                 output.append(op_uuid) 
                 cur_uuids.add(op_uuid) 
./llama-agentic-system/app/utils/transform.py:39 | transform
     generator = sync_generator(EVENT_LOOP, client.run([input_message])) 
     for chunk in generator: 
         if not hasattr(chunk, "event"): 
             # Need to check for custom tool first 
             # since it does not produce event but instead 
             # a Message 
./llama-agentic-system/app/utils/common.py:36 | generator
         while True: 
             try: 
                 yield loop.run_until_complete(async_generator.__anext__()) 
             except StopAsyncIteration: 
                 break 
     return generator() 
/python3.10/asyncio/base_events.py:649 | run_until_complete
 return future.result() 
./llama-agentic-system/llama_agentic_system/utils.py:73 | run
 async for chunk in execute_with_custom_tools( 
./llama-agentic-system/llama_agentic_system/client.py:122 | execute_with_custom_tools
 async for chunk in system.create_agentic_system_turn(request): 
./llama-agentic-system/llama_agentic_system/agentic_system.py:763 | create_agentic_system_turn
 async for event in agent.create_and_execute_turn(request): 
./llama-agentic-system/llama_agentic_system/agentic_system.py:270 | create_and_execute_turn
 async for chunk in self.run( 
./llama-agentic-system/llama_agentic_system/agentic_system.py:396 | run
 async for res in self.run_shields_wrapper( 
./llama-agentic-system/llama_agentic_system/agentic_system.py:341 | run_shields_wrapper
 await self.run_shields(messages, shields) 
/python3.10/site-packages/llama_toolchain/safety/shields/shield_runner.py:41 | run_shields
 results = await asyncio.gather(*[s.run(messages) for s in shields]) 
/python3.10/site-packages/llama_toolchain/safety/shields/base.py:55 | run
 return await self.run_impl(text) 
/python3.10/site-packages/llama_toolchain/safety/shields/prompt_guard.py:96 | run_impl
 score_malicious = probabilities[0, 2].item() 

to fix it I changed :

         score_embedded = probabilities[0, 1].item()
         score_malicious = probabilities[0, 2].item()

to

        score_embedded = probabilities[0, 0].item()
        score_malicious = probabilities[0, 1].item()

in llama_toolchain/safety/shields/prompt_guard.py PromptGuardShield run_impl (Line 95/96)

Not sure if that is correct, but for me the model works now.

@cynikolai cynikolai self-assigned this Jul 30, 2024
@cynikolai
Copy link
Member

Hi there, are you certain you're using loading the correct PromptGuard local model? It should produce an output vector of length 3 rather than length 2.

@mburges-cvl
Copy link
Author

mburges-cvl commented Jul 31, 2024

Hi, i downloaded the model and the PromptGuard as stated in the read me:

# download the 8B model, this can be run on a single GPU
llama download meta-llama/Meta-Llama-3.1-8B-Instruct

# llama-agents have safety enabled by default. For this, you will need
# safety models -- Llama-Guard and Prompt-Guard
llama download meta-llama/Llama-Guard-3-8B --ignore-patterns original

Could it be the version miss match between the models 3.1 and the PromptGuards 3?

@cynikolai
Copy link
Member

Hm so PromptGuard is a V1 model, LlamaGuard 3 is a separate model for content moderation. There should also be a download command for the PromptGuard model separately in the readme - the command in your comment downloads LlamaGuard.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

No branches or pull requests

2 participants