
[DOCS] Sample Workflow Steps #314

Closed
dbwiddis opened this issue Dec 22, 2023 · 0 comments
Labels
documentation Improvements or additions to documentation

dbwiddis commented Dec 22, 2023

The content below is intended for publication to opensearch.org.

Please comment below on anything you think needs to be added/changed/deleted.

This references the API docs in #311


Example Workflow Steps

Flow Framework can be used to set up a common use case: conversational chat using a Chain-of-Thought (CoT) agent. This setup requires the following sequence of API requests, with provisioned resources used in subsequent requests:

  • Deploy a model on the cluster
    • Create a connector to a remote model
    • Register a model using the connector just created
    • Deploy that model
  • Use the deployed model for inference
    • Set up several tools which perform specific tasks
    • Set up one or more agents which use some combination of the tools
    • Set up tools representing these agents
    • Set up a root agent which may delegate the task to either tools or another agent

A full template to perform the first three steps is documented in the Create Workflow API documentation.

Deploy a model on the cluster

The first step in the workflow is to create a connector to a remote model. The content in the user_inputs field exactly matches the ML Commons Create Connector API.

nodes:
- id: create_connector_1
  type: create_connector
  user_inputs:
    name: OpenAI Chat Connector
    description: The connector to public OpenAI model service for GPT 3.5
    version: '1'
    protocol: http
    parameters:
      endpoint: api.openai.com
      model: gpt-3.5-turbo
    credential:
      openAI_key: '12345'
    actions:
    - action_type: predict
      method: POST
      url: https://${parameters.endpoint}/v1/chat/completions

This API results in a connector_id which is needed to register this remote model. The previous_node_inputs field indicates that the required connector_id will be provided from the output of the create_connector_1 step. Other inputs required by the Register Model API are included in the user_inputs field.
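The wiring performed by previous_node_inputs can be pictured as a key lookup: each completed step exposes an output map, and a downstream step pulls named keys from it before merging them with its own user_inputs. A minimal Python sketch of this idea (the output values and the resolve_inputs helper are hypothetical illustrations, not Flow Framework internals):

```python
# Hypothetical illustration of how previous_node_inputs resolves values.
# The connector_id value and helper function are assumptions for this sketch.

step_outputs = {
    "create_connector_1": {"connector_id": "conn-abc123"},
}

register_model_2 = {
    "previous_node_inputs": {"create_connector_1": "connector_id"},
    "user_inputs": {"name": "openAI-gpt-3.5-turbo", "function_name": "remote"},
}

def resolve_inputs(step, outputs):
    """Merge a step's user_inputs with values pulled from earlier steps' outputs."""
    resolved = dict(step["user_inputs"])
    for source_step, key in step["previous_node_inputs"].items():
        resolved[key] = outputs[source_step][key]
    return resolved

print(resolve_inputs(register_model_2, step_outputs))
```

The merged dictionary now contains the connector_id alongside the user-supplied fields, which is all the Register Model API needs.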

- id: register_model_2
  type: register_remote_model
  previous_node_inputs:
    create_connector_1: connector_id
  user_inputs:
    name: openAI-gpt-3.5-turbo
    function_name: remote
    description: test model

The output of this step is a model_id. The registered model must then be deployed to the cluster. The Deploy Model API requires this model_id which is included in the previous_node_inputs field of the next step.

- id: deploy_model_3
  type: deploy_model
  # This step needs the model_id produced as an output of the previous step
  previous_node_inputs:
    register_model_2: model_id

When using the Deploy Model API directly, a task ID is returned, requiring use of the Tasks API to determine when the deployment is complete. Flow Framework automates the process of checking this progress and returns the final model_id directly.
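The polling that Flow Framework automates looks roughly like the following generic loop. This is a sketch, not the plugin's code: fetch_task stands in for a call to the Tasks API, and the task field names here are simplified assumptions.

```python
import time

def wait_for_completion(fetch_task, interval=1.0, timeout=30.0):
    """Poll fetch_task() until the task reports COMPLETED, then return its model_id."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_task()  # stand-in for a Tasks API call
        if task["state"] == "COMPLETED":
            return task["model_id"]
        if task["state"] == "FAILED":
            raise RuntimeError("deployment failed")
        time.sleep(interval)
    raise TimeoutError("deployment did not complete in time")

# Simulated task that completes on the third poll.
responses = iter([
    {"state": "CREATED"},
    {"state": "RUNNING"},
    {"state": "COMPLETED", "model_id": "model-xyz"},
])
print(wait_for_completion(lambda: next(responses), interval=0.01))
```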

In order to connect these steps in sequence, they must be connected by an edge in the graph. When a step lists a node in its previous_node_inputs field, that node must be the source of an edge and the dependent step its dest. The register_model_2 step requires the connector_id from the create_connector_1 step, and the deploy_model_3 step requires the model_id from the register_model_2 step, so the first two edges in the graph enforce these dependencies.

edges:
- source: create_connector_1
  dest: register_model_2
- source: register_model_2
  dest: deploy_model_3
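The edges form a directed acyclic graph, and the provisioning order is effectively a topological sort of that graph. A small sketch of deriving the execution order from the two edges above, using Python's standard library (an illustration of the concept, not how the plugin schedules steps internally):

```python
from graphlib import TopologicalSorter  # Python 3.9+

edges = [
    ("create_connector_1", "register_model_2"),
    ("register_model_2", "deploy_model_3"),
]

# TopologicalSorter takes a mapping of node -> set of predecessors.
deps = {}
for source, dest in edges:
    deps.setdefault(dest, set()).add(source)
    deps.setdefault(source, set())

order = list(TopologicalSorter(deps).static_order())
print(order)  # ['create_connector_1', 'register_model_2', 'deploy_model_3']
```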

Use the deployed model for inference

A Chain-of-Thought (CoT) Agent can leverage this model in a tool using the ML Commons Agent Framework [Link TBD]. This step doesn’t correspond exactly to an API, but represents a component of the body required by the Register Agent API. This simplifies the register request and allows re-use of the same tool in multiple agents.

A Math Tool can also be configured. It does not depend on any previous steps.

- id: math_tool
  type: create_tool
  user_inputs:
    name: MathTool
    type: MathTool
    description: A general tool to calculate any math problem. The action input
      must be a valid math expression, like 2+3
    parameters:
      max_iteration: 5

Other tools can be configured similarly. The CoT agent can then be configured to use these tools: its previous_node_inputs field identifies math_tool as one of its tools, and additional tools could be added here.

The Agent also needs an LLM to reason with the tools, and that is defined by the llm.model_id field. This example assumes the model_id from the deploy_model_3 step will be used. However, if another model is already deployed, the model_id of that previously deployed model could be included in the user_inputs field instead.

- id: sub_agent
  type: register_agent
  previous_node_inputs:
    deploy_model_3: llm.model_id
    math_tool: tools
  user_inputs:
    name: Sub Agent
    type: cot
    description: this is a test agent
    parameters:
      hello: world
    llm.parameters:
      max_iteration: '5'
      stop_when_no_tool_found: 'true'
    memory:
      type: conversation_index
    app_type: chatbot

Edges must be defined to permit the agent to retrieve these fields from the previous nodes.

- source: math_tool
  dest: sub_agent
- source: deploy_model_3
  dest: sub_agent

Agents can be used as tools for another agent. Registering an agent produces an agent_id in the output. This step defines a tool which uses the agent_id from the previous step.

- id: agent_tool
  type: create_tool
  previous_node_inputs:
    sub_agent: agent_id
  user_inputs:
    name: AgentTool
    type: AgentTool
    description: Agent Tool
    parameters:
      max_iteration: 5

An edge connection is required to enable the previous_node_inputs field.

- source: sub_agent
  dest: agent_tool

A tool may reference an ML Model. This example gets the required model_id from the model deployed in a previous step.

- id: ml_model_tool
  type: create_tool
  previous_node_inputs:
    deploy_model_3: model_id
  user_inputs:
    name: MLModelTool
    type: MLModelTool
    alias: language_model_tool
    description: A general tool to answer any question.
    parameters:
      prompt: Answer the question as best you can.
      response_filter: choices[0].message.content

An edge is required to use this previous_node_inputs value.

- source: deploy_model_3
  dest: ml_model_tool

A conversational chat application will communicate with a single root agent, which includes the ML Model Tool and the Agent Tool in its tools field. It will also obtain the llm.model_id from the deployed model. Some agents require tools to be in a specific order, which can be enforced with the tools_order field.

- id: root_agent
  type: register_agent
  previous_node_inputs:
    deploy_model_3: llm.model_id
    ml_model_tool: tools
    agent_tool: tools
  user_inputs:
    name: DEMO-Test_Agent_For_CoT
    type: cot
    description: this is a test agent
    parameters:
      prompt: Answer the question as best you can.
    llm.parameters:
      max_iteration: '5'
      stop_when_no_tool_found: 'true'
    tools_order: ['agent_tool', 'ml_model_tool']
    memory:
      type: conversation_index
    app_type: chatbot

Edges are required for the previous_node_inputs sources.

- source: deploy_model_3
  dest: root_agent
- source: ml_model_tool
  dest: root_agent
- source: agent_tool
  dest: root_agent
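Because previous_node_inputs is a map, the order in which tools are collected from previous nodes is not guaranteed; tools_order pins it down. A minimal sketch of the reordering (tool objects simplified to name strings; an illustration, not the plugin's implementation):

```python
# Tools as collected from previous_node_inputs, in arbitrary arrival order.
collected_tools = ["ml_model_tool", "agent_tool"]

# The order the agent requires, as given in the tools_order field.
tools_order = ["agent_tool", "ml_model_tool"]

# Reorder the collected tools to match tools_order.
ordered = sorted(collected_tools, key=tools_order.index)
print(ordered)  # ['agent_tool', 'ml_model_tool']
```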

A final template including all of these steps in the provision workflow is shown below in YAML format.

# This template demonstrates provisioning the resources for a 
# Chain-of-Thought chat bot
name: tool-register-agent
description: test case
use_case: REGISTER_AGENT
version:
  template: 1.0.0
  compatibility:
  - 2.12.0
  - 3.0.0
workflows:
  # This workflow defines the actions to be taken when the Provision Workflow API is used
  provision:
    nodes:
    # The first three nodes create a connector to a remote model, register that model, and deploy it
    - id: create_connector_1
      type: create_connector
      user_inputs:
        name: OpenAI Chat Connector
        description: The connector to public OpenAI model service for GPT 3.5
        version: '1'
        protocol: http
        parameters:
          endpoint: api.openai.com
          model: gpt-3.5-turbo
        credential:
          openAI_key: '12345'
        actions:
        - action_type: predict
          method: POST
          url: https://${parameters.endpoint}/v1/chat/completions
    - id: register_model_2
      type: register_remote_model
      previous_node_inputs:
        create_connector_1: connector_id
      user_inputs:
        name: openAI-gpt-3.5-turbo
        function_name: remote
        description: test model
    - id: deploy_model_3
      type: deploy_model
      previous_node_inputs:
        register_model_2: model_id
    # For example purposes, the model_id obtained as the output of the deploy_model_3 step will be used
    # for several below steps.  However, any other deployed model_id can be used for those steps.
    # This is one example tool from the Agent Framework.
    - id: math_tool
      type: create_tool
      user_inputs:
        name: MathTool
        type: MathTool
        description: A general tool to calculate any math problem. The action input
          must be a valid math expression, like 2+3
        parameters:
          max_iteration: 5
    # This simple agent only has one tool, but could be configured with many tools
    - id: sub_agent
      type: register_agent
      previous_node_inputs:
        deploy_model_3: llm.model_id
        math_tool: tools
      user_inputs:
        name: Sub Agent
        type: cot
        description: this is a test agent
        parameters:
          hello: world
        llm.parameters:
          max_iteration: '5'
          stop_when_no_tool_found: 'true'
        memory:
          type: conversation_index
        app_type: chatbot
    # An agent can be used itself as a tool in a nested relationship
    - id: agent_tool
      type: create_tool
      previous_node_inputs:
        sub_agent: agent_id
      user_inputs:
        name: AgentTool
        type: AgentTool
        description: Agent Tool
        parameters:
          max_iteration: 5
    # An ML Model can be used as a tool
    - id: ml_model_tool
      type: create_tool
      previous_node_inputs:
        deploy_model_3: model_id
      user_inputs:
        name: MLModelTool
        type: MLModelTool
        alias: language_model_tool
        description: A general tool to answer any question.
        parameters:
          prompt: Answer the question as best you can.
          response_filter: choices[0].message.content
    # This final agent will be the interface for the CoT chat user
    - id: root_agent
      type: register_agent
      previous_node_inputs:
        deploy_model_3: llm.model_id
        ml_model_tool: tools
        agent_tool: tools
      user_inputs:
        name: DEMO-Test_Agent_For_CoT
        type: cot
        description: this is a test agent
        parameters:
          prompt: Answer the question as best you can.
        llm.parameters:
          max_iteration: '5'
          stop_when_no_tool_found: 'true'
        tools_order: ['agent_tool', 'ml_model_tool']
        memory:
          type: conversation_index
        app_type: chatbot
    # These edges define nodes which must provide output as input for later nodes in the workflow
    edges:
    - source: create_connector_1
      dest: register_model_2
    - source: register_model_2
      dest: deploy_model_3
    - source: math_tool
      dest: sub_agent
    - source: deploy_model_3
      dest: sub_agent
    - source: sub_agent
      dest: agent_tool
    - source: deploy_model_3
      dest: ml_model_tool
    - source: deploy_model_3
      dest: root_agent
    - source: ml_model_tool
      dest: root_agent
    - source: agent_tool
      dest: root_agent

The same template is shown in JSON format.

{
  "name": "tool-register-agent",
  "description": "test case",
  "use_case": "REGISTER_AGENT",
  "version": {
    "template": "1.0.0",
    "compatibility": [
      "2.12.0",
      "3.0.0"
    ]
  },
  "workflows": {
    "provision": {
      "nodes": [
        {
          "id": "create_connector_1",
          "type": "create_connector",
          "user_inputs": {
            "name": "OpenAI Chat Connector",
            "description": "The connector to public OpenAI model service for GPT 3.5",
            "version": "1",
            "protocol": "http",
            "parameters": {
              "endpoint": "api.openai.com",
              "model": "gpt-3.5-turbo"
            },
            "credential": {
              "openAI_key": "12345"
            },
            "actions": [
              {
                "action_type": "predict",
                "method": "POST",
                "url": "https://${parameters.endpoint}/v1/chat/completions"
              }
            ]
          }
        },
        {
          "id": "register_model_2",
          "type": "register_remote_model",
          "previous_node_inputs": {
            "create_connector_1": "connector_id"
          },
          "user_inputs": {
            "name": "openAI-gpt-3.5-turbo",
            "function_name": "remote",
            "description": "test model"
          }
        },
        {
          "id": "deploy_model_3",
          "type": "deploy_model",
          "previous_node_inputs": {
            "register_model_2": "model_id"
          }
        },
        {
          "id": "math_tool",
          "type": "create_tool",
          "user_inputs": {
            "name": "MathTool",
            "type": "MathTool",
            "description": "A general tool to calculate any math problem. The action input must be a valid math expression, like 2+3",
            "parameters": {
              "max_iteration": 5
            }
          }
        },
        {
          "id": "sub_agent",
          "type": "register_agent",
          "previous_node_inputs": {
            "deploy_model_3": "llm.model_id",
            "math_tool": "tools"
          },
          "user_inputs": {
            "name": "Sub Agent",
            "type": "cot",
            "description": "this is a test agent",
            "parameters": {
              "hello": "world"
            },
            "llm.parameters": {
              "max_iteration": "5",
              "stop_when_no_tool_found": "true"
            },
            "memory": {
              "type": "conversation_index"
            },
            "app_type": "chatbot"
          }
        },
        {
          "id": "agent_tool",
          "type": "create_tool",
          "previous_node_inputs": {
            "sub_agent": "agent_id"
          },
          "user_inputs": {
            "name": "AgentTool",
            "type": "AgentTool",
            "description": "Agent Tool",
            "parameters": {
              "max_iteration": 5
            }
          }
        },
        {
          "id": "ml_model_tool",
          "type": "create_tool",
          "previous_node_inputs": {
            "deploy_model_3": "model_id"
          },
          "user_inputs": {
            "name": "MLModelTool",
            "type": "MLModelTool",
            "alias": "language_model_tool",
            "description": "A general tool to answer any question.",
            "parameters": {
              "prompt": "Answer the question as best you can.",
              "response_filter": "choices[0].message.content"
            }
          }
        },
        {
          "id": "root_agent",
          "type": "register_agent",
          "previous_node_inputs": {
            "deploy_model_3": "llm.model_id",
            "ml_model_tool": "tools",
            "agent_tool": "tools"
          },
          "user_inputs": {
            "name": "DEMO-Test_Agent_For_CoT",
            "type": "cot",
            "description": "this is a test agent",
            "parameters": {
              "prompt": "Answer the question as best you can."
            },
            "llm.parameters": {
              "max_iteration": "5",
              "stop_when_no_tool_found": "true"
            },
            "tools_order": [
              "agent_tool",
              "ml_model_tool"
            ],
            "memory": {
              "type": "conversation_index"
            },
            "app_type": "chatbot"
          }
        }
      ],
      "edges": [
        {
          "source": "create_connector_1",
          "dest": "register_model_2"
        },
        {
          "source": "register_model_2",
          "dest": "deploy_model_3"
        },
        {
          "source": "math_tool",
          "dest": "sub_agent"
        },
        {
          "source": "deploy_model_3",
          "dest": "sub_agent"
        },
        {
          "source": "sub_agent",
          "dest": "agent_tool"
        },
        {
          "source": "deploy_model_3",
          "dest": "ml_model_tool"
        },
        {
          "source": "deploy_model_3",
          "dest": "root_agent"
        },
        {
          "source": "ml_model_tool",
          "dest": "root_agent"
        },
        {
          "source": "agent_tool",
          "dest": "root_agent"
        }
      ]
    }
  }
}
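Node ids must match exactly between the nodes list, previous_node_inputs, and edges; a hyphen/underscore slip such as deploy-model-3 vs. deploy_model_3 is easy to introduce. A minimal sketch of a sanity check that every edge endpoint names a defined node (a hypothetical pre-submission check, not a Flow Framework API):

```python
def validate_edges(workflow):
    """Return edge endpoints that don't match any defined node id."""
    node_ids = {node["id"] for node in workflow["nodes"]}
    bad = []
    for edge in workflow["edges"]:
        for endpoint in (edge["source"], edge["dest"]):
            if endpoint not in node_ids:
                bad.append(endpoint)
    return bad

# A tiny workflow with a deliberately mismatched edge source.
workflow = {
    "nodes": [{"id": "deploy_model_3"}, {"id": "ml_model_tool"}],
    "edges": [{"source": "deploy-model-3", "dest": "ml_model_tool"}],
}
print(validate_edges(workflow))  # ['deploy-model-3']
```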
@dbwiddis dbwiddis added documentation Improvements or additions to documentation and removed untriaged labels Dec 26, 2023
@dbwiddis dbwiddis closed this as completed Jan 8, 2024