Merge pull request #44 from microsoft/python
Python Runtime
sethjuarez committed Jul 23, 2024
2 parents 33c3a34 + a05e955 commit 0f0dc81
Showing 14 changed files with 560 additions and 65 deletions.
5 changes: 5 additions & 0 deletions runtime/prompty/.gitignore
@@ -160,3 +160,8 @@ cython_debug/
# and can be added to the global gitignore or merged into this file. For a more nuclear
# option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/


# trace
trace.json
.runs/
1 change: 0 additions & 1 deletion runtime/prompty/.vscode/settings.json
@@ -1,5 +1,4 @@
{
-    "python.defaultInterpreterPath": "../.venv/Scripts/python.exe",
    "python.testing.pytestArgs": [
        "./tests"
    ],
119 changes: 119 additions & 0 deletions runtime/prompty/README.md
@@ -1 +1,120 @@

Prompty is an asset class and format for LLM prompts designed to enhance observability, understandability, and portability for developers. The primary goal is to accelerate the developer inner loop of prompt engineering and prompt source management in a cross-language and cross-platform implementation.

The file format has a supporting toolchain with a VS Code extension and runtimes in multiple programming languages to simplify and accelerate your AI application development.

The tooling comes together in three ways: the *prompty file asset*, the *VS Code extension tool*, and *runtimes* in multiple programming languages.

## The Prompty File Format
Prompty is a language-agnostic prompt asset for creating prompts and engineering the responses. Learn more about the format [here](https://prompty.ai/docs/prompty-file-spec).

Example prompty file:
```markdown
---
name: Basic Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
authors:
  - sethjuarez
  - jietong
model:
  api: chat
  configuration:
    azure_deployment: gpt-35-turbo
sample:
  firstName: Jane
  lastName: Doe
  question: What is the meaning of life?
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly,
and in a personable manner using markdown and even add some personal flair with appropriate emojis.

# Customer
You are helping {{firstName}} {{lastName}} to find answers to their questions.
Use their name to address them in your responses.

user:
{{question}}
```


## The Prompty VS Code Extension
Run Prompty files directly in VS Code. The extension offers an intuitive prompt playground inside the editor to streamline the prompt engineering process. You can find the Prompty extension in the Visual Studio Code Marketplace.

Download the [VS Code extension here](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.prompty).


## Using this Prompty Runtime
The Python runtime is a simple way to run your prompts in Python. The runtime is available as a Python package and can be installed using pip.

```bash
pip install prompty
```

Simple usage example:

```python
import prompty

# execute the prompt
response = prompty.execute("path/to/prompty/file")

print(response)
```
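
You can also pass an `inputs` dictionary to override the `sample` values declared in the prompty frontmatter (the same parameter the CLI in this commit uses); a minimal sketch against the example file above:

```python
import prompty

# inputs fill the {{...}} template parameters declared in the prompty file
response = prompty.execute(
    "path/to/prompty/file",
    inputs={
        "firstName": "Jane",
        "lastName": "Doe",
        "question": "What is the meaning of life?",
    },
)

print(response)
```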

## Using Tracing in Prompty
Prompty supports tracing to help you understand the execution of your prompts. The built-in tracing dumps the execution of the prompt to a file.

```python
import prompty
from prompty.tracer import Trace, PromptyTracer

# add default tracer
Trace.add_tracer("prompty", PromptyTracer("path/to/trace/dir"))

# execute the prompt
response = prompty.execute("path/to/prompty/file")

print(response)
```
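
The `trace` decorator used throughout this commit (`prompty/__init__.py`, `cli.py`) can also wrap your own functions so they appear in the trace output. A small sketch, assuming a prompty file that declares a `question` input:

```python
import prompty
from prompty.tracer import trace, Trace, PromptyTracer

# default file tracer, as above
Trace.add_tracer("prompty", PromptyTracer("path/to/trace/dir"))

@trace
def ask(question: str) -> str:
    # the nested prompty.execute call is recorded inside this function's trace
    return prompty.execute("path/to/prompty/file", inputs={"question": question})

print(ask("What is the meaning of life?"))
```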

You can also bring your own tracer by subclassing the `Tracer` class.
A simple example:

```python
import prompty
from typing import Any
from prompty.tracer import Tracer, Trace

class MyTracer(Tracer):

    def start(self, name: str) -> None:
        print(f"Starting {name}")

    def add(self, key: str, value: Any) -> None:
        print(f"Adding {key} with value {value}")

    def end(self) -> None:
        print("Ending")

# add your tracer
Trace.add_tracer("my_tracer", MyTracer())

# execute the prompt
response = prompty.execute("path/to/prompty/file")

```

To define your own tracer, subclass the `Tracer` class, implement the `start`, `add`, and `end` methods, and then register it with `Trace`. You can add as many tracers as you like - they will all be called in order.
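
For example, a sketch that registers both the built-in `PromptyTracer` and the `MyTracer` class above:

```python
from prompty.tracer import Trace, PromptyTracer

# called in registration order: file dump first, then console output
Trace.add_tracer("prompty", PromptyTracer("path/to/trace/dir"))
Trace.add_tracer("console", MyTracer())
```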

## CLI
The Prompty runtime also comes with a CLI tool that allows you to run prompts from the command line. The CLI tool is installed with the Python package.

```bash
prompty -s path/to/prompty/file
```

This will execute the prompt and print the response to the console. It also has default tracing enabled.
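
The `cli.py` added later in this commit also defines `--verbose`/`-v` to print the raw model result and `--chat`/`-c` for an interactive chat loop:

```bash
# print the raw model response
prompty -s path/to/prompty/file -v

# chat interactively (the prompty file must declare a chat_history input)
prompty -s path/to/prompty/file -c
```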

## Contributing
We welcome contributions to the Prompty project! This community-led project is open to all contributors. The project can be found on [GitHub](https://github.com/Microsoft/prompty).
27 changes: 26 additions & 1 deletion runtime/prompty/pdm.lock

Some generated files are not rendered by default.

24 changes: 19 additions & 5 deletions runtime/prompty/prompty/__init__.py
@@ -2,6 +2,8 @@
import traceback
from pathlib import Path
from typing import Dict, List, Union

from .tracer import trace
from .core import (
    Frontmatter,
    InvokerFactory,
@@ -16,6 +18,16 @@
from .parsers import *
from .executors import *
from .processors import *
from ._version import __version__

__all__ = [
    "load",
    "prepare",
    "run",
    "execute",
    "headless",
    "__version__",
]


def load_global_config(
@@ -46,6 +58,7 @@ def load_global_config(
    return {}


@trace(description="Create a headless prompty object for programmatic use.")
def headless(
api: str,
content: str | List[str] | dict,
Expand Down Expand Up @@ -102,6 +115,7 @@ def headless(
return Prompty(model=modelSettings, template=templateSettings, content=content)


@trace(description="Load a prompty file.")
def load(prompty_file: str, configuration: str = "default") -> Prompty:
"""Load a prompty file.
Expand All @@ -127,7 +141,7 @@ def load(prompty_file: str, configuration: str = "default") -> Prompty:
p = Path(prompty_file)
if not p.is_absolute():
# get caller's path (take into account trace frame)
caller = Path(traceback.extract_stack()[-2].filename)
caller = Path(traceback.extract_stack()[-3].filename)
p = Path(caller.parent / p).resolve().absolute()

# load dictionary from prompty file
Expand Down Expand Up @@ -228,7 +242,7 @@ def load(prompty_file: str, configuration: str = "default") -> Prompty:
)
return p


@trace(description="Prepare the inputs for the prompt.")
def prepare(
prompt: Prompty,
inputs: Dict[str, any] = {},
Expand Down Expand Up @@ -274,7 +288,7 @@ def prepare(

return result


@trace(description="Run the prepared Prompty content against the model.")
def run(
prompt: Prompty,
content: dict | list | str,
Expand Down Expand Up @@ -335,7 +349,7 @@ def run(

return result


@trace(description="Execute a prompty")
def execute(
prompt: Union[str, Prompty],
configuration: Dict[str, any] = {},
Expand Down Expand Up @@ -376,7 +390,7 @@ def execute(
path = Path(prompt)
if not path.is_absolute():
# get caller's path (take into account trace frame)
caller = Path(traceback.extract_stack()[-2].filename)
caller = Path(traceback.extract_stack()[-3].filename)
path = Path(caller.parent / path).resolve().absolute()
prompt = load(path, connection)

1 change: 1 addition & 0 deletions runtime/prompty/prompty/_version.py
@@ -0,0 +1 @@
__version__ = '0.1.1.dev21+g3644676.d20240722'
85 changes: 85 additions & 0 deletions runtime/prompty/prompty/cli.py
@@ -0,0 +1,85 @@
import os
import json
import click


from pathlib import Path
from pydantic import BaseModel

from . import load, execute
from .tracer import trace, Trace, PromptyTracer
from dotenv import load_dotenv

load_dotenv()
Trace.add_tracer("prompty", PromptyTracer())

def normalize_path(p, create_dir=False) -> Path:
    path = Path(p)
    if not path.is_absolute():
        path = Path(os.getcwd()).joinpath(path).absolute().resolve()
    else:
        path = path.absolute().resolve()

    if create_dir:
        if not path.exists():
            print(f"Creating directory {str(path)}")
            os.makedirs(str(path))

    return path


@trace
def chat_mode(prompt_path: str):
    W = "\033[0m"  # white (normal)
    R = "\033[31m"  # red
    G = "\033[32m"  # green
    O = "\033[33m"  # orange
    B = "\033[34m"  # blue
    P = "\033[35m"  # purple
    print(f"Executing {str(prompt_path)} in chat mode...")
    prompty = load(str(prompt_path))
    if "chat_history" not in prompty.sample:
        print(
            f"{R}{str(prompt_path)} needs to have a chat_history input to work in chat mode{W}"
        )
        return
    else:
        chat_history = prompty.sample["chat_history"]
        while True:
            user_input = input(f"{B}User:{W} ")
            if user_input == "exit":
                break
            chat_history.append({"role": "user", "content": user_input})
            # reloadable prompty file
            result = execute(prompt_path, inputs={"chat_history": chat_history})
            print(f"\n{G}Assistant:{W} {result}\n")
            chat_history.append({"role": "assistant", "content": result})
        print("Goodbye!")


@click.command()
@click.option("--source", "-s", required=True)
@click.option("--verbose", "-v", is_flag=True)
@click.option("--chat", "-c", is_flag=True)
@click.version_option()
@trace
def run(source, verbose, chat):
    prompt_path = normalize_path(source)
    if not prompt_path.exists():
        print(f"{str(prompt_path)} does not exist")
        return

    if chat:
        chat_mode(str(prompt_path))
    else:
        result = execute(str(prompt_path), raw=verbose)
        if issubclass(type(result), BaseModel):
            print(json.dumps(result.model_dump(), indent=4))
        elif isinstance(result, list):
            print(json.dumps([item.model_dump() for item in result], indent=4))
        else:
            print(result)


if __name__ == "__main__":
    chat_mode(prompt_path="./tests/prompts/basic.prompt")
