Merge pull request #44 from microsoft/python
Python Runtime
Showing 14 changed files with 560 additions and 65 deletions.
@@ -1,5 +1,4 @@
{
    "python.defaultInterpreterPath": "../.venv/Scripts/python.exe",
    "python.testing.pytestArgs": [
        "./tests"
    ],
@@ -1 +1,120 @@
Prompty is an asset class and format for LLM prompts designed to enhance observability, understandability, and portability for developers. The primary goal is to accelerate the developer inner loop of prompt engineering and prompt source management in a cross-language and cross-platform implementation.

The file format has a supporting toolchain with a VS Code extension and runtimes in multiple programming languages to simplify and accelerate your AI application development.

The tooling comes together in three ways: the *prompty file asset*, the *VS Code extension tool*, and *runtimes* in multiple programming languages.

## The Prompty File Format
Prompty is a language-agnostic prompt asset for creating prompts and engineering the responses. Learn more about the format [here](https://prompty.ai/docs/prompty-file-spec).

Example prompty file:
```markdown
---
name: Basic Prompt
description: A basic prompt that uses the GPT-3 chat API to answer questions
authors:
  - sethjuarez
  - jietong
model:
  api: chat
  configuration:
    azure_deployment: gpt-35-turbo
sample:
  firstName: Jane
  lastName: Doe
  question: What is the meaning of life?
---
system:
You are an AI assistant who helps people find information.
As the assistant, you answer questions briefly, succinctly,
and in a personable manner using markdown and even add some personal flair with appropriate emojis.

# Customer
You are helping {{firstName}} {{lastName}} to find answers to their questions.
Use their name to address them in your responses.

user:
{{question}}
```
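The `sample` section supplies default values for the `{{...}}` placeholders in the prompt body. As a minimal sketch of that substitution (illustrative only; Prompty's actual renderer uses a full template engine):

```python
import re

def render(template: str, values: dict) -> str:
    """Replace {{name}} placeholders with sample values (illustration only)."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(values[m.group(1)]),
        template,
    )

sample = {"firstName": "Jane", "lastName": "Doe", "question": "What is the meaning of life?"}
print(render("You are helping {{firstName}} {{lastName}}.", sample))
# -> You are helping Jane Doe.
```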

## The Prompty VS Code Extension
Run Prompty files directly in VS Code. This Visual Studio Code extension offers an intuitive prompt playground to streamline the prompt engineering process. You can find the Prompty extension in the Visual Studio Code Marketplace.

Download the [VS Code extension here](https://marketplace.visualstudio.com/items?itemName=ms-toolsai.prompty).

## Using this Prompty Runtime
The Python runtime is a simple way to run your prompts in Python. The runtime is available as a Python package and can be installed using pip:

```bash
pip install prompty
```

Simple usage example:

```python
import prompty

# execute the prompt
response = prompty.execute("path/to/prompty/file")

print(response)
```

## Using Tracing in Prompty
Prompty supports tracing to help you understand the execution of your prompts. The built-in tracing dumps the execution of the prompt to a file.

```python
import prompty
from prompty.tracer import Trace, PromptyTracer

# add the default tracer, writing traces to the given directory
Trace.add_tracer("prompty", PromptyTracer("path/to/trace/dir"))

# execute the prompt
response = prompty.execute("path/to/prompty/file")

print(response)
```

You can also bring your own tracer by creating a `Tracer` class.
Simple example:

```python
from typing import Any

import prompty
from prompty.tracer import Tracer, Trace

class MyTracer(Tracer):

    def start(self, name: str) -> None:
        print(f"Starting {name}")

    def add(self, key: str, value: Any) -> None:
        print(f"Adding {key} with value {value}")

    def end(self) -> None:
        print("Ending")

# add your tracer
Trace.add_tracer("my_tracer", MyTracer())

# execute the prompt
response = prompty.execute("path/to/prompty/file")
```

To define your own tracer, subclass the `Tracer` class, implement the `start`, `add`, and `end` methods, and then add it to the `Trace` instance. You can add as many tracers as you like; they will all be called in order.
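A rough sketch of how a registry like `Trace` can fan each event out to every registered tracer in order (illustrative only, not Prompty's actual implementation):

```python
from typing import Any, List

class Tracer:
    """Minimal tracer interface mirroring start/add/end."""
    def start(self, name: str) -> None: ...
    def add(self, key: str, value: Any) -> None: ...
    def end(self) -> None: ...

class Trace:
    """Registry that dispatches each event to all tracers, in registration order."""
    _tracers: List[Tracer] = []

    @classmethod
    def add_tracer(cls, name: str, tracer: Tracer) -> None:
        cls._tracers.append(tracer)

    @classmethod
    def start(cls, name: str) -> None:
        for t in cls._tracers:
            t.start(name)

class Recorder(Tracer):
    """Toy tracer that records the events it sees."""
    def __init__(self) -> None:
        self.events: List[str] = []
    def start(self, name: str) -> None:
        self.events.append(f"start:{name}")

a, b = Recorder(), Recorder()
Trace.add_tracer("a", a)
Trace.add_tracer("b", b)
Trace.start("run")
print(a.events, b.events)  # both tracers saw the same event
```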

## CLI
The Prompty runtime also comes with a CLI tool that allows you to run prompts from the command line. The CLI tool is installed with the Python package.

```bash
prompty -s path/to/prompty/file
```

This will execute the prompt and print the response to the console, with default tracing enabled. The CLI also accepts a `--verbose`/`-v` flag to print the raw result and a `--chat`/`-c` flag to run the prompt in an interactive chat loop.

## Contributing
We welcome contributions to the Prompty project! This community-led project is open to all contributors. The project can be found on [GitHub](https://github.com/Microsoft/prompty).
@@ -0,0 +1 @@
__version__ = '0.1.1.dev21+g3644676.d20240722'
@@ -0,0 +1,85 @@
import os
import json
import click

from pathlib import Path
from pydantic import BaseModel

from . import load, execute
from .tracer import trace, Trace, PromptyTracer
from dotenv import load_dotenv

load_dotenv()
Trace.add_tracer("prompty", PromptyTracer())


def normalize_path(p, create_dir=False) -> Path:
    path = Path(p)
    if not path.is_absolute():
        path = Path(os.getcwd()).joinpath(path).absolute().resolve()
    else:
        path = path.absolute().resolve()

    if create_dir:
        if not path.exists():
            print(f"Creating directory {str(path)}")
            os.makedirs(str(path))

    return path


@trace
def chat_mode(prompt_path: str):
    W = "\033[0m"  # white (normal)
    R = "\033[31m"  # red
    G = "\033[32m"  # green
    O = "\033[33m"  # orange
    B = "\033[34m"  # blue
    P = "\033[35m"  # purple
    print(f"Executing {str(prompt_path)} in chat mode...")
    prompty = load(str(prompt_path))
    if "chat_history" not in prompty.sample:
        print(
            f"{R}{str(prompt_path)} needs to have a chat_history input to work in chat mode{W}"
        )
        return
    else:
        chat_history = prompty.sample["chat_history"]
        while True:
            user_input = input(f"{B}User:{W} ")
            if user_input == "exit":
                break
            chat_history.append({"role": "user", "content": user_input})
            # reloadable prompty file
            result = execute(prompt_path, inputs={"chat_history": chat_history})
            print(f"\n{G}Assistant:{W} {result}\n")
            chat_history.append({"role": "assistant", "content": result})
    print("Goodbye!")


@click.command()
@click.option("--source", "-s", required=True)
@click.option("--verbose", "-v", is_flag=True)
@click.option("--chat", "-c", is_flag=True)
@click.version_option()
@trace
def run(source, verbose, chat):
    prompt_path = normalize_path(source)
    if not prompt_path.exists():
        print(f"{str(prompt_path)} does not exist")
        return

    if chat:
        chat_mode(str(prompt_path))
    else:
        result = execute(str(prompt_path), raw=verbose)
        if issubclass(type(result), BaseModel):
            print(json.dumps(result.model_dump(), indent=4))
        elif isinstance(result, list):
            print(json.dumps([item.model_dump() for item in result], indent=4))
        else:
            print(result)


if __name__ == "__main__":
    # chat_mode takes the prompt path as its argument
    chat_mode(prompt_path="./tests/prompts/basic.prompt")