Make some formatting updates after testing, some grammar updates
Signed-off-by: Diana <[email protected]>
cloudjumpercat committed Sep 18, 2024
1 parent 2bacd86 commit d40aa8b
Showing 2 changed files with 12 additions and 12 deletions.
@@ -8,7 +8,7 @@ title: Set up AI Semantic Cache with Mistral
 * Mistral's API key
 * [Redis configured as a vector database](https://redis.io/docs/latest/develop/get-started/vector-database/)
 * [Redis configured as a cache](https://redis.io/docs/latest/operate/oss_and_stack/management/config/#configuring-redis-as-a-cache)
-* You need a service to contain the route for the LLM provider. Create a service **first**:
+* A service and a route for the LLM provider. You need a service to contain the route for the LLM provider. Create a service **first**:
   ```sh
   curl -X POST http://localhost:8001/services \
     --data "name=ai-semantic-cache" \
@@ -34,11 +34,11 @@ config:
     auth:
       header_name: Authorization
       header_value: Bearer MISTRAL_API_KEY
-  model:
-    provider: mistral
-    name: mistral-embed
-    options:
-      upstream_url: https://api.mistral.ai/v1/embeddings
+    model:
+      provider: mistral
+      name: mistral-embed
+      options:
+        upstream_url: https://api.mistral.ai/v1/embeddings
   vectordb:
     dimensions: 1024
     distance_metric: cosine
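For context, the pieces touched by this hunk (the service created with curl plus the re-indented plugin settings) can be read together as one declarative `kong.yml` fragment. This is a hedged sketch rather than a file from the repository: the route name, the `/mistral` path, the upstream `url`, and the `vectordb` `strategy` and `threshold` values are assumptions not shown in this diff, and `MISTRAL_API_KEY` is a placeholder.

```yaml
_format_version: "3.0"
services:
  - name: ai-semantic-cache
    url: https://api.mistral.ai        # assumed upstream; the curl command in the diff is truncated
    routes:
      - name: mistral-chat             # assumed route name and path
        paths:
          - /mistral
    plugins:
      - name: ai-semantic-cache
        config:
          embeddings:
            auth:
              header_name: Authorization
              header_value: Bearer MISTRAL_API_KEY   # placeholder, replace with a real key
            model:
              provider: mistral
              name: mistral-embed
              options:
                upstream_url: https://api.mistral.ai/v1/embeddings
          vectordb:
            dimensions: 1024
            distance_metric: cosine
            strategy: redis            # assumed; the prerequisites call for Redis
            threshold: 0.1             # assumed example similarity threshold
```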
@@ -8,7 +8,7 @@ title: Set up AI Semantic Cache with OpenAI
 * OpenAI account and subscription
 * [Redis configured as a vector database](https://redis.io/docs/latest/develop/get-started/vector-database/)
 * [Redis configured as a cache](https://redis.io/docs/latest/operate/oss_and_stack/management/config/#configuring-redis-as-a-cache)
-* You need a service to contain the route for the LLM provider. Create a service **first**:
+* A service and a route for the LLM provider. You need a service to contain the route for the LLM provider. Create a service **first**:
   ```sh
   curl -X POST http://localhost:8001/services \
     --data "name=ai-semantic-cache" \
@@ -33,11 +33,11 @@ config:
     auth:
       header_name: Authorization
       header_value: Bearer OPENAI_API_KEY
-  model:
-    provider: openai
-    name: text-embedding-3-large
-    options:
-      upstream_url: https://api.openai.com/v1/embeddings
+    model:
+      provider: openai
+      name: text-embedding-3-large
+      options:
+        upstream_url: https://api.openai.com/v1/embeddings
   vectordb:
     dimensions: 3072
     distance_metric: cosine
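As with the Mistral file, the plugin settings in this hunk can be read as one nested YAML block. A sketch under the same assumptions: `strategy` and `threshold` are illustrative values not shown in this diff, and `OPENAI_API_KEY` is a placeholder.

```yaml
plugins:
  - name: ai-semantic-cache
    config:
      embeddings:
        auth:
          header_name: Authorization
          header_value: Bearer OPENAI_API_KEY    # placeholder, replace with a real key
        model:
          provider: openai
          name: text-embedding-3-large
          options:
            upstream_url: https://api.openai.com/v1/embeddings
      vectordb:
        dimensions: 3072                          # matches the model's embedding dimensions
        distance_metric: cosine
        strategy: redis                           # assumed; the prerequisites call for Redis
        threshold: 0.1                            # assumed example similarity threshold
```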
