# Helicone

This page covers how to use [Helicone](https://helicone.ai) within LangChain.

## What is Helicone?

Helicone is an [open-source](https://github.com/Helicone/helicone) observability platform that proxies your OpenAI traffic and provides key insights into your spend, latency, and usage.

![Helicone](../_static/HeliconeDashboard.png)

## Quick start

With your LangChain environment you can just set the following environment variable.

```bash
export OPENAI_API_BASE="https://oai.hconeai.com/v1"
```
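The same can be done from Python before the OpenAI client is used (a minimal sketch, equivalent to the shell export above):

```python
import os

# Point the OpenAI SDK at Helicone's proxy, same as the shell export above
os.environ["OPENAI_API_BASE"] = "https://oai.hconeai.com/v1"
```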

Now head over to [helicone.ai](https://helicone.ai/onboarding?step=2) to create your account, and add your OpenAI API key within the dashboard to view your logs.

![Helicone](../_static/HeliconeKeys.png)

## How to enable Helicone caching

```python
import openai
from langchain.llms import OpenAI

# Route OpenAI requests through the Helicone proxy
openai.api_base = "https://oai.hconeai.com/v1"

# The Helicone-Cache-Enabled header turns on caching for this LLM's requests
llm = OpenAI(temperature=0.9, headers={"Helicone-Cache-Enabled": "true"})
text = "What is a helicone?"
print(llm(text))
```

[Helicone caching docs](https://docs.helicone.ai/advanced-usage/caching)
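Caching behavior can be tuned with additional headers; Helicone's caching docs describe a `Cache-Control` header for setting the cache TTL. A sketch of composing such a header dict (the one-hour `max-age` value here is illustrative, not a recommendation):

```python
# Headers enabling Helicone caching with an illustrative one-hour TTL;
# see Helicone's caching docs for the supported Cache-Control values.
cache_headers = {
    "Helicone-Cache-Enabled": "true",
    "Cache-Control": "max-age=3600",
}
```

This dict would be passed as `headers=cache_headers` when constructing the LLM, as in the example above.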

## How to use Helicone custom properties

```python
import openai
from langchain.llms import OpenAI

# Route OpenAI requests through the Helicone proxy
openai.api_base = "https://oai.hconeai.com/v1"

# Helicone-Property-* headers attach custom metadata to each request
llm = OpenAI(
    temperature=0.9,
    headers={
        "Helicone-Property-Session": "24",
        "Helicone-Property-Conversation": "support_issue_2",
        "Helicone-Property-App": "mobile",
    },
)
text = "What is a helicone?"
print(llm(text))
```

[Helicone property docs](https://docs.helicone.ai/advanced-usage/custom-properties)
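Since every custom property shares the `Helicone-Property-` prefix, a small helper (hypothetical, not part of the Helicone or LangChain APIs) can build the header dict from plain keys:

```python
def helicone_properties(props):
    """Build Helicone custom-property headers from a plain dict.

    Each key is prefixed with 'Helicone-Property-' and each value is
    stringified, matching the header format shown above.
    """
    return {f"Helicone-Property-{key}": str(value) for key, value in props.items()}

headers = helicone_properties(
    {"Session": 24, "Conversation": "support_issue_2", "App": "mobile"}
)
```

The resulting dict can be passed as the `headers` argument when constructing the LLM, exactly as in the example above.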