---
title: Examples
description: Get started by copying these code snippets into your terminal, a `.py` file, or a Jupyter notebook.
---

<CardGroup>

<Card
  title="Interactive demo"
  icon="gamepad-modern"
  iconType="solid"
  href="https://colab.research.google.com/drive/1WKmRXZgsErej2xUriKzxrEAXdxMSgWbb?usp=sharing"
>
  Try Open Interpreter without installing anything on your computer
</Card>

<Card
  title="Example voice interface"
  icon="circle"
  iconType="solid"
  href="https://colab.research.google.com/drive/1NojYGHDgxH6Y1G1oxThEBBb2AtyODBIK"
>
  An example implementation of Open Interpreter's streaming capabilities
</Card>

</CardGroup>

---

## Interactive Chat

To start an interactive chat in your terminal, either run `interpreter` from the command line:

```shell
interpreter
```

Or run `interpreter.chat()` from a `.py` file:

```python
from interpreter import interpreter

interpreter.chat()
```

---

## Programmatic Chat

For more precise control, you can pass messages directly to `.chat(message)` in Python:

```python
interpreter.chat("Add subtitles to all videos in /videos.")

# ... Streams output to your terminal, completes task ...

interpreter.chat("These look great but can you make the subtitles bigger?")

# ...
```

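If you want to handle the output yourself (for example, in a voice interface like the one linked above), `interpreter.chat()` can also yield the response in chunks. A minimal sketch, assuming the `stream=True` and `display=False` parameters behave as described in the library's streaming documentation:

```python
from interpreter import interpreter

# Ask for the chunks yourself instead of letting Open Interpreter print them.
for chunk in interpreter.chat("What operating system are we on?", display=False, stream=True):
    # Each chunk is a small dict describing one piece of the response
    # (message text, code, console output, ...).
    print(chunk)
```
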
---

## Start a New Chat

In your terminal, Open Interpreter behaves like ChatGPT and will not remember previous conversations. Simply run `interpreter` to start a new chat:

```shell
interpreter
```

In Python, Open Interpreter remembers conversation history. If you want to start fresh, you can reset it:

```python
interpreter.messages = []
```

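To see the reset in action, ask about something from the earlier conversation afterwards. A small sketch that only uses the calls shown above:

```python
from interpreter import interpreter

interpreter.chat("Remember this: my favorite color is blue.")

# Clear the conversation history.
interpreter.messages = []

# The earlier exchange is no longer in context, so the model won't know the answer.
interpreter.chat("What is my favorite color?")
```
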
---

## Save and Restore Chats

In your terminal, Open Interpreter will save previous conversations to `<your application directory>/Open Interpreter/conversations/`.

You can resume any of them by running `--conversations`. Use your arrow keys to select one, then press `ENTER` to resume it.

```shell
interpreter --conversations
```

In Python, `interpreter.chat()` returns a list of messages, which can be used to resume a conversation with `interpreter.messages = messages`:

```python
# Save messages to 'messages'
messages = interpreter.chat("My name is Killian.")

# Reset the interpreter ("Killian" will be forgotten)
interpreter.messages = []

# Resume the chat from 'messages' ("Killian" will be remembered)
interpreter.messages = messages
```

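Because the returned messages are a plain Python list, you can also persist them yourself between runs. A minimal sketch, assuming the messages contain only JSON-serializable content (true for ordinary text chats) and using a hypothetical `chat_history.json` file:

```python
import json

from interpreter import interpreter

history_path = "chat_history.json"  # hypothetical file name; use any path you like

# Run a chat and write the resulting message list to disk.
messages = interpreter.chat("My name is Killian.")
with open(history_path, "w") as f:
    json.dump(messages, f)

# Later, even in a new process: load the history and pick up where you left off.
with open(history_path) as f:
    interpreter.messages = json.load(f)

interpreter.chat("What is my name?")
```
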
---

## Configure Default Settings

We save default settings to a profile, which can be edited by running the following command:

```shell
interpreter --profiles
```

You can use this to set your default language model, system message (custom instructions), max budget, etc.

<Info>
  **Note:** The Python library will also inherit settings from the default
  profile file. You can change it by running `interpreter --profiles` and
  editing `default.yaml`.
</Info>

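If you would rather configure a single script than edit the profile file, the same kinds of settings can be set on the `interpreter` object in Python. A sketch under the assumption that your installed version exposes these attributes (`auto_run` in particular comes from the library's settings docs, so verify it against your release):

```python
from interpreter import interpreter

# Default language model (the same idea as the profile's model setting).
interpreter.llm.model = "gpt-3.5-turbo"

# Extra guidance appended to the system message (custom instructions).
interpreter.system_message += "\nAlways explain what a command will do before running it."

# Whether generated code runs without asking for confirmation first.
interpreter.auto_run = False
```
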
---

## Customize System Message

In your terminal, modify the system message by [editing your configuration file as described here](#configure-default-settings).

In Python, you can inspect and configure Open Interpreter's system message to extend its functionality, modify permissions, or give it more context.

```python
interpreter.system_message += """
Run shell commands with -y so the user doesn't have to confirm them.
"""
print(interpreter.system_message)
```

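Since `+=` keeps appending to whatever is already there, it can be useful to keep a copy of the original system message so you can restore it later in the same session. A small sketch using only the attribute shown above:

```python
from interpreter import interpreter

# Keep a copy of the stock system message before modifying it.
default_system_message = interpreter.system_message

interpreter.system_message += """
Run shell commands with -y so the user doesn't have to confirm them.
"""

# ... chat with the modified behavior ...

# Restore the original instructions when you're done.
interpreter.system_message = default_system_message
```
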
---

## Change your Language Model

Open Interpreter uses [LiteLLM](https://docs.litellm.ai/docs/providers/) to connect to language models.

You can change the model by setting the model parameter:

```shell
interpreter --model gpt-3.5-turbo
interpreter --model claude-2
interpreter --model command-nightly
```

In Python, set the model on the object:

```python
interpreter.llm.model = "gpt-3.5-turbo"
```

[Find the appropriate "model" string for your language model here.](https://docs.litellm.ai/docs/providers/)

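For providers that need more than a model name (a self-hosted endpoint, a different API base, or a separate key), connection details can also be set on `interpreter.llm`. A sketch assuming an OpenAI-compatible server running locally; the URL, key, and model string below are placeholders, and the attribute names follow the library's LLM settings docs, so check them against your version:

```python
from interpreter import interpreter

# Point Open Interpreter at an OpenAI-compatible endpoint, e.g. a local server.
interpreter.llm.api_base = "http://localhost:1234/v1"  # placeholder URL
interpreter.llm.api_key = "not-needed"                  # placeholder; many local servers ignore it
interpreter.llm.model = "openai/local-model"            # placeholder LiteLLM-style model string

interpreter.chat("Hello!")
```
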