|
--- |
|
title: Arguments |
|
--- |
|
|
|
<Card |
|
title="New: Streaming responses in Python" |
|
icon="arrow-up-right" |
|
href="/usage/python/streaming-response" |
|
> |
|
Learn how to build Open Interpreter into your application. |
|
</Card> |
|
|
|
|
|
|
|
## messages

This property holds a list of `messages` between the user and the interpreter.

You can use it to restore a conversation:
|
|
|
```python
interpreter.chat("Hi! Can you print hello world?")

print(interpreter.messages)

# Output:
[
    {
        "role": "user",
        "message": "Hi! Can you print hello world?"
    },
    {
        "role": "assistant",
        "message": "Sure!"
    },
    {
        "role": "assistant",
        "language": "python",
        "code": "print('Hello, World!')",
        "output": "Hello, World!"
    }
]
```
|
|
|
You can use this to restore `interpreter` to a previous conversation. |
|
|
|
```python |
|
interpreter.messages = messages |
|
``` |
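As a minimal sketch of persisting a conversation between sessions — using only the standard-library `json` module, not an Open Interpreter API — you could save the message list to disk and load it back before reassigning it:

```python
import json

# A conversation in the same shape as `interpreter.messages` above.
messages = [
    {"role": "user", "message": "Hi! Can you print hello world?"},
    {"role": "assistant", "message": "Sure!"},
]

# Save the conversation to disk...
with open("conversation.json", "w") as f:
    json.dump(messages, f)

# ...then load it back in a later session and restore it with
# `interpreter.messages = restored`.
with open("conversation.json") as f:
    restored = json.load(f)
```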
|
|
|
--- |
|
|
|
|
|
|
|
## offline

<Info>This replaced `interpreter.local` in the New Computer Update (`0.2.0`).</Info>

This boolean flag determines whether Open Interpreter runs in offline mode, which disables online features like [Open Procedures](https://open-procedures.replit.app/).
|
|
|
```python |
|
interpreter.offline = True |
|
interpreter.offline = False |
|
``` |
|
|
|
Use this in conjunction with the `llm.model` parameter to set your language model.
|
|
|
--- |
|
|
|
|
|
|
|
## auto_run

Setting this flag to `True` allows Open Interpreter to run generated code automatically, without asking for user confirmation.
|
|
|
```python |
|
interpreter.auto_run = True |
|
interpreter.auto_run = False |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## verbose

Use this boolean flag to toggle verbose mode on or off. In verbose mode, Open Interpreter prints information at every step to help diagnose problems.
|
|
|
```python |
|
interpreter.verbose = True |
|
interpreter.verbose = False |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## max_output

This property sets the maximum number of tokens for the output response.
|
|
|
```python |
|
interpreter.max_output = 2000 |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## conversation_history

A boolean flag indicating whether the conversation history should be stored.
|
|
|
```python |
|
interpreter.conversation_history = True |
|
interpreter.conversation_history = False |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## conversation_filename

This property sets the filename where the conversation history will be stored.
|
|
|
```python |
|
interpreter.conversation_filename = "my_conversation.json" |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## conversation_history_path

You can set the path where the conversation history will be stored.
|
|
|
```python |
|
import os |
|
interpreter.conversation_history_path = os.path.join("my_folder", "conversations") |
|
``` |
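For illustration only (these plain variables are hypothetical stand-ins, not library attributes), the full history location combines this path with the `conversation_filename` setting above:

```python
import os

# Hypothetical combination of the two settings shown above.
conversation_history_path = os.path.join("my_folder", "conversations")
conversation_filename = "my_conversation.json"

full_path = os.path.join(conversation_history_path, conversation_filename)
```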
|
|
|
--- |
|
|
|
|
|
|
|
## llm.model

Specifies the language model to be used.
|
|
|
```python |
|
interpreter.llm.model = "gpt-3.5-turbo" |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## llm.temperature

Sets the randomness level of the model's output.
|
|
|
```python |
|
interpreter.llm.temperature = 0.7 |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## system_message

This stores the model's system message as a string. Explore or modify it:
|
|
|
```python |
|
interpreter.system_message += "\nRun all shell commands with -y." |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## llm.context_window

This manually sets the context window size in tokens.

We try to guess the right context window size for your model, but you can override it with this parameter.
|
|
|
```python |
|
interpreter.llm.context_window = 16000 |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## llm.max_tokens

Sets the maximum number of tokens the model can generate in a single response.
|
|
|
```python |
|
interpreter.llm.max_tokens = 100 |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## llm.api_base

If you are using a custom API, specify its base URL here.
|
|
|
```python |
|
interpreter.llm.api_base = "https://api.example.com" |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## llm.api_key

Set your API key for authentication.
|
|
|
```python |
|
interpreter.llm.api_key = "your_api_key_here" |
|
``` |
|
|
|
--- |
|
|
|
|
|
|
|
## max_budget

This property sets the maximum budget limit for the session, in USD.
|
|
|
```python |
|
interpreter.max_budget = 0.01 |
|
``` |
|
|