---
title: Custom Models
---
|
|
|
In addition to hosted and local language models, Open Interpreter also supports custom models. |
|
|
|
As long as your model can accept an input and stream an output (and can be driven through a Python generator), it can be used as a language model in Open Interpreter.
|
|
|
Simply replace Open Interpreter's OpenAI-compatible `completions` function with one of your own:
|
|
|
```python
from interpreter import interpreter

def custom_language_model(openai_message):
    """
    OpenAI-compatible completions function (this one just echoes what the user said back).
    """
    # The last message in the list is the user's most recent message
    users_content = openai_message[-1].get("content")

    # Streamed responses open with a delta that sets the assistant role
    yield {"delta": {"role": "assistant"}}

    # Then stream the reply as content deltas (here, one character at a time)
    for character in users_content:
        yield {"delta": {"content": character}}

# Tell Open Interpreter to use your function for completions
interpreter.llm.completions = custom_language_model
```
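
The echo function above shows the shape of the interface. As a slightly more realistic sketch, the same generator pattern could wrap a local HTTP server that streams generated text. The endpoint URL and request shape here are assumptions for illustration, not part of Open Interpreter:

```python
import requests

def http_streaming_model(openai_message):
    """
    Streams completions from a hypothetical local text-generation server.
    """
    prompt = openai_message[-1].get("content")

    yield {"delta": {"role": "assistant"}}

    # Assumed: a local server at this URL that accepts a JSON prompt
    # and streams plain text back in the response body
    with requests.post(
        "http://localhost:8080/generate",
        json={"prompt": prompt},
        stream=True,
    ) as response:
        for chunk in response.iter_content(chunk_size=None, decode_unicode=True):
            if chunk:
                yield {"delta": {"content": chunk}}

interpreter.llm.completions = http_streaming_model
```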
|
|
|
Then, configure the following settings to match your model's capabilities:
|
|
|
```python
interpreter.llm.context_window = 2000  # Maximum context length, in tokens
interpreter.llm.max_tokens = 1000  # Maximum tokens to generate per completion
interpreter.llm.supports_vision = False  # Can the model accept images?
interpreter.llm.supports_functions = False  # Does it support OpenAI-style function calling?
```
|
|
|
And start using it: |
|
|
|
```python
interpreter.chat("Hi!")
```
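
With the echo model defined above, the assistant will simply stream `Hi!` back to you, one character at a time.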