# Runhouse

This page covers how to use the [Runhouse](https://github.com/run-house/runhouse) ecosystem within LangChain.
It is broken into three parts: installation and setup, LLMs, and Embeddings.

## Installation and Setup
- Install the Python SDK with `pip install runhouse`
- If you'd like to use on-demand clusters, check your cloud credentials with `sky check` (see the cluster sketch below)
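
The self-hosted classes below all target a Runhouse cluster object as their hardware. As a minimal orientation sketch (the cluster name and instance type are illustrative placeholders; provisioning an on-demand cluster requires valid cloud credentials from `sky check`):

```python
import runhouse as rh

# On-demand GPU cluster provisioned through your own cloud account
# (name and instance type are placeholders).
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1", use_spot=False)

# Alternatively, point Runhouse at an existing machine you can SSH into:
# gpu = rh.cluster(ips=["<ip of the cluster>"],
#                  ssh_creds={"ssh_user": "<user>"},
#                  name="rh-a10x")
```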

## Self-hosted LLMs
For a basic self-hosted LLM, you can use the `SelfHostedHuggingFaceLLM` class. For more
custom LLMs, you can use the `SelfHostedPipeline` parent class.

```python
from langchain.llms import SelfHostedPipeline, SelfHostedHuggingFaceLLM
```
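
To give a feel for typical usage, here is a hedged sketch based on the walkthrough notebook; it assumes a Runhouse cluster like the one in the setup section and uses `gpt2` purely as a small illustrative model:

```python
import runhouse as rh
from langchain.llms import SelfHostedHuggingFaceLLM

# Illustrative on-demand cluster; any rh.cluster you have access to works.
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")

# Load a small Hugging Face model on the remote hardware.
llm = SelfHostedHuggingFaceLLM(
    model_id="gpt2",
    hardware=gpu,
    model_reqs=["pip:./", "transformers", "torch"],
)

print(llm("What is the capital of France? "))
```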

For a more detailed walkthrough of self-hosted LLMs, see [this notebook](../modules/llms/integrations/self_hosted_examples.ipynb).

## Self-hosted Embeddings
There are several ways to use self-hosted embeddings with LangChain via Runhouse.

For a basic self-hosted embedding from a Hugging Face Transformers model, you can use 
the `SelfHostedHuggingFaceEmbeddings` class.
```python
from langchain.embeddings import SelfHostedEmbeddings, SelfHostedHuggingFaceEmbeddings
```
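
As a sketch of how these classes are typically used (assuming the Runhouse cluster from the setup section; the default embedding model is whatever `SelfHostedHuggingFaceEmbeddings` ships with):

```python
import runhouse as rh
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings

# Reuse or create a Runhouse cluster to host the embedding model.
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")

# Runs a Hugging Face sentence-embedding model on the remote hardware.
embeddings = SelfHostedHuggingFaceEmbeddings(hardware=gpu)

query_result = embeddings.embed_query("This is a test document.")
doc_results = embeddings.embed_documents(["This is a test document."])
```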

For a more detailed walkthrough of self-hosted embeddings, see [this notebook](../modules/indexes/examples/embeddings.ipynb).