---
title: README
emoji: 🦙
colorFrom: yellow
colorTo: purple
sdk: static
pinned: false
---

# 🗂️ LlamaIndex 🦙 (GPT Index)

LlamaIndex (GPT Index) is a project that provides a central interface to connect your LLMs with external data.

PyPI:
- LlamaIndex: https://pypi.org/project/llama-index/.
- GPT Index (duplicate): https://pypi.org/project/gpt-index/.

Documentation: https://gpt-index.readthedocs.io/en/latest/.

Twitter: https://twitter.com/gpt_index.

Discord: https://discord.gg/dGcwcsnxhU.

LlamaHub (community library of data loaders): https://llamahub.ai

## 🚀 Overview

**NOTE**: This README is not updated as frequently as the documentation. Please check out the documentation above for the latest updates!

### Context
- LLMs are a phenomenal piece of technology for knowledge generation and reasoning. They are pre-trained on large amounts of publicly available data.
- How do we best augment LLMs with our own private data?
- One paradigm that has emerged is *in-context* learning (the other is finetuning): we insert context into the input prompt, so that we take advantage of the LLM's reasoning capabilities to generate a response (see the sketch below).
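
To make the in-context idea concrete, here is a minimal, purely illustrative sketch of inserting retrieved context into a prompt; it is plain Python with placeholder strings, not LlamaIndex code:

```python
# Illustrative only: manually stuff retrieved context into the prompt.
# The context and question strings below are placeholders.
retrieved_context = "The design doc says the service exposes a REST API with OAuth 2.0."
question = "How do clients authenticate against the service?"

prompt = (
    "Answer the question using only the context below.\n"
    f"Context: {retrieved_context}\n"
    f"Question: {question}\n"
    "Answer:"
)
# `prompt` would then be sent to the LLM (e.g. via the OpenAI API).
```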

To perform data augmentation of LLMs in a performant, efficient, and cheap manner, we need to solve two components:
- Data Ingestion
- Data Indexing

### Proposed Solution

That's where **LlamaIndex** comes in. LlamaIndex is a simple, flexible interface between your external data and LLMs. It provides the following tools in an easy-to-use fashion:

- Offers **data connectors** to your existing data sources and data formats (APIs, PDFs, docs, SQL, etc.); a sketch of one such connector follows this list.
- Provides **indices** over your unstructured and structured data for use with LLMs.
  These indices help to abstract away common boilerplate and pain points for in-context learning:
  - Storing context in an easy-to-access format for prompt insertion.
  - Dealing with prompt limitations (e.g. 4096 tokens for Davinci) when context is too big.
  - Dealing with text splitting.
- Provides users an interface to **query** the index (feed in an input prompt) and obtain a knowledge-augmented output.
- Offers you a comprehensive toolset trading off cost and performance.
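
As a rough end-to-end sketch of how these pieces fit together, the snippet below pulls a community data connector from LlamaHub with `download_loader`, builds an index over the loaded documents, and queries it. It assumes the `WikipediaReader` loader (which additionally needs the `wikipedia` package) and an `OPENAI_API_KEY` in your environment; treat it as a sketch rather than canonical usage, and see the Example Usage section below for the basic flow:

```python
import os
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

from llama_index import GPTSimpleVectorIndex, download_loader

# Data connector: fetch a LlamaHub loader at runtime (WikipediaReader is one example).
WikipediaReader = download_loader("WikipediaReader")
documents = WikipediaReader().load_data(pages=["Large language model"])

# Index: handles storing context and text splitting for prompt insertion.
index = GPTSimpleVectorIndex.from_documents(documents)

# Query: retrieve relevant context and feed it to the LLM along with the question.
response = index.query("What is in-context learning?")
print(response)
```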

## 💡 Contributing

Interested in contributing? See our [Contribution Guide](CONTRIBUTING.md) for more details.

## 📄 Documentation

Full documentation can be found here: https://gpt-index.readthedocs.io/en/latest/.

Please check it out for the most up-to-date tutorials, how-to guides, references, and other resources!

## 💻 Example Usage

```
pip install llama-index
```

Examples are in the `examples` folder. Indices are in the `indices` folder (see the documentation for the full list of indices).

To build a simple vector store index:
```python
import os
os.environ["OPENAI_API_KEY"] = 'YOUR_OPENAI_API_KEY'

from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader
documents = SimpleDirectoryReader('data').load_data()
index = GPTSimpleVectorIndex.from_documents(documents)
```

To save to and load from disk:
```python
# save to disk
index.save_to_disk('index.json')
# load from disk
index = GPTSimpleVectorIndex.load_from_disk('index.json')
```

To query:
```python
index.query("<question_text>?")
```
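
The query call also accepts optional keyword arguments; for example, `similarity_top_k` controls how many chunks the vector index retrieves, and the returned response object exposes the source nodes the answer was based on. Exact parameters vary by index type and version, so treat this as a sketch:

```python
# Retrieve the top 3 most similar chunks instead of the default,
# then inspect the answer and its sources.
response = index.query("<question_text>?", similarity_top_k=3)
print(response)               # synthesized answer
print(response.source_nodes)  # retrieved chunks the answer was based on
```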

## 🔧 Dependencies

The main third-party package requirements are `tiktoken`, `openai`, and `langchain`.

All requirements should be contained within the `setup.py` file. To run the package locally without building the wheel, simply run `pip install -r requirements.txt`.

## 📖 Citation

Reference to cite if you use LlamaIndex in a paper:

```
@software{Liu_LlamaIndex_2022,
    author = {Liu, Jerry},
    doi = {10.5281/zenodo.1234},
    month = {11},
    title = {{LlamaIndex}},
    url = {https://github.com/jerryjliu/gpt_index},
    year = {2022}
}
```