# AI Tutor App Data Workflows

This directory contains scripts for managing the AI Tutor App's data pipeline.

## Workflow Scripts

### 1. Adding a New Course

To add a new course to the AI Tutor:

```bash
python add_course_workflow.py --course [COURSE_NAME]
```

This will guide you through the complete process:

1. Process markdown files from Notion exports
2. Prompt you to manually add URLs to the course content
3. Merge the course data into the main dataset
4. Add contextual information to document nodes
5. Create vector stores
6. Upload databases to HuggingFace
7. Update UI configuration

**Requirements before running:**

- The course name must be properly configured in `process_md_files.py` under `SOURCE_CONFIGS` (a sketch follows this list)
- Course markdown files must be placed in the directory specified in the configuration
- You must have access to the live course platform to add URLs
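
The exact schema of `SOURCE_CONFIGS` is defined in `process_md_files.py`; the entry below is only a hypothetical sketch to show the shape of the task, and its field names are illustrative rather than the script's actual keys:

```python
# Hypothetical SOURCE_CONFIGS entry in process_md_files.py. The real key
# names come from that script; these are illustrative placeholders.
SOURCE_CONFIGS = {
    "my_new_course": {                      # name passed via --course
        "input_dir": "data/my_new_course",  # Notion markdown export location
        "output_file": "my_new_course_data.jsonl",
    },
}
```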

### 2. Updating Documentation via GitHub API

To update library documentation from GitHub repositories:

```bash
python update_docs_workflow.py
```

This updates all supported documentation sources. You can also target specific sources:

```bash
python update_docs_workflow.py --sources transformers peft
```

The workflow includes:

1. Downloading documentation from GitHub using the API (sketched below)
2. Processing markdown files to create JSONL data
3. Adding contextual information to document nodes
4. Creating vector stores
5. Uploading databases to HuggingFace
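
The download step is implemented by `github_to_markdown_ai_docs.py`. As a minimal sketch of the idea, assuming the standard GitHub contents API and the `GITHUB_TOKEN` environment variable (the repo and path below are illustrative, not the script's actual configuration):

```python
import os

import requests

def list_markdown_files(owner: str, repo: str, path: str) -> list[str]:
    """List download URLs for markdown files in one GitHub repo directory."""
    url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
    headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    return [
        item["download_url"]
        for item in response.json()
        if item["name"].endswith((".md", ".mdx"))
    ]

# Example: markdown pages of the transformers docs (illustrative path)
print(list_markdown_files("huggingface", "transformers", "docs/source/en"))
```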

### 3. Uploading JSONL to HuggingFace

To upload the main JSONL file to a private HuggingFace repository:

```bash
python upload_jsonl_to_hf.py
```

This is useful for sharing the latest data with team members.
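
Under the hood this amounts to a single `huggingface_hub` upload. A minimal sketch, assuming a dataset-type repo; the `repo_id` below is a placeholder, and the script's actual destination is configured in `upload_jsonl_to_hf.py`:

```python
from huggingface_hub import HfApi

# Push the JSONL to a private repo. The repo_id is a placeholder;
# authentication comes from the HF_TOKEN environment variable.
api = HfApi()
api.upload_file(
    path_or_fileobj="all_sources_data.jsonl",
    path_in_repo="all_sources_data.jsonl",
    repo_id="your-org/ai-tutor-data",  # placeholder
    repo_type="dataset",
)
```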

## Individual Components

If you need to run specific steps individually:

- **GitHub to Markdown**: `github_to_markdown_ai_docs.py`
- **Process Markdown**: `process_md_files.py`
- **Add Context**: `add_context_to_nodes.py`
- **Create Vector Stores**: `create_vector_stores.py`
- **Upload to HuggingFace**: `upload_dbs_to_hf.py`
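
If you do run the steps by hand, the order matters. The chain below is a sketch only: it assumes each script can run without arguments, which may not match the actual CLIs, so check each script before relying on it:

```python
import subprocess

# Run the documentation pipeline stages in order. Each script's real CLI
# flags may differ from this no-argument sketch.
for script in [
    "github_to_markdown_ai_docs.py",  # fetch docs from GitHub
    "process_md_files.py",            # markdown -> JSONL
    "add_context_to_nodes.py",        # enrich document nodes with context
    "create_vector_stores.py",        # build the vector stores
    "upload_dbs_to_hf.py",            # push databases to HuggingFace
]:
    subprocess.run(["python", script], check=True)
```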

## Tips for New Team Members

1. To update the AI Tutor with new content:

   - For new courses, use `add_course_workflow.py`
   - For updated documentation, use `update_docs_workflow.py`

2. When adding URLs to course content:

   - Get the URLs from the live course platform
   - Add them to the generated JSONL file in the `url` field
   - Example URL format: `https://academy.towardsai.net/courses/take/python-for-genai/multimedia/62515980-course-structure`
   - Make sure every document has a valid URL (an example record is sketched after this list)

3. By default, only new content will have context added, to save time and resources. Use `--process-all-context` only if you need to regenerate context for all documents. Use `--skip-data-upload` if you don't want to upload data files to the private HuggingFace repo (they're uploaded by default).

4. When adding a new course, verify that it appears in the Gradio UI:

   - The workflow automatically updates `main.py` and `setup.py` to include the new source
   - Check that the new source appears in the dropdown menu in the UI
   - Make sure it's properly included in the default selected sources
   - Restart the Gradio app to see the changes

5. First-time setup or missing files:

   - Both workflows automatically check for and download the required data files:
     - `all_sources_data.jsonl` - Contains the raw document data
     - `all_sources_contextual_nodes.pkl` - Contains the processed nodes with added context
   - If the PKL file exists, the `--new-context-only` flag will only process new content
   - You must have valid HuggingFace credentials with access to the private repository

6. Make sure you have the required environment variables set:

   - `OPENAI_API_KEY` for LLM processing
   - `COHERE_API_KEY` for embeddings
   - `HF_TOKEN` for HuggingFace uploads
   - `GITHUB_TOKEN` for accessing documentation via the GitHub API
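
As referenced in tip 2, here is what a single record in the generated JSONL might look like once the `url` field is filled in. Only the `url` field and the URL format come from this README; the other keys are hypothetical placeholders:

```python
import json

# Hypothetical JSONL record: only "url" is documented above; the other
# field names are illustrative placeholders, not the pipeline's schema.
record = {
    "content": "Lesson text extracted from the Notion export...",
    "source": "python-for-genai",
    "url": "https://academy.towardsai.net/courses/take/python-for-genai/multimedia/62515980-course-structure",
}
print(json.dumps(record))  # each line of the JSONL file is one such object
```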
|