---
library_name: transformers
tags: [retrieval-augmented-generation, finetuning, llm, huggingface]
---
# Model Card for Finetuned Llama 3.2 (ROS Query System)
This model is a finetuned version of Llama 3.2 specifically designed to answer questions related to the Robot Operating System (ROS). It was finetuned on Kaggle using domain-specific data scraped from GitHub repositories and Medium articles. The model powers a Retrieval-Augmented Generation (RAG) pipeline in our AI final project.
---
## Model Details
### Model Description
- **Developed by:** Krish Murjani (netid: km6520) & Shresth Kapoor (netid: sk11677)
- **Project Name:** CS-GY-6613 AI Final Project: ROS Query System
- **Finetuned From:** Llama 3.2 (the `sentence-transformers/all-MiniLM-L6-v2` model is used separately as the retrieval embedder)
- **Language(s):** English
- **License:** Apache 2.0
---
### Model Sources
- **Repository:** [GitHub Repository](https://github.com/krishmurjani/cs-gy-6613-final-project)
---
## Uses
### Direct Use
The model is used in a Retrieval-Augmented Generation (RAG) pipeline for answering questions related to the Robot Operating System (ROS). It integrates with Qdrant for vector search and MongoDB for document storage to support retrieval and query response generation.
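For illustration, here is a minimal sketch of the retrieval step, assuming a local Qdrant instance, a hypothetical `ros_docs` collection, and the same `all-MiniLM-L6-v2` embedder used during preprocessing. This is a sketch under those assumptions, not the project's exact pipeline code.

```python
# Hypothetical retrieval step of the RAG pipeline; collection name and payload field are assumptions.
from qdrant_client import QdrantClient
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
client = QdrantClient(url="http://localhost:6333")  # assumes a local Qdrant instance

def retrieve_context(query: str, top_k: int = 5):
    """Embed the query and fetch the most similar ROS documentation chunks."""
    query_vector = embedder.encode(query).tolist()
    hits = client.search(
        collection_name="ros_docs",  # hypothetical collection name
        query_vector=query_vector,
        limit=top_k,
    )
    return [hit.payload.get("text", "") for hit in hits]

context = retrieve_context("How can I navigate to a specific pose using ROS?")
```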
### Downstream Use
The model can be extended to other technical domains through additional finetuning or by integrating it into larger AI systems.
### Out-of-Scope Use
The model is not designed for tasks outside of technical documentation retrieval and answering ROS-related queries.
---
## Bias, Risks, and Limitations
- **Bias:** The model may reflect biases inherent in the scraped ROS documentation and articles.
- **Limitations:** Responses are limited to the scraped and finetuned dataset. It may not generalize to broader queries.
### Recommendations
- Use the model for educational and research purposes in robotics and ROS-specific domains.
- Avoid using the model in high-stakes applications where critical decisions rely on the accuracy of generated responses.
---
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-model-id"  # replace with the published model id on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

input_text = "How can I navigate to a specific pose using ROS?"
inputs = tokenizer(input_text, return_tensors="pt")

# Generate an answer rather than printing raw hidden states
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
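Building on the snippet above and the retrieval sketch in the Direct Use section, an end-to-end answer step could look roughly like this; the prompt template and the `retrieve_context` helper are assumptions for illustration, not the project's exact pipeline.

```python
# Hypothetical end-to-end RAG answer step; assumes `model` and `tokenizer` from the snippet
# above and the `retrieve_context` helper sketched in the Direct Use section.
def answer_question(query: str) -> str:
    context = "\n".join(retrieve_context(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(answer_question("How can I navigate to a specific pose using ROS?"))
```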
## Training Details
### Training Data
- **Sources:**
- GitHub repositories related to the Robot Operating System (ROS).
- Medium articles discussing ROS topics.
### Training Procedure
- **Preprocessing:**
- Data cleaning, text chunking, and embedding using Sentence-BERT (`all-MiniLM-L6-v2`); a minimal sketch follows this list.
- Used ClearML orchestrator for ETL and finetuning pipelines.
- **Training Framework:**
- Hugging Face Transformers, PyTorch
- **Training Regime:**
- fp16 mixed precision (for efficiency and memory optimization)
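For illustration, a minimal sketch of the chunking and embedding step described above. The chunk size, overlap, and helper names are assumptions, not the project's exact preprocessing parameters.

```python
# Hypothetical preprocessing sketch: word-based chunking followed by Sentence-BERT embedding.
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50):
    """Split cleaned text into overlapping word-based chunks."""
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[start:start + chunk_size]) for start in range(0, len(words), step)]

document = "ROS 2 uses a DDS-based middleware for inter-process communication ..."
chunks = chunk_text(document)
embeddings = embedder.encode(chunks)  # one 384-dimensional vector per chunk
```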
---
## Evaluation
### Testing Data
- **Dataset:**
- Internal evaluation dataset created from project-specific queries and generated question-answer pairs.
### Factors & Metrics
- **Metrics:**
- Query relevance, answer accuracy, and completeness (a rough relevance-scoring sketch follows this list).
- **Evaluation Results:**
- Achieved high relevance and precision for domain-specific questions related to ROS.
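As a rough illustration of how query relevance could be scored, the snippet below uses cosine similarity between query and answer embeddings. This is an assumption made for illustration, not the project's actual evaluation script.

```python
# Hypothetical relevance proxy: cosine similarity between query and answer embeddings.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def relevance_score(query: str, answer: str) -> float:
    """Return cosine similarity between the query and answer embeddings."""
    q_emb = embedder.encode(query, convert_to_tensor=True)
    a_emb = embedder.encode(answer, convert_to_tensor=True)
    return util.cos_sim(q_emb, a_emb).item()

print(relevance_score(
    "How can I navigate to a specific pose using ROS?",
    "Use the Nav2 NavigateToPose action and send a goal with the target pose.",
))
```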
---
## Environmental Impact
- **Hardware Type:**
- NVIDIA Tesla T4 (Kaggle)
- **Hours Used:**
- Approximately 15-20 hours of training
- **Compute Region:**
- US Central (Kaggle Cloud)
- **Carbon Emitted:**
- Estimated using the [Machine Learning Impact Calculator](https://mlco2.github.io/impact#compute).
---
## Technical Specifications
- **Model Architecture:**
- Transformer-based language model (Llama 3.2)
- **Compute Infrastructure:**
- Kaggle Cloud with NVIDIA Tesla T4 GPUs
- **Frameworks:**
- Hugging Face Transformers, PyTorch, ClearML
---
## Citation
```bibtex
@misc{kapoor2024rosquery,
  title={ROS Query System: A Retrieval-Augmented Generation Pipeline},
  author={Shresth Kapoor and Krish Murjani},
  year={2024},
  note={CS-GY-6613 AI Final Project, NYU Tandon School of Engineering}
}
```
## Model Card Authors
- Krish Murjani ([krishmurjani](https://huggingface.co/krishmurjani))
- Shresth Kapoor ([shresthkapoor7](https://huggingface.co/shresthkapoor7))
## Model Card Contact
For any inquiries, please open an issue in our [GitHub Repository](https://github.com/krishmurjani/cs-gy-6613-final-project).