Update README.md
README.md CHANGED
@@ -3,6 +3,7 @@ license: mit
 datasets:
 - avemio/GRAG-CPT-HESSIAN-AI
 - avemio/GRAG-SFT-ShareGPT-HESSIAN-AI
+- avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI
 language:
 - en
 - de
@@ -28,7 +29,7 @@ tags:
 
 **GRAG** (**G**erman **R**etrieval **A**ugmented **G**eneration) models are designed for the German-speaking market, enabling innovation and AI solutions to drive German research collaboration in business-focused Generative AI by 2025.
 
-Our GRAG-LLAMA-SFT model are trained on this **[GRAG-
+Our GRAG-NEMO-ORPO model is trained on the **[GRAG-ORPO](https://huggingface.co/datasets/avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI) dataset.**
 
 ## Model Details
 
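As a usage illustration (an editorial aside, not part of the card being diffed): a minimal sketch assuming the standard `transformers` chat API and the `avemio/GRAG-NEMO-12B-ORPO-HESSIAN-AI` repository id linked in the evaluation table below.

```python
# Illustrative only: load the ORPO-tuned model with the standard transformers API.
# The repo id comes from this card; the prompt contents are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "avemio/GRAG-NEMO-12B-ORPO-HESSIAN-AI"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# RAG-style prompt: the retrieved context is passed alongside the user question.
messages = [
    {"role": "system", "content": "Beantworte die Frage nur anhand des gegebenen Kontexts."},
    {"role": "user", "content": "Kontext: <retrieved passages>\n\nFrage: <user question>"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```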
@@ -133,37 +134,75 @@ Four evaluation metrics were employed across all subsets: language quality, over
 - **Overall score:** This metric combined the results from the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets.
 
 
-| Metric | [Vanila-Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) |
+| Metric | [Vanilla-Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407) | [GRAG-NEMO-SFT](https://huggingface.co/avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI) | **[GRAG-NEMO-ORPO](https://huggingface.co/avemio/GRAG-NEMO-12B-ORPO-HESSIAN-AI)** | [GRAG-NEMO-MERGED]() | GPT-3.5-TURBO |
 |------------------------------------------|---------------------------------------------------------------------------------|--------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------|-----------------------------|----------------|
-| Average Language Quality | 85.88 |
+| Average Language Quality | 85.88 | 89.61 | **89.1** | | |
 | **OVERALL SCORES (weighted):** | | | | | |
-| extraction_recall | 35.2 |
-| qa_multiple_references | 65.3 |
-| qa_without_time_difference | 71.5 |
-| qa_with_time_difference | 65.3 |
-| reasoning | 69.4 |
-| relevant_context | 71.3 |
-| summarizations | 73.8 |
+| extraction_recall | 35.2 | 52.3 | **48.8** | | |
+| qa_multiple_references | 65.3 | 71.0 | **74.0** | | |
+| qa_without_time_difference | 71.5 | 85.6 | **85.6** | | |
+| qa_with_time_difference | 65.3 | 87.9 | **85.4** | | |
+| reasoning | 69.4 | 71.5 | **73.4** | | |
+| relevant_context | 71.3 | 69.1 | **65.5** | | |
+| summarizations | 73.8 | 81.6 | **80.3** | | |
 
 ## Model Details
 
 ### Data
-For training data details, please see the [GRAG-
+For training data details, please see the [GRAG-ORPO-Dataset](https://huggingface.co/datasets/avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI) documentation.
 
+The ORPO Tasks Dataset represents a specialized collection for fine-tuning language models with a focus on RAG-specific capabilities.
+
+The subsets used for this training step are derived from three different sources:
+- **SauerkrautLM Preference Datasets**:
+  - [SauerkrautLM-Fermented-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-GER-DPO): a high-quality German instruction-response dataset specifically designed for Preference Optimization training. It consists of 3,305 instruction-response pairs. Rather than being merged from existing German datasets, it was carefully created through a sophisticated augmentation process, transforming curated English instructions and responses into culturally adapted German content. Each pair includes comprehensive quality metrics and rejected responses for preference training.
+  - [SauerkrautLM-Fermented-Irrelevance-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO): a specialized dataset designed for training language models in function-calling irrelevance detection using Preference Optimization. It consists of 2,000 carefully evaluated instruction-response pairs, specifically curated to help models recognize situations where function calls are unnecessary and direct responses are more appropriate.
+- **Hard Reasoning DE & EN**: Synthetic generation inspired by Tencent's ["Scaling Synthetic Data Creation with 1,000,000,000 Personas"](https://arxiv.org/abs/2406.20094).
+- **Multi-Turn-QA**: Developed by Avemio AG, this dataset builds upon and enhances the German Wikipedia dump provided by Cohere ([wikipedia-22-12-de-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings)), expanding it with synthetic examples and structured tasks to create a robust training resource.
+
+### Data Subsets
+
+| Subset | Examples per Task |
+|-------|------------------|
+| SauerkrautLM-Fermented-GER-DPO | 3.31k |
+| SauerkrautLM-Fermented-Irrelevance-GER-DPO | 2k |
+| hard-reasoning-de | 3.19k |
+| hard-reasoning-en | 1.97k |
+| multi-turn-qa | 3.2k |
+
+### Source Data: SauerkrautLM
+[SauerkrautLM-Fermented-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-GER-DPO)
+
+[SauerkrautLM-Fermented-Irrelevance-GER-DPO](https://huggingface.co/datasets/VAGOsolutions/SauerkrautLM-Fermented-Irrelevance-GER-DPO)
+
+### Source Data: Hard-Reasoning DE & EN
+- Base: [proj-Persona/PersonaHub](https://huggingface.co/datasets/proj-persona/PersonaHub)
+- Enhancement: Synthetic data generation by Avemio AG
+- Quality: Automatic validation and curation of examples by open-source LLMs
+
+### Methodology: Reasoning-DE & Reasoning-EN
+- Providing persona descriptions and rewriting them in a similar style, with a different focus area and name, in German or English
+- Generating simple logical problems from persona-specific views and language
+- Generating approaches, thinking steps, and solutions, each verified separately by Llama-3.1-405B-Instruct
+- Quality assurance and validation
+
+### Source Data: Multi-Turn-QA
+- Base: [cohere/wikipedia-22-12-de-embeddings](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings)
+- Enhancement: Synthetic data generation by Avemio AG
+- Quality: Automatic validation and curation of examples by open-source LLMs
+
+### Methodology: Multi-Turn-QA
+1. Extraction of base content from German Wikipedia
+2. Enhancement through synthetic example generation
+3. Structure addition for specific task types
+4. Quality assurance and validation
 
-#### Task Instruction Format
-The implementation of these SFT tasks follows a carefully structured format designed for consistency and clarity. Each task begins with comprehensive system instructions, often wrapped in XML tags, that meta-define expected inputs, outputs, constraints, and example interactions. This standardization enables clear communication between the model and users while ensuring reliable results.
-The context information utilized in these tasks is provided in a standardized JSON structure, including unique identifiers, source text, timestamps where relevant, and task-specific metadata. This format was specifically chosen to allow seamless integration with retrieved data from RAG systems, eliminating the need for additional formatting steps in production environments.
-Source references are handled through a consistent system of numerical indices for context references, JSON-formatted citation markers, and clear time-difference notifications when temporal aspects are relevant. This systematic approach to referencing ensures traceability and reliability in the model's responses.
-The implementation of these tasks within RAG systems can significantly improve organizational efficiency by reducing manual processing time, ensuring consistency in information handling, improving accuracy in data extraction and analysis, and enabling faster decision-making through better information access.
 
 ### Architecture
 
 
-| Parameter | GRAG-NEMO-
+| Parameter | GRAG-NEMO-ORPO |
 |-----------------------|-----------------------------------------------------------------------------------------------|
 | **d_model** | 5120 |
 | **num heads** | 32 |
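As an editorial aside (not part of the diff above): a hedged sketch of how the ORPO training data might be inspected with the Hugging Face `datasets` library. The repo id comes from this card; the split name and the ORPO-style column names are assumptions and may differ from the actual repo layout.

```python
# Illustrative sketch: inspect the GRAG-ORPO preference data.
# Split and column names (e.g. prompt / chosen / rejected) are assumptions.
from datasets import load_dataset

ds = load_dataset("avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI", split="train")
print(ds)        # row count and available columns
print(ds[0])     # one preference example (chosen vs. rejected response)
```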
@@ -181,7 +220,7 @@ The implementation of these tasks within RAG systems can significantly improve o
 ### Hyperparameters
 
 
-| Parameter | GRAG-NEMO-
+| Parameter | GRAG-NEMO-ORPO |
 |---------------------------|--------------------|
 | **warmup steps** | 50 |
 | **peak LR** | 5.0E-07 |
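For orientation only, a minimal sketch of how hyperparameters like these could be wired into an ORPO run with the `trl` library. This is not the authors' training script; the exact `ORPOConfig`/`ORPOTrainer` arguments depend on the installed `trl` version, and the batch size and epoch count below are illustrative assumptions.

```python
# Hypothetical ORPO fine-tuning sketch (not the authors' script).
# Base model and dataset ids are taken from this card.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "avemio/GRAG-NEMO-12B-SFT-HESSIAN-AI"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

train_ds = load_dataset("avemio/GRAG-ORPO-ShareGPT-HESSIAN-AI", split="train")

config = ORPOConfig(
    output_dir="grag-nemo-orpo",
    learning_rate=5.0e-7,            # peak LR from the table above
    warmup_steps=50,                 # warmup steps from the table above
    per_device_train_batch_size=1,   # assumption
    num_train_epochs=1,              # assumption
)
trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=train_ds,
    tokenizer=tokenizer,  # `processing_class` in newer trl releases
)
trainer.train()
```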
@@ -192,19 +231,19 @@ The implementation of these tasks within RAG systems can significantly improve o
 
 ## Environmental Impact
 
-GRAG-NEMO-
+GRAG-NEMO-ORPO, trained on 8 NVIDIA A100 GPUs for 5 days, has an approximate power consumption as follows:
 
 It's important to note that the actual power consumption may vary depending on the specific workload and operational conditions. For accurate power consumption measurements, using dedicated power monitoring tools is recommended.
 
 | Model | GPU Type | Power Consumption From GPUs |
 |----------------|---------------------|-----------------------------|
-| GRAG-NEMO-
+| GRAG-NEMO-ORPO | A100 ([Hessian AI supercomputer](https://hessian.ai/de/)) | 0.288 MWh |
 ## Bias, Risks, and Limitations
 
 Like any base language model or fine-tuned model without safety filtering, it is relatively easy for a user to prompt these models to generate harmful and generally sensitive content.
 Such content can also be produced unintentionally, especially in the case of bias, so we recommend users consider the risks of applications of this technology.
 
-Otherwise, many facts from GRAG-NEMO-
+In addition, many statements produced by GRAG-NEMO-ORPO, or any LLM, will not be factually correct, so outputs should be verified before use.
 
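A quick sanity check on the reported figure (added here for context, not part of the original card): assuming an average draw of roughly 300 W per GPU, 8 GPUs × 0.3 kW × 120 h = 288 kWh ≈ 0.288 MWh, which matches the value in the table.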