Update README.md
---
library_name: transformers
language:
- en
- hy
base_model:
- intfloat/multilingual-e5-base
---

# Armenian-Text-Embeddings-1

## Model Details
- **Model Name**: Armenian-Text-Embeddings-1
- **Model Type**: Text Embeddings for Armenian Language
- **Base Model**: intfloat/multilingual-e5-base
- **Version**: 1.0.0
- **License**: Apache 2.0
- **Last Updated**: November 2024
- **Model Architecture**: Transformer-based embedding model
- **Input**: Armenian text
- **Output**: Dense vector embeddings

## Quick Start
```python
import torch.nn.functional as F

from torch import Tensor
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('Metric-AI/armenian-text-embeddings-1')
model = AutoModel.from_pretrained('Metric-AI/armenian-text-embeddings-1')


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


# Each input text should start with "query: " or "passage: ", even for non-English texts.
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
    'query: Ինչպե՞ս պատրաստել տոլմա',  # How to make tolma
    'query: Քանի՞ գրամ սպիտակուց է հարկավոր օրական',  # How many grams of protein needed daily

    """passage: Տոլմայի բաղադրատոմս՝
Բաղադրիչներ՝
- 500գ աղացած միս
- 1 բաժակ բրինձ
- Խաղողի տերևներ
- 2 գլուխ սոխ
- Համեմունքներ՝ աղ, սև պղպեղ, քարի

Պատրաստման եղանակը՝
1. Միսը խառնել բրնձի, մանր կտրատած սոխի և համեմունքների հետ
2. Խաղողի տերևները լվանալ և թողնել տաք ջրի մեջ 10 րոպե
3. Լցոնել տերևները և դասավորել կաթսայի մեջ
4. Եփել դանդաղ կրակի վրա 45-60 րոպե""",  # Detailed tolma recipe

    """passage: Սպիտակուցի օրական չափաբաժինը կախված է մարդու քաշից, սեռից և ֆիզիկական ակտիվությունից:
Միջին հաշվով, կանանց համար խորհուրդ է տրվում 46-50 գրամ սպիտակուց օրական:
Մարզիկների համար այս թիվը կարող է հասնել մինչև 1.6-2 գրամ մարմնի քաշի յուրաքանչյուր կիլոգրամի համար:
Հղիների համար պահանջվում է լրացուցիչ 25 գրամ սպիտակուց:

Սպիտակուցի հարուստ աղբյուրներ են՝
- Հավի միս (31գ/100գ)
- Ձու (13գ/100գ)
- Ոսպ (25գ/100գ)
- Մածուն (3.5գ/100գ)""",  # Detailed protein intake advice
]

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# Normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())

# [[83.96063232421875, 30.283924102783203], [32.504661560058594, 82.4246826171875]]
```

## Intended Use
### Primary Intended Uses
- Semantic search in Armenian
- Document similarity computation
- Cross-lingual text understanding
- Text classification tasks
- Information retrieval (see the retrieval sketch below)

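For retrieval, queries and passages are embedded with their respective prefixes and ranked by cosine similarity, which for normalized vectors is just a dot product. Below is a minimal sketch reusing the pooling from the Quick Start; the `embed_texts` helper and the sample documents are illustrative, not part of the model's API:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('Metric-AI/armenian-text-embeddings-1')
model = AutoModel.from_pretrained('Metric-AI/armenian-text-embeddings-1')
model.eval()


def embed_texts(texts):
    # Tokenize, mean-pool over non-padding tokens, then L2-normalize.
    batch = tokenizer(texts, max_length=512, padding=True,
                      truncation=True, return_tensors='pt')
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch['attention_mask']
    hidden = hidden.masked_fill(~mask[..., None].bool(), 0.0)
    pooled = hidden.sum(dim=1) / mask.sum(dim=1)[..., None]
    return F.normalize(pooled, p=2, dim=1)


# Index a few passages once, then rank them against a query.
docs = [
    'passage: Տոլման պատրաստվում է խաղողի տերևներով',  # Tolma is made with grape leaves
    'passage: Սպիտակուցը կարևոր է մկանների համար',     # Protein is important for muscles
]
doc_emb = embed_texts(docs)
query_emb = embed_texts(['query: Ինչպե՞ս պատրաստել տոլմա'])  # How to make tolma

scores = (query_emb @ doc_emb.T).squeeze(0)  # cosine similarities
print(docs[int(scores.argmax())])
```
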
## Training Data
### Dataset Details
- **Source**: Reddit dataset with English-Armenian translations
- **Size**: 1.08M text pairs
- **Content Type**: Title and body text pairs
- **Token Statistics**:
  - Training Set:
    - Translated Title Tokens: 23,921,393
    - Translated Body Tokens: 194,200,654
  - Test Set:
    - Translated Title Tokens: 242,443
    - Translated Body Tokens: 1,946,164
- **Split Ratio**: 99% train, 1% test

## Training Procedure
### Training Details
- **Weight Averaging** (see the sketch after this list):
  - Base model (multilingual-e5-base): 0.6 weight
  - Fine-tuned model: 0.4 weight
- **Training Duration**: 2 days
- **Hardware**: 4 x NVIDIA A100 40GB GPUs
- **Training Parameters**:
  - Epochs: 5
  - Batch Size: 256 per GPU (1,024 total across 4 GPUs)
  - Learning Rate: 5e-5
  - Weight Decay: 0.01
  - Warmup Steps: 1000
  - Maximum Sequence Length: 128 tokens
  - FP16 Training: Enabled
  - Gradient Clipping: 1.0

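The 0.6/0.4 blend above is a plain linear interpolation of parameter tensors. The following is a hypothetical sketch of how such a merge can be reproduced; the fine-tuned checkpoint path is a placeholder, since only the merged model is published:

```python
import torch
from transformers import AutoModel

base = AutoModel.from_pretrained('intfloat/multilingual-e5-base')
# Placeholder path: the intermediate fine-tuned checkpoint is not published.
tuned = AutoModel.from_pretrained('path/to/armenian-finetuned-checkpoint')

tuned_state = tuned.state_dict()
merged_state = {
    # 0.6 * base + 0.4 * fine-tuned, tensor by tensor; integer buffers
    # (e.g. position ids) are copied from the base model unchanged.
    name: (0.6 * param + 0.4 * tuned_state[name]).to(param.dtype)
    if param.is_floating_point() else param
    for name, param in base.state_dict().items()
}

base.load_state_dict(merged_state)
base.save_pretrained('armenian-text-embeddings-1-merged')
```
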
### Optimization Configuration
- **Framework**: DeepSpeed Stage 2 (an illustrative config is sketched below)
- **Optimizer**: AdamW with auto weight decay
- **Mixed Precision**: FP16 with dynamic loss scaling
- **ZeRO Optimization**: Stage 2 with:
  - Allgather partitions
  - Overlap communications
  - Contiguous gradients
- **Additional Features**:
  - Gradient checkpointing
  - Tensor parallelism (size: 2)

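Expressed as a DeepSpeed configuration, the options above would look roughly like the dictionary below. This is a reconstruction for illustration, not the verbatim training file; the `"auto"` value assumes the Hugging Face Trainer integration, which resolves it from the training arguments:

```python
# Approximate DeepSpeed ZeRO Stage 2 configuration mirroring the list above.
ds_config = {
    "train_micro_batch_size_per_gpu": 256,
    "gradient_clipping": 1.0,
    "fp16": {
        "enabled": True,   # mixed-precision training
        "loss_scale": 0,   # 0 selects dynamic loss scaling
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": 5e-5,
            "weight_decay": "auto",  # resolved by the HF Trainer integration
        },
    },
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": True,
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
}
```
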
## Performance and Limitations
### Capabilities
- Effective for semantic similarity tasks in Armenian
- Suitable for document classification and clustering

### Limitations
- Performance may vary on domain-specific terminology
- May not capture Armenian-specific cultural contexts effectively
- Limited by the quality of training data translations

### Known Biases
- May exhibit biases present in Reddit content

## Computational Requirements
### Training Infrastructure
- 4 x NVIDIA A100 40GB GPUs
- DeepSpeed-compatible setup
- Minimum 128GB system RAM

### Inference Requirements
- Minimum: 8GB GPU RAM
- Recommended: 16GB GPU RAM
- CPU inference: possible but significantly slower (see the device-selection sketch below)
- System RAM: 16GB minimum

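Since CPU inference works but is slower, a common pattern is to pick the device at load time. A small sketch (the sample query is illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Use a GPU when one is available; otherwise fall back to (slower) CPU inference.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained('Metric-AI/armenian-text-embeddings-1')
model = AutoModel.from_pretrained('Metric-AI/armenian-text-embeddings-1').to(device)

batch = tokenizer(['query: բարև'], return_tensors='pt').to(device)  # "hello"
with torch.no_grad():
    outputs = model(**batch)
```
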
## Environmental Impact
- **Training Hardware**: 4 x NVIDIA A100 40GB
- **Training Duration**: 48 hours
- **Estimated Energy Consumption**: 384 kWh (based on A100 power draw)

## Ethical Considerations
- **Data Privacy**: Training data comes from public Reddit content
- **Potential Misuse**: Could be misused for content manipulation or spam
- **Bias**: May perpetuate social biases present in Reddit content
- **Recommendations**:
  - Monitor system outputs for harmful content
  - Implement content filtering for production use
  - Conduct regular bias assessments

## Technical Specifications
- **Model Size**: ~278M parameters (based on e5-base)
- **Embedding Dimension**: 768 (see the check below)
- **Max Sequence Length**: 128 tokens (fine-tuning length; the base architecture supports up to 512)
- **Framework Compatibility**:
  - PyTorch
  - Hugging Face Transformers
  - DeepSpeed

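These figures can be verified locally from the model config using standard transformers fields; a quick sanity-check sketch:

```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained('Metric-AI/armenian-text-embeddings-1')
print(config.hidden_size)              # embedding dimension
print(config.max_position_embeddings)  # positional capacity of the base architecture

model = AutoModel.from_pretrained('Metric-AI/armenian-text-embeddings-1')
print(f"{sum(p.numel() for p in model.parameters()):,}")  # parameter count (~278M)
```
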
## Citation
```bibtex
@misc{armenian-text-embeddings-2024,
  author       = {[Your Organization]},
  title        = {Armenian-Text-Embeddings-1: Enhanced Armenian Language Embeddings},
  year         = {2024},
  publisher    = {GitHub},
  journal      = {GitHub repository},
  howpublished = {\url{[repository-url]}}
}
```

## Additional Information
### Base Model References
- multilingual-e5-base: [https://huggingface.co/intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base)

### Acknowledgments
- intfloat for the original multilingual-e5-base model
- The Reddit community for the source content
- The DeepSpeed team for the optimization toolkit

## Version History
- 1.0.0 (November 2024): Initial release