# Model Card for Model ID
### Model Description
This model is a fine-tuned version of Gemma 2B, a multilingual transformer, optimized for translating text from English to Hindi. It leverages the capabilities of the original Gemma architecture to provide accurate and efficient translations.
Model Name: Gemma-2b-mt-Hindi-Fintuned

Framework: Transformers

### Model Sources [optional]
## Uses
### Out-of-Scope Use
## Bias, Risks, and Limitations
- The model may struggle with idiomatic expressions or culturally specific content.
## How to Get Started with the Model
Use the code below to get started with the model:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Satwik11/gemma-2b-mt-Hindi-Fintuned")
model = AutoModelForCausalLM.from_pretrained("Satwik11/gemma-2b-mt-Hindi-Fintuned")

def generate_translation(prompt, max_length=90):
    # Prepare the input
    inputs = tokenizer(prompt, return_tensors='pt')

    # Generate the translation
    outputs = model.generate(**inputs, max_length=max_length)

    # Decode the generated output
    translated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)

    return translated_text

# Test the model with some example sentences
test_sentences = [
    "Today is August 19. The maximum temperature is 70 degrees Fahrenheit."
]

for sentence in test_sentences:
    prompt = f"Translate the following English text to Hindi: {sentence}"
    translation = generate_translation(prompt)
    print(translation)
```
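Note that `model.generate` on a causal LM returns the prompt tokens followed by the continuation, so the decoded string typically begins with the instruction itself. A minimal helper for stripping the echoed prompt (a hypothetical addition, not part of the original snippet) might look like:

```python
def strip_prompt(decoded: str, prompt: str) -> str:
    """Remove the echoed prompt from a causal LM's decoded output."""
    if decoded.startswith(prompt):
        return decoded[len(prompt):].strip()
    return decoded.strip()

# Example with a hypothetical decoded string that echoes its prompt:
decoded = "Translate the following English text to Hindi: Hello नमस्ते"
print(strip_prompt(decoded, "Translate the following English text to Hindi: Hello"))
```

Calling `strip_prompt(translation, prompt)` after `generate_translation` would then return only the Hindi portion of the output.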
## Training Details
### Training Data
The model was fine-tuned on the cfilt/iitb-english-hindi dataset, which contains English-Hindi sentence pairs. For more details about the dataset, refer to the dataset card on Hugging Face.
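Each record in cfilt/iitb-english-hindi is a translation pair keyed by language code. The sketch below uses a hypothetical in-memory record mirroring that layout (the real corpus would be fetched with `datasets.load_dataset`, which downloads the full dataset):

```python
# from datasets import load_dataset
# dataset = load_dataset("cfilt/iitb-english-hindi")  # downloads the corpus

# Hypothetical record mirroring the dataset's layout:
record = {"translation": {"en": "How are you?", "hi": "आप कैसे हैं?"}}

def to_pair(rec):
    """Unpack a translation record into (source, target) strings."""
    t = rec["translation"]
    return t["en"], t["hi"]

src, tgt = to_pair(record)
print(src, "->", tgt)
```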
## Model Card Contact
For more information, please contact the model creators through the Hugging Face model repository, or reach out via LinkedIn: https://www.linkedin.com/in/satwik-sinha/