mandelakori committed on
Commit
a5cf2e8
1 Parent(s): 562536b

Update README.md

Files changed (1): README.md (+26, -419)

README.md CHANGED

The previous contents of README.md, the OpenAI Whisper (whisper-tiny) model card, were removed in full: the YAML metadata (languages, tags, widget samples, evaluation results, and the Apache 2.0 license), the Whisper model description and checkpoint table, the transcription, translation, evaluation, and long-form transcription examples, and the sections on fine-tuning, evaluated use, training data, performance and limitations, broader implications, and the BibTeX citation. They were replaced with the following file:
 
# AISAK-Listen

### Overview:

AISAK, short for Artificially Intelligent Swiss Army Knife, is a general-purpose AI system comprising various models, each designed for a different task. Among them is AISAK-Listen, an automatic speech recognition (ASR) model developed by Mandela Logan and fine-tuned on extensive datasets to convert spoken language into written text.
 
 
### Model Information:

- **Model Name**: AISAK-Listen
- **Version**: 1.0
- **Model Architecture**: Sequence-to-sequence (seq2seq) encoder-decoder
- **Specialization**: AISAK-Listen is a dedicated ASR model within the AISAK system, built on the [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) architecture and fine-tuned for quick speech recognition tasks; a usage sketch follows below.
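
Because AISAK-Listen follows the whisper-tiny sequence-to-sequence architecture, it should be loadable with the standard Whisper classes from Hugging Face Transformers. The snippet below is a minimal sketch, not an official example: the checkpoint id `mandelakori/AISAK-Listen` is an assumed placeholder, and the audio comes from a small public test dataset.

```python
# Minimal sketch: load a Whisper-style checkpoint and transcribe a short clip.
# NOTE: "mandelakori/AISAK-Listen" is an assumed checkpoint id, not a confirmed repository name.
from transformers import WhisperProcessor, WhisperForConditionalGeneration
from datasets import load_dataset

processor = WhisperProcessor.from_pretrained("mandelakori/AISAK-Listen")
model = WhisperForConditionalGeneration.from_pretrained("mandelakori/AISAK-Listen")

# Load one 16 kHz audio sample from a small dummy dataset.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
sample = ds[0]["audio"]

# Convert raw audio to log-Mel input features, then generate and decode token ids.
inputs = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt")
predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```

Decoding with `skip_special_tokens=True` strips the Whisper context tokens (language, task, timestamp markers) from the output text.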
 
### Intended Use:

As part of the AISAK system, AISAK-Listen is intended to provide reliable, high-quality speech-to-text conversion for applications such as transcription services, voice assistants, and voice-controlled systems. It is designed to transcribe quick speech accurately and with minimal delay, making it suitable for real-time speech recognition.
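
Whisper-style models process audio in 30-second windows, so longer recordings are usually handled with the chunked automatic-speech-recognition pipeline in Transformers. This is a minimal sketch under the same assumptions as above: the checkpoint id and the file name `meeting_recording.wav` are placeholders.

```python
# Minimal sketch: chunked inference for audio longer than a single 30-second window.
# NOTE: the checkpoint id and the audio file name are placeholders.
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"

asr = pipeline(
    "automatic-speech-recognition",
    model="mandelakori/AISAK-Listen",
    chunk_length_s=30,  # split long audio into 30-second chunks
    device=device,
)

# Any local audio file (wav/flac/mp3) can be passed by path; batching speeds up long files.
result = asr("meeting_recording.wav", batch_size=8, return_timestamps=True)
print(result["text"])    # full transcript
print(result["chunks"])  # per-segment text with (start, end) timestamps
```

With `return_timestamps=True`, each chunk carries approximate start and end times, which is useful for subtitling or aligning transcripts to audio.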
 
### Performance:

AISAK-Listen has undergone extensive testing to ensure its performance meets demanding standards. It achieves strong accuracy in converting spoken language to written text while remaining fast and efficient compared with other ASR models, and it has been evaluated on diverse speech datasets to check generalization across different speakers and speech patterns.
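
One standard way to make such accuracy claims concrete is to report word error rate (WER) on a public benchmark. The sketch below follows the usual Whisper evaluation recipe on LibriSpeech test-clean; it assumes the placeholder checkpoint id used above and does not report any measured result.

```python
# Illustrative WER evaluation on LibriSpeech test-clean (no results implied).
# NOTE: "mandelakori/AISAK-Listen" is an assumed checkpoint id.
import torch
from datasets import load_dataset
from evaluate import load
from transformers import WhisperForConditionalGeneration, WhisperProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = WhisperProcessor.from_pretrained("mandelakori/AISAK-Listen")
model = WhisperForConditionalGeneration.from_pretrained("mandelakori/AISAK-Listen").to(device)

test_set = load_dataset("librispeech_asr", "clean", split="test")

def map_to_pred(batch):
    # Transcribe one example and store normalized reference/prediction strings.
    audio = batch["audio"]
    features = processor(
        audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt"
    ).input_features
    with torch.no_grad():
        predicted_ids = model.generate(features.to(device))[0]
    batch["reference"] = processor.tokenizer._normalize(batch["text"])
    batch["prediction"] = processor.tokenizer._normalize(
        processor.decode(predicted_ids, skip_special_tokens=True)
    )
    return batch

result = test_set.map(map_to_pred)
wer = load("wer")
print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
```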
 
 
 
 
### Ethical Considerations:

- **Bias Mitigation**: AISAK-Listen undergoes training processes that aim to minimize bias. However, biases may still be present in the transcriptions generated by the model.
- **Fair Use**: Users are advised to exercise caution when utilizing AISAK-Listen in sensitive or critical contexts. Generated transcriptions should be reviewed and verified to ensure their accuracy and fairness.
 
### Limitations:

- AISAK-Listen's performance is optimized for quick speech recognition and may not be as effective for specialized speech styles or accents.
- The model's accuracy may vary when exposed to speech data that significantly differs from the quick speech it was trained on.
 
### Deployment:

Inference for AISAK-Listen will be handled as part of the full deployment of the AISAK system. That process is lengthy and intensive in many areas, and the emphasis is on achieving the optimal system rather than the quickest one. Even so, work is proceeding as fast as possible, and updates will be provided as frequently as possible.
 
 
 
 
### Caveats:

- It is recommended to review and validate the transcriptions generated by AISAK-Listen, particularly in critical or high-stakes situations where accuracy is crucial.
 
### Model Card Information:

- **Model Card Created**: February 19, 2024
- **Last Updated**: February 19, 2024
- **Contact Information**: For any inquiries or communication purposes, please reach out to [email protected].