SpideyDLK/wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part3
Tags: Automatic Speech Recognition · Transformers · TensorBoard · Safetensors · wav2vec2 · Generated from Trainer · Inference Endpoints
License: apache-2.0
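The checkpoint is tagged for automatic speech recognition with Transformers, so it should load through the standard ASR pipeline. A minimal sketch, not taken from the repository itself: the audio path is a placeholder, and file-path inputs assume ffmpeg is available so the pipeline can decode and resample to the 16 kHz rate XLS-R models expect.

from transformers import pipeline

# Load the checkpoint through the generic ASR pipeline; this pulls config.json,
# model.safetensors, and the tokenizer/feature-extractor files listed below.
asr = pipeline(
    "automatic-speech-recognition",
    model="SpideyDLK/wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part3",
)

# "audio.wav" is a placeholder path; with a file path the pipeline decodes the
# audio (via ffmpeg) at the feature extractor's sampling rate before inference.
print(asr("audio.wav"))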
Files and versions
Branch: main · 1 contributor · History: 59 commits
Latest commit 2ba4893 by SpideyDLK: "wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part3" (9 months ago)
Directories (name · last commit message · last updated):
language_model/ · wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part3 · 9 months ago (see the decoding sketch below)
last-checkpoint/ · Training in progress, step 30800, checkpoint · 9 months ago
runs/ · End of training · 9 months ago
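The language_model/ directory, together with the alphabet.json file further down the listing, suggests an n-gram language model packaged for pyctcdecode, which Transformers exposes through Wav2Vec2ProcessorWithLM. The following is a hedged sketch of LM-boosted decoding, assuming the repository is laid out for that processor and that pyctcdecode and kenlm are installed; the zero-valued waveform is a stand-in for real 16 kHz audio.

import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

repo = "SpideyDLK/wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part3"

# ProcessorWithLM bundles the feature extractor, tokenizer, and the KenLM-based
# beam-search decoder built from language_model/ and alphabet.json.
processor = Wav2Vec2ProcessorWithLM.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

speech = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence at 16 kHz
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# batch_decode runs beam search against the n-gram LM over the raw logits.
print(processor.batch_decode(logits.numpy()).text)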
Files (name · size · last commit message · last updated):
.gitattributes · 1.59 kB · wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part3 · 9 months ago
README.md · 6.21 kB · End of training · 9 months ago
added_tokens.json · 30 Bytes · Upload tokenizer · 9 months ago
alphabet.json · 809 Bytes · wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part3 · 9 months ago
config.json · 2.09 kB · Training in progress, step 20400 · 9 months ago
model.safetensors · 1.26 GB · LFS · End of training · 9 months ago
preprocessor_config.json · 262 Bytes · wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part3 · 9 months ago
special_tokens_map.json · 406 Bytes · wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part3 · 9 months ago
tokenizer_config.json · 1.1 kB · wav2vec2-large-xls-r-300m-sinhala-aug-data-with-original-split-part3 · 9 months ago
training_args.bin · 5.05 kB · LFS · pickle · Training in progress, step 30400 · 9 months ago (see the inspection sketch after this listing)
Detected Pickle imports (9): transformers.trainer_utils.IntervalStrategy, transformers.trainer_utils.HubStrategy, transformers.trainer_utils.SchedulerType, transformers.trainer_pt_utils.AcceleratorConfig, torch.device, transformers.training_args.TrainingArguments, accelerate.utils.dataclasses.DistributedType, transformers.training_args.OptimizerNames, accelerate.state.PartialState
vocab.json · 1.01 kB · Upload tokenizer · 9 months ago
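training_args.bin is a pickled transformers TrainingArguments object rather than model weights; the pickle scan above lists the classes it references. A minimal inspection sketch, assuming a recent PyTorch plus transformers and accelerate versions compatible with those classes; only unpickle files from sources you trust.

import torch

# The file is a full pickle, so weights_only must stay False and the referenced
# transformers/accelerate classes must be importable in the current environment.
args = torch.load("training_args.bin", map_location="cpu", weights_only=False)

# TrainingArguments fields such as these record how the run was configured.
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)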