| modelId (string, 4-81 chars) | tags (sequence) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars) |
---|---|---|---|---|---|---|
CohleM/bert-nepali-tokenizer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimizations include:
1. NNCF Quantization-Aware Training - symmetric 8-bit for both weights and activations on all learnable layers (a config sketch follows this list).
2. Custom distillation with the large model ```bert-large-uncased-whole-word-masking-finetuned-squad```
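For orientation, here is a minimal sketch, assuming NNCF's documented JSON config schema, of what a symmetric 8-bit QAT config along these lines could look like; the actual ```nncf_bert_squad_qat.json``` is downloaded in the Train step below and may differ.
```python
# Hypothetical NNCF QAT config (an assumption for illustration only; the real
# nncf_bert_squad_qat.json fetched in the Train step is authoritative).
import json

nncf_cfg = {
    "input_info": [
        {"sample_size": [1, 384], "type": "long"},  # input_ids
        {"sample_size": [1, 384], "type": "long"},  # attention_mask
        {"sample_size": [1, 384], "type": "long"},  # token_type_ids
    ],
    "compression": {
        "algorithm": "quantization",
        "weights": {"mode": "symmetric", "bits": 8},
        "activations": {"mode": "symmetric", "bits": 8},
    },
}
with open("nncf_bert_squad_qat.sketch.json", "w") as f:
    json.dump(nncf_cfg, f, indent=2)
```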
```
eval_exact_match = 80.7001
eval_f1 = 87.9777
eval_samples = 10784
```
# Setup
```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2
python setup.py develop
pip install -r examples/torch/requirements.txt
# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"
# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
# Additional dependencies
pip install onnx
```
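To confirm the editable installs above are the ones actually imported, a quick sanity check (a sketch; the printed paths should point into the cloned forks):
```python
# Verify the pinned forks are active in the current environment.
import nncf
import transformers

print(transformers.__version__, "from", transformers.__file__)
print(nncf.__version__, "from", nncf.__file__)
```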
# Train
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt
BASE_MODEL=/path/to/cloned_repo_above #to-revise
wget https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt/raw/main/nncf_bert_squad_qat.json
NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise
OUTROOT=/path/to/train_output_root #to-revise
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
RUNID=bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
cd $WORKDIR
OUTDIR=$OUTROOT/$RUNID
mkdir -p $OUTDIR
export CUDA_VISIBLE_DEVICES=0
NEPOCH=5
python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--optimize_model_before_eval \
--optimized_checkpoint $BASE_MODEL \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 250 \
--learning_rate 3e-5 \
--lr_scheduler_type cosine_with_restarts \
--warmup_ratio 0.25 \
--cosine_cycles 1 \
--teacher bert-large-uncased-whole-word-masking-finetuned-squad \
--teacher_ratio 0.9 \
--num_train_epochs $NEPOCH \
--per_device_eval_batch_size 128 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 250 \
--nncf_config $NNCF_CFG \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR
```
# Eval
This repo must be cloned locally.
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
MODELROOT=/path/to/cloned_repo_above #to-revise
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--dataset_name squad \
--optimize_model_before_eval \
--qat_checkpoint $MODELROOT/checkpoint-26750 \
--nncf_config $MODELROOT/nncf_bert_squad_qat.json \
--to_onnx $OUTDIR/bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt.onnx \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
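As an optional sanity check on the file emitted by ```--to_onnx```, a short sketch (assuming ```onnxruntime``` is installed; int64 inputs are an assumption based on typical BERT exports):
```python
# Run a zero-filled dummy batch through the exported ONNX graph.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("bert-base-squadv1-block-pruning-hybrid-filled-lt-qat-lt.onnx")
# Substitute 1 for any symbolic dimension in the declared input shapes.
feed = {i.name: np.zeros([d if isinstance(d, int) else 1 for d in i.shape], dtype=np.int64)
        for i in sess.get_inputs()}
outputs = sess.run(None, feed)
print([o.shape for o in outputs])  # expect start/end logit tensors
```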
### tile-alignment
To evaluate a tile-alignment checkpoint, add ```--tile_alignment``` and point ```--qat_checkpoint``` to a checkpoint with the 'tilealigned' postfix. Use branch ```tld-poc``` with commit id ```c525c52cq``` |
CohleM/mbert-nepali-tokenizer | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | This model is a downstream fine-tuning of [```vuiseng9/bert-base-squadv1-block-pruning-hybrid```](https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid). "filled" means the unstructured, fine-grained sparsified parameters are allowed to learn during fine-tuning; "lt" means distillation with a larger model as teacher, i.e. ```bert-large-uncased-whole-word-masking-finetuned-squad```
```
eval_exact_match = 80.3311
eval_f1 = 87.69
eval_samples = 10784
```
This model is a replication of [block pruning paper](https://arxiv.org/abs/2109.04838) with its open-sourced codebase (forked and modified).
To reproduce this model, please follow the [documentation here](https://github.com/vuiseng9/nn_pruning/blob/reproduce-evaluation/reproduce-eval/readme.md) until step 3.
# Eval
The model cannot be evaluated with the HF QA example out-of-the-box because the final dimensions of the model architecture have been realized. Follow the custom setup below.
```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2
python setup.py develop
pip install -r examples/torch/requirements.txt
# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"
# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
```
This repo must be cloned locally.
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-block-pruning-hybrid-filled-lt
```
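To see the unstructured sparsity that "filled" fine-tuning preserves, a quick sketch (assuming the clone above fetched the weights via git-lfs and the standard HF checkpoint layout):
```python
# Report the zero fraction of every 2-D weight tensor in the checkpoint.
import torch

sd = torch.load("bert-base-squadv1-block-pruning-hybrid-filled-lt/pytorch_model.bin",
                map_location="cpu")
for name, t in sd.items():
    if name.endswith(".weight") and t.dim() == 2:
        print(f"{name}: {(t == 0).float().mean().item():.2%} zeros")
```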
Add ```--optimize_model_before_eval``` and ```--optimized_checkpoint /path/to/clone``` during evaluation.
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-filled-lt-cropped
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--dataset_name squad \
--optimize_model_before_eval \
--optimized_checkpoint /path/to/clone/bert-base-squadv1-block-pruning-hybrid-filled-lt \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
Coldestadam/Breakout_Mentors_SpongeBob_Model | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | BERT-base fine-tuned on SQuADv1.1, pruned with the movement pruning algorithm in a hybrid fashion, i.e. 32x32 blocks for the self-attention layers and per-dimension grain size for the FFN layers.
```
eval_exact_match = 78.5241
eval_f1 = 86.4138
eval_samples = 10784
```
This model is a replication of [block pruning paper](https://arxiv.org/abs/2109.04838) with its open-sourced codebase (forked and modified).
To reproduce this model, please follow the [documentation here](https://github.com/vuiseng9/nn_pruning/blob/reproduce-evaluation/reproduce-eval/readme.md) until step 2.
# Eval
The model can be evaluated out-of-the-box with the HF QA example. Note that only pruned self-attention heads are discarded, while pruned FFN dimensions are sparsified instead of removed. Verified with v4.13.0 and v4.9.1.
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
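A short sketch (assumptions: ```transformers``` installed; attribute paths follow the stock BERT implementation) to observe the hybrid structure described above, i.e. fewer attention heads but dense-shaped, sparsified FFN weights:
```python
# Inspect the hybrid pruning: discarded attention heads vs. sparsified FFN.
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained(
    "vuiseng9/bert-base-squadv1-block-pruning-hybrid")
layer0 = model.bert.encoder.layer[0]
print("remaining heads in layer 0:", layer0.attention.self.num_attention_heads)
w = layer0.intermediate.dense.weight
print(f"layer-0 FFN weight zeros: {(w == 0).float().mean().item():.2%}")
```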
If the intent is to observe inference acceleration, the pruned structures in the model must be "cropped" (discarded). Follow the custom setup below.
```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2
python setup.py develop
pip install -r examples/torch/requirements.txt
# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"
# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
```
Add ```--optimize_model_before_eval``` during evaluation.
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-block-pruning-hybrid-cropped
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-block-pruning-hybrid \
--dataset_name squad \
--optimize_model_before_eval \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
ComCom/gpt2-large | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | This model is a downstream optimization of [```vuiseng9/bert-base-squadv1-pruneofa-90pc-bt```](https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt) using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimizations include:
1. Magnitude sparsification at 0% upon initialization; custom reverse masking and sparsity freezing are applied.
2. NNCF Quantization-Aware Training - symmetric 8-bit for both weights and activations on all learnable layers.
3. Custom distillation with the large model ```bert-large-uncased-whole-word-masking-finetuned-squad```
```
eval_exact_match = 80.6623
eval_f1 = 87.7147
eval_samples = 10784
```
# Setup
```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 5647610d5ee2bf9f1324604e6579bca1c391e260
python setup.py develop
pip install -r examples/torch/requirements.txt
# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"
# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 5dd7402e9a316041dea4ff67508c01047323616e
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
# Additional dependencies
pip install onnx
```
# Train
```bash
wget https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt-qat-lt/raw/main/nncf_bert_squad_sparsity.json
NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise
OUTROOT=/path/to/train_output_root #to-revise
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
RUNID=bert-base-squadv1-pruneofa-90pc-bt-qat-lt
cd $WORKDIR
OUTDIR=$OUTROOT/$RUNID
mkdir -p $OUTDIR
export CUDA_VISIBLE_DEVICES=0
NEPOCH=5
python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt \
--pruneofa_qat \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 250 \
--learning_rate 3e-5 \
--lr_scheduler_type cosine_with_restarts \
--warmup_ratio 0.25 \
--cosine_cycles 1 \
--teacher bert-large-uncased-whole-word-masking-finetuned-squad \
--teacher_ratio 0.9 \
--num_train_epochs $NEPOCH \
--per_device_eval_batch_size 128 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 250 \
--nncf_config $NNCF_CFG \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR
```
# Eval
This repo must be cloned locally.
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-pruneofa-90pc-bt-qat-lt
MODELROOT=/path/to/cloned_repo_above #to-revise
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-pruneofa-90pc-bt-qat-lt
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt \
--dataset_name squad \
--qat_checkpoint $MODELROOT/checkpoint-22000 \
--nncf_config $MODELROOT/nncf_bert_squad_sparsity.json \
--to_onnx $OUTDIR/bert-base-squadv1-pruneofa-90pc-bt-qat-lt.onnx \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
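The ONNX produced above targets OpenVINO; a minimal sketch, assuming the OpenVINO >= 2022.1 Python runtime and 1x384 int64 inputs, of loading and running it on CPU:
```python
# Load the exported ONNX with the OpenVINO runtime and run a dummy batch.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("bert-base-squadv1-pruneofa-90pc-bt-qat-lt.onnx")
compiled = core.compile_model(model, "CPU")
feed = {inp.get_any_name(): np.zeros((1, 384), dtype=np.int64) for inp in compiled.inputs}
result = compiled(feed)
print([out.shape for out in result.values()])
```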
|
ComCom/gpt2-medium | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | This model is transfer learning of [bert-base pruneofa 90% sparse](https://huggingface.co/Intel/bert-base-uncased-sparse-90-unstructured-pruneofa) on the SQuADv1 dataset.
```
eval_exact_match = 80.2933
eval_f1 = 87.6788
eval_samples = 10784
```
# Train
Use https://github.com/IntelLabs/Model-Compression-Research-Package.git and see ```pruneofa-transfer-learning.sh``` there for the transfer-learning recipe.
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-pruneofa-90pc-bt
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1-pruneofa-90pc-bt \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
ComCom/gpt2 | [
"pytorch",
"gpt2",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | This model is a quantization-aware transfer learning of bert-base-uncased on SQuADv1 using [OpenVINO/NNCF](https://github.com/openvinotoolkit/nncf). Applied optimizations include:
1. NNCF Quantization-Aware Training - symmetric 8-bit for both weights and activations on all learnable layers.
2. Custom distillation with the fine-tuned model [```csarron/bert-base-uncased-squad-v1```](https://huggingface.co/csarron/bert-base-uncased-squad-v1) (a distillation-loss sketch follows this list)
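A sketch of one common distillation recipe consistent with the ```--teacher``` and ```--teacher_ratio 0.9``` flags used below; this is an assumption for illustration, the exact loss lives in the ```vuiseng9/transformers``` fork.
```python
# Hypothetical distillation loss: blend a temperature-softened KD term with
# the task loss, weighted by teacher_ratio (0.9 in the Train step below).
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, task_loss, teacher_ratio=0.9, T=2.0):
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    return teacher_ratio * kd + (1.0 - teacher_ratio) * task_loss
```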
```
eval_exact_match = 80.8136
eval_f1 = 88.2594
eval_samples = 10784
```
# Setup
```bash
# OpenVINO/NNCF
git clone https://github.com/vuiseng9/nncf && cd nncf
git checkout tld-poc
git reset --hard 1dec7afe7a4b567c059fcf287ea2c234980fded2
python setup.py develop
pip install -r examples/torch/requirements.txt
# Huggingface nn_pruning
git clone https://github.com/vuiseng9/nn_pruning && cd nn_pruning
git checkout reproduce-evaluation
git reset --hard 2d4e196d694c465e43e5fbce6c3836d0a60e1446
pip install -e ".[dev]"
# Huggingface Transformers
git clone https://github.com/vuiseng9/transformers && cd transformers
git checkout tld-poc
git reset --hard 10a1e29d84484e48fd106f58957d9ffc89dc43c5
pip install -e .
head -n 1 examples/pytorch/question-answering/requirements.txt | xargs -i pip install {}
# Additional dependencies
pip install onnx
```
# Train
```bash
wget https://huggingface.co/vuiseng9/bert-base-squadv1-qat-bt/raw/main/nncf_bert_squad_qat.json
NNCF_CFG=/path/to/downloaded_nncf_cfg_above #to-revise
OUTROOT=/path/to/train_output_root #to-revise
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
RUNID=bert-base-squadv1-qat-bt
cd $WORKDIR
OUTDIR=$OUTROOT/$RUNID
mkdir -p $OUTDIR
export CUDA_VISIBLE_DEVICES=0
NEPOCH=2
python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_eval \
--do_train \
--evaluation_strategy steps \
--eval_steps 250 \
--learning_rate 3e-5 \
--lr_scheduler_type cosine_with_restarts \
--warmup_ratio 0.25 \
--cosine_cycles 1 \
--teacher csarron/bert-base-uncased-squad-v1 \
--teacher_ratio 0.9 \
--num_train_epochs $NEPOCH \
--per_device_eval_batch_size 128 \
--per_device_train_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 250 \
--nncf_config $NNCF_CFG \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR
```
# Eval
This repo must be cloned locally.
```bash
git clone https://huggingface.co/vuiseng9/bert-base-squadv1-qat-bt
MODELROOT=/path/to/cloned_repo_above #to-revise
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1-qat-bt
WORKDIR=transformers/examples/pytorch/question-answering #to-revise
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-uncased-squad \
--dataset_name squad \
--qat_checkpoint $MODELROOT/checkpoint-10750 \
--nncf_config $MODELROOT/nncf_bert_squad_qat.json \
--to_onnx $OUTDIR/bert-base-squadv1-qat-bt.onnx \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
ComCom-Dev/gpt2-bible-test | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | This model is a fork of [```csarron/bert-base-uncased-squad-v1```](https://huggingface.co/csarron/bert-base-uncased-squad-v1).
```
eval_exact_match = 80.9082
eval_f1 = 88.2275
eval_samples = 10784
```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-squadv1
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-squadv1 \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
Cometasonmi451/Mine | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | This model is developed with transformers v4.10.3.
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=bert-base-uncased-mnli
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR
mkdir $OUTDIR
nohup python run_glue.py \
--model_name_or_path bert-base-uncased \
--task_name mnli \
--do_eval \
--do_train \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--max_seq_length 128 \
--num_train_epochs 3 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-uncased-mnli
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR
mkdir $OUTDIR
nohup python run_glue.py \
--model_name_or_path vuiseng9/bert-base-uncased-mnli \
--task_name mnli \
--do_eval \
--per_device_eval_batch_size 16 \
--max_seq_length 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
cometrain/neurotitle-rugpt3-small | [
"pytorch",
"gpt2",
"text-generation",
"ru",
"en",
"dataset:All-NeurIPS-Papers-Scraper",
"transformers",
"Cometrain AutoCode",
"Cometrain AlphaML",
"license:mit"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | This model is developed with transformers v4.10.3.
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=bert-base-uncased-squad
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_eval \
--do_train \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--doc_stride 128 \
--max_seq_length 384 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--eval_steps 250 \
--save_steps 2500 \
--logging_steps 1 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-base-uncased-squad
WORKDIR=transformers/examples/pytorch/question-answering
cd $WORKDIR
mkdir $OUTDIR
nohup python run_qa.py \
--model_name_or_path vuiseng9/bert-base-uncased-squad \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 16 \
--max_seq_length 384 \
--doc_stride 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
Connor/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | * A set of unstructured sparse bert-base-uncased models fine-tuned for SQuADv1.
* TensorFlow models are created using ```TFAutoModelForQuestionAnswering.from_pretrained(..., from_pt=True)``` and ```model.save_pretrained(tf_pth)``` (see the conversion sketch after the CLI below).
* Observed issue: loss in model translation; a discrepancy is observed in evaluation between the PyTorch and TensorFlow models.
* The table below is evaluated with HF transformers v4.9.2. Sparsity is normalized to the dense layers in attention heads and FFNN.
* Evaluation CLI:
```bash
python run_qa.py \
--model_name_or_path <model identifier> \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 384 \
--max_seq_length 68 \
--doc_stride 26 \
--output_dir /tmp/eval-squad
```
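A sketch of the PyTorch-to-TensorFlow conversion described in the bullets above, using the 85.4%-sparse checkpoint as an example (any of the models in the table works the same way):
```python
# Materialize TF weights from the PyTorch checkpoint, then save them.
from transformers import TFAutoModelForQuestionAnswering

tf_model = TFAutoModelForQuestionAnswering.from_pretrained(
    "vuiseng9/bert-base-uncased-squadv1-85.4-sparse", from_pt=True)
tf_model.save_pretrained("tf-bert-base-uncased-squadv1-85.4-sparse")
```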
| | HF Model Hub Identifier | sparsity | em (pytorch) | em (tf) | f1 (pytorch) | f1 (tf) |
|---:|:------------------------------------------------------------------------------------------------------------------------|-----------:|---------------:|----------:|---------------:|----------:|
| 0 | [vuiseng9/bert-base-uncased-squadv1-85.4-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-85.4-sparse) | 85.4 | 69.9338 | 14.2573 | 77.6861 | 23.4917 |
| 1 | [vuiseng9/bert-base-uncased-squadv1-72.9-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-72.9-sparse) | 72.9 | 74.6358 | 31.0596 | 82.2555 | 39.8446 |
| 2 | [vuiseng9/bert-base-uncased-squadv1-65.1-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-65.1-sparse) | 65.1 | 76.1306 | 43.0274 | 83.4117 | 51.4300 |
| 3 | [vuiseng9/bert-base-uncased-squadv1-59.6-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-59.6-sparse) | 59.6 | 76.8590 | 50.4920 | 84.1267 | 59.0881 |
| 4 | [vuiseng9/bert-base-uncased-squadv1-52.0-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-52.0-sparse) | 52.0 | 78.0038 | 54.2857 | 85.2000 | 62.2914 | |
Connor-tech/bert_cn_finetuning | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | * A set of unstructured sparse bert-base-uncased models fine-tuned for SQuADv1.
* TensorFlow models are created using ```TFAutoModelForQuestionAnswering.from_pretrained(..., from_pt=True)``` and ```model.save_pretrained(tf_pth)```.
* Observed issue: loss in model translation; a discrepancy is observed in evaluation between the PyTorch and TensorFlow models.
* The table below is evaluated with HF transformers v4.9.2. Sparsity is normalized to the dense layers in attention heads and FFNN.
* Evaluation CLI:
```bash
python run_qa.py \
--model_name_or_path <model identifier> \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 384 \
--max_seq_length 68 \
--doc_stride 26 \
--output_dir /tmp/eval-squad
```
| | HF Model Hub Identifier | sparsity | em (pytorch) | em (tf) | f1 (pytorch) | f1 (tf) |
|---:|:------------------------------------------------------------------------------------------------------------------------|-----------:|---------------:|----------:|---------------:|----------:|
| 0 | [vuiseng9/bert-base-uncased-squadv1-85.4-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-85.4-sparse) | 85.4 | 69.9338 | 14.2573 | 77.6861 | 23.4917 |
| 1 | [vuiseng9/bert-base-uncased-squadv1-72.9-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-72.9-sparse) | 72.9 | 74.6358 | 31.0596 | 82.2555 | 39.8446 |
| 2 | [vuiseng9/bert-base-uncased-squadv1-65.1-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-65.1-sparse) | 65.1 | 76.1306 | 43.0274 | 83.4117 | 51.4300 |
| 3 | [vuiseng9/bert-base-uncased-squadv1-59.6-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-59.6-sparse) | 59.6 | 76.8590 | 50.4920 | 84.1267 | 59.0881 |
| 4 | [vuiseng9/bert-base-uncased-squadv1-52.0-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-52.0-sparse) | 52.0 | 78.0038 | 54.2857 | 85.2000 | 62.2914 | |
Connorvr/BrightBot-small | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | * A set of unstructured sparse bert-base-uncased models fine-tuned for SQuADv1.
* TensorFlow models are created using ```TFAutoModelForQuestionAnswering.from_pretrained(..., from_pt=True)``` and ```model.save_pretrained(tf_pth)```.
* Observed issue: loss in model translation; a discrepancy is observed in evaluation between the PyTorch and TensorFlow models.
* The table below is evaluated with HF transformers v4.9.2. Sparsity is normalized to the dense layers in attention heads and FFNN.
* Evaluation CLI:
```bash
python run_qa.py \
--model_name_or_path <model identifier> \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 384 \
--max_seq_length 68 \
--doc_stride 26 \
--output_dir /tmp/eval-squad
```
| | HF Model Hub Identifier | sparsity | em (pytorch) | em (tf) | f1 (pytorch) | f1 (tf) |
|---:|:------------------------------------------------------------------------------------------------------------------------|-----------:|---------------:|----------:|---------------:|----------:|
| 0 | [vuiseng9/bert-base-uncased-squadv1-85.4-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-85.4-sparse) | 85.4 | 69.9338 | 14.2573 | 77.6861 | 23.4917 |
| 1 | [vuiseng9/bert-base-uncased-squadv1-72.9-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-72.9-sparse) | 72.9 | 74.6358 | 31.0596 | 82.2555 | 39.8446 |
| 2 | [vuiseng9/bert-base-uncased-squadv1-65.1-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-65.1-sparse) | 65.1 | 76.1306 | 43.0274 | 83.4117 | 51.4300 |
| 3 | [vuiseng9/bert-base-uncased-squadv1-59.6-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-59.6-sparse) | 59.6 | 76.8590 | 50.4920 | 84.1267 | 59.0881 |
| 4 | [vuiseng9/bert-base-uncased-squadv1-52.0-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-52.0-sparse) | 52.0 | 78.0038 | 54.2857 | 85.2000 | 62.2914 | |
Connorvr/TeachingGen | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | * A set of unstructured sparse bert-base-uncased models fine-tuned for SQuADv1.
* TensorFlow models are created using ```TFAutoModelForQuestionAnswering.from_pretrained(..., from_pt=True)``` and ```model.save_pretrained(tf_pth)```.
* Observed issue: loss in model translation; a discrepancy is observed in evaluation between the PyTorch and TensorFlow models.
* The table below is evaluated with HF transformers v4.9.2. Sparsity is normalized to the dense layers in attention heads and FFNN.
* Evaluation CLI:
```bash
python run_qa.py \
--model_name_or_path <model identifier> \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 384 \
--max_seq_length 68 \
--doc_stride 26 \
--output_dir /tmp/eval-squad
```
| | HF Model Hub Identifier | sparsity | em (pytorch) | em (tf) | f1 (pytorch) | f1 (tf) |
|---:|:------------------------------------------------------------------------------------------------------------------------|-----------:|---------------:|----------:|---------------:|----------:|
| 0 | [vuiseng9/bert-base-uncased-squadv1-85.4-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-85.4-sparse) | 85.4 | 69.9338 | 14.2573 | 77.6861 | 23.4917 |
| 1 | [vuiseng9/bert-base-uncased-squadv1-72.9-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-72.9-sparse) | 72.9 | 74.6358 | 31.0596 | 82.2555 | 39.8446 |
| 2 | [vuiseng9/bert-base-uncased-squadv1-65.1-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-65.1-sparse) | 65.1 | 76.1306 | 43.0274 | 83.4117 | 51.4300 |
| 3 | [vuiseng9/bert-base-uncased-squadv1-59.6-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-59.6-sparse) | 59.6 | 76.8590 | 50.4920 | 84.1267 | 59.0881 |
| 4 | [vuiseng9/bert-base-uncased-squadv1-52.0-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-52.0-sparse) | 52.0 | 78.0038 | 54.2857 | 85.2000 | 62.2914 | |
ConstellationBoi/Oop | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | * A set of unstructured sparse bert-base-uncased models fine-tuned for SQuADv1.
* TensorFlow models are created using ```TFAutoModelForQuestionAnswering.from_pretrained(..., from_pt=True)``` and ```model.save_pretrained(tf_pth)```.
* Observed issue: loss in model translation; a discrepancy is observed in evaluation between the PyTorch and TensorFlow models.
* The table below is evaluated with HF transformers v4.9.2. Sparsity is normalized to the dense layers in attention heads and FFNN.
* Evaluation CLI:
```bash
python run_qa.py \
--model_name_or_path <model identifier> \
--dataset_name squad \
--do_eval \
--per_device_eval_batch_size 384 \
--max_seq_length 68 \
--doc_stride 26 \
--output_dir /tmp/eval-squad
```
| | HF Model Hub Identifier | sparsity | em (pytorch) | em (tf) | f1 (pytorch) | f1 (tf) |
|---:|:------------------------------------------------------------------------------------------------------------------------|-----------:|---------------:|----------:|---------------:|----------:|
| 0 | [vuiseng9/bert-base-uncased-squadv1-85.4-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-85.4-sparse) | 85.4 | 69.9338 | 14.2573 | 77.6861 | 23.4917 |
| 1 | [vuiseng9/bert-base-uncased-squadv1-72.9-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-72.9-sparse) | 72.9 | 74.6358 | 31.0596 | 82.2555 | 39.8446 |
| 2 | [vuiseng9/bert-base-uncased-squadv1-65.1-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-65.1-sparse) | 65.1 | 76.1306 | 43.0274 | 83.4117 | 51.4300 |
| 3 | [vuiseng9/bert-base-uncased-squadv1-59.6-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-59.6-sparse) | 59.6 | 76.8590 | 50.4920 | 84.1267 | 59.0881 |
| 4 | [vuiseng9/bert-base-uncased-squadv1-52.0-sparse](https://huggingface.co/vuiseng9/bert-base-uncased-squadv1-52.0-sparse) | 52.0 | 78.0038 | 54.2857 | 85.2000 | 62.2914 | |
Contrastive-Tension/BERT-Base-CT-STSb | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ### Reproducibility
```bash
# 1. install nncf
# checkout nncf 9c2845eeb38b4ab1b6d4ca19e31a1886e5bdf17c
# patch b/nncf/torch/sparsity/magnitude/algo.py
def sparsify_params(self):
    from collections import OrderedDict
    sparse_sd = OrderedDict()
    with torch.no_grad():
        for sparse_info in self.sparsified_module_info:
            for n, m in self.model.named_modules():
                if m == sparse_info.module:
                    sparse_sd[n + '.weight'] = m.weight * sparse_info.operand.binary_mask
    model_sd = self.model.state_dict()
    for k, v in sparse_sd.items():
        assert k in model_sd, "key not exists!"
        model_sd[k] = sparse_sd[k]
    self.model.load_state_dict(model_sd)
# 2. transformers fork
git clone https://github.com/vuiseng9/transformers
cd transformers && git checkout gen-hybrid-sparse
# 3. follow
# transformers/examples/pytorch/question-answering/gen-hybrid-sparse/vscode-launch.json
```
### Key Content
```
gen_hybrid_sparse
├── 90pc_sparse-02_head-0512_ffnn
│   ├── 90pc_sparse-02_head-0512_ffnn-8bit.onnx
│   └── ir
│       ├── 90pc_sparse-02_head-0512_ffnn-8bit.bin
│       ├── 90pc_sparse-02_head-0512_ffnn-8bit.mapping
│       └── 90pc_sparse-02_head-0512_ffnn-8bit.xml
├── 90pc_sparse-04_head-1024_ffnn
│   ├── 90pc_sparse-04_head-1024_ffnn-8bit.onnx
│   └── ir
│       ├── 90pc_sparse-04_head-1024_ffnn-8bit.bin
│       ├── 90pc_sparse-04_head-1024_ffnn-8bit.mapping
│       └── 90pc_sparse-04_head-1024_ffnn-8bit.xml
├── 90pc_sparse-06_head-1536_ffnn
│   ├── 90pc_sparse-06_head-1536_ffnn-8bit.onnx
│   └── ir
│       ├── 90pc_sparse-06_head-1536_ffnn-8bit.bin
│       ├── 90pc_sparse-06_head-1536_ffnn-8bit.mapping
│       └── 90pc_sparse-06_head-1536_ffnn-8bit.xml
├── 90pc_sparse-08_head-2048_ffnn
│   ├── 90pc_sparse-08_head-2048_ffnn-8bit.onnx
│   └── ir
│       ├── 90pc_sparse-08_head-2048_ffnn-8bit.bin
│       ├── 90pc_sparse-08_head-2048_ffnn-8bit.mapping
│       └── 90pc_sparse-08_head-2048_ffnn-8bit.xml
└── 90pc_unstructured_sparse
    ├── 90pc_unstructured_sparse-8bit.onnx
    └── ir
        ├── 90pc_unstructured_sparse-8bit.bin
        ├── 90pc_unstructured_sparse-8bit.mapping
        └── 90pc_unstructured_sparse-8bit.xml
```
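The ```ir``` folders hold OpenVINO IR; a short sketch (assuming the OpenVINO Python runtime is installed) to load one:
```python
# Read an IR pair (.xml + .bin) and report its operation count.
from openvino.runtime import Core

model = Core().read_model(
    "gen_hybrid_sparse/90pc_unstructured_sparse/ir/90pc_unstructured_sparse-8bit.xml")
print(len(model.get_ops()), "ops")
```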
|
Contrastive-Tension/BERT-Base-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null | This model is developed with transformers v4.9.1. The metrics below are MNLI matched (m) and mismatched (mm) accuracies.
```
m = 0.8444
eval_samples = 9815
mm = 0.8495
eval_samples = 9832
```
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=bert-mnli
NEPOCH=3
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR
python run_glue.py \
--model_name_or_path bert-base-uncased \
--task_name mnli \
--max_seq_length 128 \
--do_train \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs $NEPOCH \
--logging_steps 1 \
--evaluation_strategy steps \
--save_steps 3000 \
--do_eval \
--per_device_eval_batch_size 128 \
--eval_steps 250 \
--overwrite_output_dir \
--output_dir $OUTDIR
```
# Eval
```bash
export CUDA_VISIBLE_DEVICES=0
OUTDIR=eval-bert-mnli
WORKDIR=transformers/examples/pytorch/text-classification
cd $WORKDIR
mkdir $OUTDIR
nohup python run_glue.py \
--model_name_or_path vuiseng9/bert-mnli \
--task_name mnli \
--do_eval \
--per_device_eval_batch_size 128 \
--max_seq_length 128 \
--overwrite_output_dir \
--output_dir $OUTDIR 2>&1 | tee $OUTDIR/run.log &
```
|
Contrastive-Tension/BERT-Base-Swe-CT-STSb | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 126 | null | This model is developed with transformers v4.13 with a minor patch in this [fork](https://github.com/vuiseng9/transformers/tree/pegasus-v4p13).
# Setup
```bash
git clone https://github.com/vuiseng9/transformers
cd transformers
git checkout pegasus-v4p13 && git reset --hard 41eeb07
# installation, set summarization dependency
# . . .
```
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
NEPOCH=10
RUNID=pegasus-arxiv-${NEPOCH}eph-run1
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus-ft/${RUNID}
mkdir -p $OUTDIR
python run_summarization.py \
--model_name_or_path google/pegasus-large \
--dataset_name ccdv/arxiv-summarization \
--do_train \
--adafactor \
--learning_rate 8e-4 \
--label_smoothing_factor 0.1 \
--num_train_epochs $NEPOCH \
--per_device_train_batch_size 2 \
--do_eval \
--per_device_eval_batch_size 2 \
--num_beams 8 \
--max_source_length 1024 \
--max_target_length 256 \
--evaluation_strategy steps \
--eval_steps 10000 \
--save_strategy steps \
--save_steps 5000 \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1 &
```
# Eval
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=3
DT=$(date +%F_%H-%M)
RUNID=pegasus-arxiv-${DT}
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus-eval/${RUNID}
mkdir -p $OUTDIR
python run_summarization.py \
--model_name_or_path vuiseng9/pegasus-arxiv \
--dataset_name ccdv/arxiv-summarization \
--max_source_length 1024 \
--max_target_length 256 \
--do_predict \
--per_device_eval_batch_size 8 \
--predict_with_generate \
--num_beams 8 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1 &
```
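For a quick qualitative check outside the eval harness, a minimal generation sketch (the article text is a placeholder; decoding settings mirror ```--num_beams 8``` and ```--max_target_length 256``` above):
```python
# Summarize one article with the fine-tuned checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("vuiseng9/pegasus-arxiv")
model = AutoModelForSeq2SeqLM.from_pretrained("vuiseng9/pegasus-arxiv")
article = "We study block pruning and quantization for transformer models ..."  # placeholder
batch = tok(article, truncation=True, max_length=1024, return_tensors="pt")
summary_ids = model.generate(**batch, num_beams=8, max_length=256)
print(tok.decode(summary_ids[0], skip_special_tokens=True))
```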
Although fine-tuning is carried out for 10 epochs, this model is the checkpoint (@150000 steps, epoch 5.91, 34 hrs) with the lowest eval loss during training. Testing/predicting with this checkpoint should give the results below. Note that we observe the model at 80000 steps is close to the published result from HF.
```
***** predict metrics *****
predict_gen_len = 210.0925
predict_loss = 1.7192
predict_rouge1 = 46.1383
predict_rouge2 = 19.1393
predict_rougeL = 27.7573
predict_rougeLsum = 41.583
predict_runtime = 2:40:25.86
predict_samples = 6440
predict_samples_per_second = 0.669
predict_steps_per_second = 0.084
``` |
Contrastive-Tension/BERT-Distil-CT-STSb | [
"pytorch",
"tf",
"distilbert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"DistilBertModel"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | This model is developed with transformers v4.13 with a minor patch in this [fork](https://github.com/vuiseng9/transformers/tree/pegasus-v4p13).
# Setup
```bash
git clone https://github.com/vuiseng9/transformers
cd transformers
git checkout pegasus-v4p13 && git reset --hard 41eeb07
# installation, set summarization dependency
# . . .
```
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0,1,2,3
NEPOCH=10
RUNID=pegasus-billsum-${NEPOCH}eph-run1
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus/${RUNID}
mkdir -p $OUTDIR
nohup python run_summarization.py \
--model_name_or_path google/pegasus-large \
--dataset_name billsum \
--do_train \
--adafactor \
--learning_rate 2e-4 \
--label_smoothing_factor 0.1 \
--num_train_epochs $NEPOCH \
--per_device_train_batch_size 2 \
--do_eval \
--per_device_eval_batch_size 2 \
--num_beams 8 \
--max_source_length 1024 \
--max_target_length 256 \
--evaluation_strategy steps \
--eval_steps 1000 \
--save_strategy steps \
--save_steps 2000 \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1 &
```
# Eval
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=3
DT=$(date +%F_%H-%M)
RUNID=pegasus-billsum-${DT}
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus-test/${RUNID}
mkdir -p $OUTDIR
nohup python run_summarization.py \
--model_name_or_path vuiseng9/pegasus-billsum \
--dataset_name billsum \
--max_source_length 1024 \
--max_target_length 256 \
--do_predict \
--per_device_eval_batch_size 8 \
--predict_with_generate \
--num_beams 8 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1 &
```
Although fine-tuning is carried out for 10 epochs, this model is the checkpoint (@12000 steps, epoch 6.6, 210 mins) with the lowest eval loss during training. Testing/predicting with this checkpoint should give the results below.
```
***** predict metrics *****
predict_gen_len = 179.7363
predict_loss = 1.2452
predict_rouge1 = 56.8657
predict_rouge2 = 38.6531
predict_rougeL = 44.8399
predict_rougeLsum = 51.6266
predict_runtime = 1:19:28.20
predict_samples = 3269
predict_samples_per_second = 0.686
predict_steps_per_second = 0.086
``` |
Contrastive-Tension/BERT-Distil-NLI-CT | [
"pytorch",
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | This model is developed with transformers v4.13 with a minor patch in this [fork](https://github.com/vuiseng9/transformers/tree/pegasus-v4p13).
# Setup
```bash
git clone https://github.com/vuiseng9/transformers
cd transformers
git checkout pegasus-v4p13 && git reset --hard 3db4b452
# installation, set summarization dependency
# . . .
```
# Train
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=0,1 # 2 cards on xsum
NEPOCH=10
RUNID=pegasus-xsum-${NEPOCH}eph-run1
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus/${RUNID}
mkdir -p $OUTDIR
nohup python run_summarization.py \
--model_name_or_path google/pegasus-large \
--dataset_name xsum \
--do_train \
--adafactor \
--learning_rate 1e-4 \
--label_smoothing_factor 0.1 \
--num_train_epochs $NEPOCH \
--per_device_train_batch_size 8 \
--do_eval \
--per_device_eval_batch_size 8 \
--num_beams 8 \
--max_source_length 512 \
--max_target_length 64 \
--evaluation_strategy steps \
--eval_steps 1000 \
--save_strategy steps \
--save_steps 2000 \
--logging_steps 1 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1
```
# Eval
```bash
#!/usr/bin/env bash
export CUDA_VISIBLE_DEVICES=3
DT=$(date +%F_%H-%M)
RUNID=pegasus-xsum-${DT}
OUTDIR=/data1/vchua/pegasus-hf4p13/pegasus-test/${RUNID}
mkdir -p $OUTDIR
nohup python run_summarization.py \
--model_name_or_path vuiseng9/pegasus-xsum \
--dataset_name xsum \
--max_source_length 512 \
--max_target_length 64 \
--do_predict \
--per_device_eval_batch_size 16 \
--predict_with_generate \
--num_beams 8 \
--overwrite_output_dir \
--run_name $RUNID \
--output_dir $OUTDIR > $OUTDIR/run.log 2>&1 &
```
Although fine-tuning is carried out for 10 epochs, this model is the checkpoint (@62000 steps, epoch 4.9, 20 hrs) with the lowest loss during training. Testing/predicting with this checkpoint should give the results below.
```
***** predict metrics *****
predict_gen_len = 24.0499
predict_loss = 1.5801
predict_rouge1 = 47.2124
predict_rouge2 = 24.3673
predict_rougeL = 39.0055
predict_rougeLsum = 39.0007
predict_runtime = 0:34:23.32
predict_samples = 11334
predict_samples_per_second = 5.493
predict_steps_per_second = 0.344
``` |
Contrastive-Tension/BERT-Large-CT-STSb | [
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
license: apache-2.0
---
# Wav2Vec2-Base-100h
This is a fork of [```facebook/wav2vec2-base-100h```](https://huggingface.co/facebook/wav2vec2-base-100h)
### Changes & Notes
1. Documents reproducible evaluation (below) against newer ```transformers``` and ```datasets``` versions.
2. Use a batch size of 1 to reproduce the results.
3. Validated with ```transformers v4.15.0``` and ```datasets 1.18.0```.
4. You may need to manually install the Python packages ```librosa``` and ```jiwer```.
## Evaluation
This code snippet shows how to evaluate **facebook/wav2vec2-base-100h** on LibriSpeech's "clean" and "other" test data.
```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import soundfile as sf
import torch
from jiwer import wer
librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")
# librispeech_eval = load_dataset("librispeech_asr", "other", split="test")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-100h")
def map_to_array(batch):
    # speech, _ = sf.read(batch["file"])
    # batch["speech"] = speech
    batch["speech"] = batch['audio']['array']
    return batch

librispeech_eval = librispeech_eval.map(map_to_array)

def map_to_pred(batch):
    input_values = processor(batch["speech"], return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["speech"])
print("WER:", wer(result["text"], result["transcription"]))
```
*Result (WER)*:
| "clean/test" | "other/test" |
|--------------| ------------|
| 6.1 | 13.5 |
|
Contrastive-Tension/BERT-Large-NLI-CT | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ## TrOCR (small-sized model, fine-tuned on Synthetic Math Expression Dataset)
TrOCR model fine-tuned on the Synthetic Math Expression Dataset. It was introduced in the paper [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Li et al. and first released in [this repository](https://github.com/microsoft/unilm/tree/master/trocr).
Disclaimer: The team releasing TrOCR did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The TrOCR model is an encoder-decoder model, consisting of an image Transformer as encoder, and a text Transformer as decoder. The image encoder was initialized from the weights of BEiT, while the text decoder was initialized from the weights of RoBERTa.
Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Next, the Transformer text decoder autoregressively generates tokens.
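As a quick sanity check on the sequence length described above — assuming the 384x384 input resolution commonly used by TrOCR checkpoints (an assumption; check the checkpoint's feature extractor config) — the patch count works out as follows:
```python
# Number of patch embeddings the image encoder receives.
image_size = 384   # assumed input resolution
patch_size = 16    # stated patch resolution
num_patches = (image_size // patch_size) ** 2  # 24 * 24
print(num_patches)  # 576 patches are linearly embedded and fed to the encoder
```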
## Intended uses & limitations
You can use the raw model for optical character recognition (OCR) on single text-line images. See the model hub to look for fine-tuned versions on a task that interests you.
## How to use
Here is how to use this model in PyTorch:
```python
from transformers import VisionEncoderDecoderModel, AutoFeatureExtractor, AutoTokenizer
from PIL import Image
import requests
# load an image of a math expression (the checkpoint is fine-tuned on math expressions, not IAM)
url = 'https://drive.google.com/uc?export=view&id=15dUjO44YDe1Agw_Qi8MyODRHpUFaCFw-'
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
feature_extractor = AutoFeatureExtractor.from_pretrained('vukpetar/trocr-small-photomath')
tokenizer = AutoTokenizer.from_pretrained("vukpetar/trocr-small-photomath")
model = VisionEncoderDecoderModel.from_pretrained('vukpetar/trocr-small-photomath')
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(generated_text)
```
## BibTeX entry and citation info
```bibtex
@misc{li2021trocr,
      title={TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models},
      author={Minghao Li and Tengchao Lv and Lei Cui and Yijuan Lu and Dinei Florencio and Cha Zhang and Zhoujun Li and Furu Wei},
      year={2021},
      eprint={2109.10282},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
``` |
Contrastive-Tension/RoBerta-Large-CT-STSb | [
"pytorch",
"tf",
"jax",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language:
- ja
license: apache-2.0
tags:
- audio
- automatic-speech-recognition
- speech
datasets:
- Japanese accent datasets
metrics:
- wer
# Optional. Add this if you want to encode your eval results in a structured way.
model-index:
- name: Wav2vec2 Accent Japanese
results:
- task:
type: automatic-speech-recognition # Required. Example: automatic-speech-recognition
name: Speech Recognition # Optional. Example: Speech Recognition
dataset:
type: accent_voice
name: Japanese accent datasets
args: ja
metrics:
- type: wer # Required.
value: 15.82 # Required.
name: Test WER
---
# Wav2Vec2 Accent Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on a Japanese accent dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
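This card does not include a usage snippet, so here is a minimal sketch using the standard Wav2Vec2 CTC API; the repository id below is a placeholder, not stated in this card — substitute the actual fine-tuned checkpoint:
```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_name = "<fine-tuned-checkpoint>"  # placeholder repo id
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name)

# resample the audio to the 16kHz rate the model expects
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```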
## Test Result
WER: 15.82% |
Cooker/cicero-similis | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: ja
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Japanese Hiragana by Chien Vu
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Japanese
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 24.74
- name: Test CER
type: cer
value: 10.99
---
# Wav2Vec2-Large-XLSR-53-Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice) and Japanese speech corpus of Saruwatari-lab, University of Tokyo [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
!pip install mecab-python3
!pip install unidic-lite
!pip install pykakasi
!python -m unidic download
import torch
import torchaudio
import librosa
from datasets import load_dataset
import MeCab
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import pykakasi
# config
wakati = MeCab.Tagger("-Owakati")
chars_to_ignore_regex = '[\,\、\。\．\「\」\…\？\・]'
kakasi = pykakasi.kakasi()
kakasi.setMode("J","H")
kakasi.setMode("K","H")
kakasi.setMode("r","Hepburn")
conv = kakasi.getConverter()
# load data, processor and model
test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese-hiragana")
model = Wav2Vec2ForCTC.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese-hiragana")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
def speech_file_to_array_fn(batch):
batch["sentence"] = conv.do(wakati.parse(batch["sentence"]).strip())
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Japanese test data of Common Voice.
```python
!pip install mecab-python3
!pip install unidic-lite
!pip install pykakasi
!python -m unidic download
import torch
import librosa
import torchaudio
from datasets import load_dataset, load_metric
import MeCab
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import pykakasi
#config
wakati = MeCab.Tagger("-Owakati")
chars_to_ignore_regex = '[\,\、\。\．\「\」\…\？\・]'
kakasi = pykakasi.kakasi()
kakasi.setMode("J","H")
kakasi.setMode("K","H")
kakasi.setMode("r","Hepburn")
conv = kakasi.getConverter()
# load data, processor and model
test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")
cer = load_metric("cer")
processor = Wav2Vec2Processor.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese-hiragana")
model = Wav2Vec2ForCTC.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese-hiragana")
model.to("cuda")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
def speech_file_to_array_fn(batch):
batch["sentence"] = conv.do(wakati.parse(batch["sentence"]).strip())
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# evaluate function
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
print("CER: {:2f}".format(100 * cer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
## Test Result
**WER:** 24.74%,
**CER:** 10.99%
## Training
The Common Voice `train`, `validation` datasets and Japanese speech corpus datasets were used for training. |
Cool/Demo | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: ja
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Japanese by Chien Vu
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Japanese
type: common_voice
args: ja
metrics:
- name: Test WER
type: wer
value: 30.84
- name: Test CER
type: cer
value: 17.85
widget:
- example_title: Japanese speech corpus sample 1
src: https://u.pcloud.link/publink/show?code=XZwhAlXZFOtXiqKHMzmYS9wXrCP8Yb7EtRd7
- example_title: Japanese speech corpus sample 2
src: https://u.pcloud.link/publink/show?code=XZ6hAlXZ5ccULt0YtrhJFl7LygKg0SJzKX0k
---
# Wav2Vec2-Large-XLSR-53-Japanese
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice) and Japanese speech corpus of Saruwatari-lab, University of Tokyo [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
!pip install mecab-python3
!pip install unidic-lite
!python -m unidic download
import torch
import torchaudio
import librosa
from datasets import load_dataset
import MeCab
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
# config
wakati = MeCab.Tagger("-Owakati")
chars_to_ignore_regex = '[\,\、\。\．\「\」\…\？\・]'
# load data, processor and model
test_dataset = load_dataset("common_voice", "ja", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
model = Wav2Vec2ForCTC.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
def speech_file_to_array_fn(batch):
batch["sentence"] = wakati.parse(batch["sentence"]).strip()
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Japanese test data of Common Voice.
```python
!pip install mecab-python3
!pip install unidic-lite
!python -m unidic download
import torch
import librosa
import torchaudio
from datasets import load_dataset, load_metric
import MeCab
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
#config
wakati = MeCab.Tagger("-Owakati")
chars_to_ignore_regex = '[\,\、\。\．\「\」\…\？\・]'
# load data, processor and model
test_dataset = load_dataset("common_voice", "ja", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
model = Wav2Vec2ForCTC.from_pretrained("vumichien/wav2vec2-large-xlsr-japanese")
model.to("cuda")
resampler = lambda sr, y: librosa.resample(y.numpy().squeeze(), sr, 16_000)
# Preprocessing the datasets.
def speech_file_to_array_fn(batch):
batch["sentence"] = wakati.parse(batch["sentence"]).strip()
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# evaluate function
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
## Test Result
**WER:** 30.84%,
**CER:** 17.85%
## Training
The Common Voice `train`, `validation` datasets and Japanese speech corpus `basic5000` datasets were used for training.
|
Coolhand/Abuela | [
"en",
"image_restoration",
"superresolution",
"license:mit"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
language:
- ja
tags:
- automatic-speech-recognition
- common-voice
- hf-asr-leaderboard
- ja
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wav2vec2-xls-r-1b
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 7.98
- name: Test CER (with LM)
type: cer
value: 3.42
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 7.88
- name: Test CER (with LM)
type: cer
value: 3.35
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 28.07
- name: Test CER (with LM)
type: cer
value: 16.27
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ja
metrics:
- name: Test CER
type: cer
value: 19.89
---
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on a collection of public Japanese voice datasets for research: [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0), [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut) (Japanese speech corpus of Saruwatari-lab., University of Tokyo), [JSSS](https://sites.google.com/site/shinnosuketakamichi/research-topics/jsss_corpus) (Japanese speech corpus for summarization and simplification), and [CSS10](https://paperswithcode.com/dataset/css10) (a collection of single-speaker speech datasets). The preprocessed dataset is available at VUMICHIEN/COMMON_VOICE_LARGE_JSUT_JSSS_CSS10.
### Total training data:
~60 hours
### Benchmark WER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0)
|---|---|---|
|without LM| 10.96 | 10.91 |
|with 4-grams LM| 7.98 | 7.88 |
### Benchmark CER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0)
|---|---|---|
|without LM| 4.28 | 4.22 |
|with 4-grams LM| 3.42 | 3.35 |
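The "with LM" rows use beam-search decoding over a 4-gram language model. Below is a minimal sketch of LM-boosted inference, assuming the repository ships the `pyctcdecode`-compatible decoder files that `Wav2Vec2ProcessorWithLM` expects:
```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

model_id = "vumichien/wav2vec2-xls-r-1b-japanese"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
# beam search with the n-gram LM runs over the raw (numpy) logits,
# instead of the plain argmax decoding used for the "without LM" rows
transcription = processor.batch_decode(logits.numpy()).text
print(transcription)
```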
## Evaluation
Please use the eval.py file to run the evaluation:
```python
pip install mecab-python3 unidic-lite pykakasi
python eval.py --model_id vumichien/wav2vec2-xls-r-1b-japanese --dataset mozilla-foundation/common_voice_7_0 --config ja --split test --chunk_length_s 5.0 --stride_length_s 1.0 --log_outputs
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 2.2896 | 3.37 | 1500 | 0.4748 | 0.4013 | 0.1767 |
| 1.1608 | 6.74 | 3000 | 0.3350 | 0.3159 | 0.1456 |
| 1.1042 | 10.11 | 4500 | 0.3119 | 0.2971 | 0.1400 |
| 1.0494 | 13.48 | 6000 | 0.2974 | 0.2867 | 0.1353 |
| 1.0061 | 16.85 | 7500 | 0.2802 | 0.2746 | 0.1300 |
| 0.9629 | 20.22 | 9000 | 0.2844 | 0.2776 | 0.1326 |
| 0.9267 | 23.59 | 10500 | 0.2577 | 0.2603 | 0.1255 |
| 0.8984 | 26.96 | 12000 | 0.2508 | 0.2531 | 0.1226 |
| 0.8729 | 30.34 | 13500 | 0.2629 | 0.2606 | 0.1254 |
| 0.8546 | 33.71 | 15000 | 0.2402 | 0.2447 | 0.1193 |
| 0.8304 | 37.08 | 16500 | 0.2532 | 0.2472 | 0.1209 |
| 0.8075 | 40.45 | 18000 | 0.2439 | 0.2469 | 0.1198 |
| 0.7827 | 43.82 | 19500 | 0.2387 | 0.2372 | 0.1167 |
| 0.7627 | 47.19 | 21000 | 0.2344 | 0.2331 | 0.1147 |
| 0.7402 | 50.56 | 22500 | 0.2314 | 0.2299 | 0.1135 |
| 0.718 | 53.93 | 24000 | 0.2257 | 0.2267 | 0.1114 |
| 0.7016 | 57.3 | 25500 | 0.2204 | 0.2184 | 0.1089 |
| 0.6804 | 60.67 | 27000 | 0.2227 | 0.2181 | 0.1085 |
| 0.6625 | 64.04 | 28500 | 0.2138 | 0.2112 | 0.1058 |
| 0.6465 | 67.42 | 30000 | 0.2141 | 0.2081 | 0.1044 |
| 0.6238 | 70.79 | 31500 | 0.2172 | 0.2082 | 0.1050 |
| 0.6062 | 74.16 | 33000 | 0.2174 | 0.2058 | 0.1043 |
| 0.588 | 77.53 | 34500 | 0.2156 | 0.2034 | 0.1027 |
| 0.5722 | 80.9 | 36000 | 0.2162 | 0.2032 | 0.1029 |
| 0.5585 | 84.27 | 37500 | 0.2156 | 0.2022 | 0.1021 |
| 0.5456 | 87.64 | 39000 | 0.2126 | 0.1993 | 0.1009 |
| 0.5325 | 91.01 | 40500 | 0.2121 | 0.1966 | 0.1003 |
| 0.5229 | 94.38 | 42000 | 0.2104 | 0.1941 | 0.0991 |
| 0.5134 | 97.75 | 43500 | 0.2108 | 0.1948 | 0.0992 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
|
Coolhand/Sentiment | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
language:
- ja
tags:
- automatic-speech-recognition
- common-voice
- hf-asr-leaderboard
- ja
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xlsr-53-ja
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 15.37
- name: Test CER (with LM)
type: cer
value: 6.91
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 16.09
- name: Test CER (with LM)
type: cer
value: 7.15
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 37.96
- name: Test CER (with LM)
type: cer
value: 21.11
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ja
metrics:
- name: Test CER
type: cer
value: 26.02
---
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset.
### Benchmark WER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0)
|---|---|---|
|without LM| 15.74 | 25.10 |
|with 4-grams LM| 15.37 | 16.09 |
### Benchmark CER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0)
|---|---|---|
|without LM| 9.51 | 9.95 |
|with 4-grams LM| 6.91 | 7.15 |
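For quick transcription without writing the decoding loop by hand, the high-level `pipeline` API is a minimal alternative (a sketch; this uses greedy decoding, i.e. the "without LM" setting):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="vutankiet2901/wav2vec2-large-xlsr-53-ja",
)
# the pipeline decodes the file and resamples it to 16kHz before inference
print(asr("sample.wav"))
```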
## Evaluation
Please use the eval.py file to run the evaluation:
```python
python eval.py --model_id vutankiet2901/wav2vec2-large-xlsr-53-ja --dataset mozilla-foundation/common_voice_7_0 --config ja --split test --log_outputs
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 4.7776 | 4.73 | 1500 | 2.9540 | 0.9772 | 0.8489 |
| 1.9076 | 9.46 | 3000 | 0.7146 | 0.5371 | 0.2484 |
| 1.507 | 14.2 | 4500 | 0.5843 | 0.4689 | 0.2196 |
| 1.3742 | 18.93 | 6000 | 0.5286 | 0.4321 | 0.1988 |
| 1.2776 | 23.66 | 7500 | 0.5007 | 0.4056 | 0.1870 |
| 1.2003 | 28.39 | 9000 | 0.4676 | 0.3848 | 0.1802 |
| 1.1281 | 33.12 | 10500 | 0.4524 | 0.3694 | 0.1720 |
| 1.0657 | 37.85 | 12000 | 0.4449 | 0.3590 | 0.1681 |
| 1.0129 | 42.59 | 13500 | 0.4266 | 0.3423 | 0.1617 |
| 0.9691 | 47.32 | 15000 | 0.4214 | 0.3375 | 0.1587 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CopymySkill/DialoGPT-medium-atakan | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
language:
- ja
tags:
- automatic-speech-recognition
- common-voice
- hf-asr-leaderboard
- ja
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-xls-r-1b
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 7.0
type: mozilla-foundation/common_voice_7_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 11.77
- name: Test CER (with LM)
type: cer
value: 5.22
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 8.0
type: mozilla-foundation/common_voice_8_0
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 12.23
- name: Test CER (with LM)
type: cer
value: 5.33
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: ja
metrics:
- name: Test WER (with LM)
type: wer
value: 29.35
- name: Test CER (with LM)
type: cer
value: 16.43
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Test Data
type: speech-recognition-community-v2/eval_data
args: ja
metrics:
- name: Test CER
type: cer
value: 19.48
---
## Model description
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - JA dataset.
### Benchmark WER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0)
|---|---|---|
|without LM| 16.97 | 17.95 |
|with 4-grams LM| 11.77 | 12.23 |
### Benchmark CER result:
| | [COMMON VOICE 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) | [COMMON VOICE 8.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0)
|---|---|---|
|without LM| 6.82 | 7.05 |
|with 4-grams LM| 5.22 | 5.33 |
## Evaluation
Please use the eval.py file to run the evaluation:
```python
pip install mecab-python3 unidic-lite pykakasi
python eval.py --model_id vutankiet2901/wav2vec2-xls-r-1b-ja --dataset mozilla-foundation/common_voice_8_0 --config ja --split test --log_outputs
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 3.484 | 9.49 | 1500 | 1.1849 | 0.7543 | 0.4099 |
| 1.3582 | 18.98 | 3000 | 0.4320 | 0.3489 | 0.1591 |
| 1.1716 | 28.48 | 4500 | 0.3835 | 0.3175 | 0.1454 |
| 1.0951 | 37.97 | 6000 | 0.3732 | 0.3033 | 0.1405 |
| 1.04 | 47.47 | 7500 | 0.3485 | 0.2898 | 0.1360 |
| 0.9768 | 56.96 | 9000 | 0.3386 | 0.2787 | 0.1309 |
| 0.9129 | 66.45 | 10500 | 0.3363 | 0.2711 | 0.1272 |
| 0.8614 | 75.94 | 12000 | 0.3386 | 0.2676 | 0.1260 |
| 0.8092 | 85.44 | 13500 | 0.3356 | 0.2610 | 0.1240 |
| 0.7658 | 94.93 | 15000 | 0.3316 | 0.2564 | 0.1218 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Corvus/DialoGPT-medium-CaptainPrice-Extended | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | Fine-Tuned MarianMT translation model for translating text from English to Dutch. Checkpoint of pre-trained model = Helsinki-NLP/opus-mt-en-nl.
Trained using custom training loop with PyTorch on Colab for 2 epochs. Link to the GitHub repo containing Google Colab notebook: https://github.com/vanadnarayane26/Maverick_2.0_Translation_layer/blob/main/Eng_to_dutch_marianmt.ipynb
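A minimal usage sketch follows; it is shown with the base Helsinki-NLP/opus-mt-en-nl checkpoint because the fine-tuned repository id is not stated in this card — substitute the fine-tuned checkpoint path where applicable:
```python
from transformers import MarianMTModel, MarianTokenizer

checkpoint = "Helsinki-NLP/opus-mt-en-nl"  # base checkpoint; swap in the fine-tuned repo id
tokenizer = MarianTokenizer.from_pretrained(checkpoint)
model = MarianMTModel.from_pretrained(checkpoint)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```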
|
Corvus/DialoGPT-medium-CaptainPrice | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | Fine-Tuned MarianMT translation model for translating text from English to Italian.
Checkpoint of pre-trained model = Helsinki-NLP/opus-mt-en-it.
Trained using custom training loop with PyTorch on Colab for 2 epochs.
Link to the GitHub repo containing Google Colab notebook: https://github.com/vanadnarayane26/Maverick_2.0_Translation_layer/blob/main/En_to_it_marianmt.ipynb |
CouchCat/ma_ner_v6_distil | [
"pytorch",
"distilbert",
"token-classification",
"en",
"transformers",
"ner",
"license:mit",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-no_paragraph-to-paragraph
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-no_paragraph-to-paragraph
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0713
- Bleu: 0.0
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
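For reference, these settings correspond roughly to the following `Seq2SeqTrainingArguments` — a sketch only, since the exact training script is not included in this card:
```python
from transformers import Seq2SeqTrainingArguments

# approximate reconstruction of the hyperparameters listed above
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-no_paragraph-to-paragraph",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```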
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| 0.767 | 1.0 | 576 | 0.0713 | 0.0 | 19.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
CouchCat/ma_ner_v7_distil | [
"pytorch",
"distilbert",
"token-classification",
"en",
"transformers",
"ner",
"license:mit",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-no_paragraph-to-yes_paragraph-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-no_paragraph-to-yes_paragraph-2
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Bleu: 0.0
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 0.006 | 1.0 | 8081 | 0.0002 | 0.0 | 19.0 |
| 0.0032 | 2.0 | 16162 | 0.0001 | 0.0 | 19.0 |
| 0.0026 | 3.0 | 24243 | 0.0001 | 0.0 | 19.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Craig/paraphrase-MiniLM-L6-v2 | [
"pytorch",
"bert",
"arxiv:1908.10084",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"transformers",
"license:apache-2.0"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,026 | 2021-07-10T13:36:22Z | ---
language: id
tags:
- indonesian-roberta-base-sentiment-classifier
license: mit
datasets:
- indonlu
widget:
- text: "Jangan sampai saya telpon bos saya ya!"
---
## Indonesian RoBERTa Base Sentiment Classifier
Indonesian RoBERTa Base Sentiment Classifier is a sentiment-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Indonesian RoBERTa Base](https://hf.co/flax-community/indonesian-roberta-base) model, which is then fine-tuned on [`indonlu`](https://hf.co/datasets/indonlu)'s `SmSA` dataset consisting of Indonesian comments and reviews.
After training, the model achieved an evaluation accuracy of 94.36% and F1-macro of 92.42%. On the benchmark test set, the model achieved an accuracy of 93.2% and F1-macro of 91.02%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ---------------------------------------------- | ------- | ------------ | ------------------------------- |
| `indonesian-roberta-base-sentiment-classifier` | 124M | RoBERTa Base | `SmSA` |
## Evaluation Results
The model was trained for 5 epochs and the best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1 | 0.342600 | 0.213551 | 0.928571 | 0.898539 | 0.909803 | 0.890694 |
| 2 | 0.190700 | 0.213466 | 0.934127 | 0.901135 | 0.925297 | 0.882757 |
| 3 | 0.125500 | 0.219539 | 0.942857 | 0.920901 | 0.927511 | 0.915193 |
| 4 | 0.083600 | 0.235232 | 0.943651 | 0.924227 | 0.926494 | 0.922048 |
| 5 | 0.059200 | 0.262473 | 0.942063 | 0.920583 | 0.924084 | 0.917351 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/indonesian-roberta-base-sentiment-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Jangan sampai saya telpon bos saya ya!")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `SmSA` dataset that may be carried over into the results of this model.
## Author
Indonesian RoBERTa Base Sentiment Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation
If used, please cite the following:
```bibtex
@misc {wilson_wongso_2023,
author = { {Wilson Wongso} },
title = { indonesian-roberta-base-sentiment-classifier (Revision e402e46) },
year = 2023,
url = { https://huggingface.co/w11wo/indonesian-roberta-base-sentiment-classifier },
doi = { 10.57967/hf/0644 },
publisher = { Hugging Face }
}
``` |
Culmenus/IceBERT-finetuned-ner | [
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"dataset:mim_gold_ner",
"transformers",
"generated_from_trainer",
"license:gpl-3.0",
"model-index",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
language: lo
tags:
- lao-roberta-base-pos-tagger
license: mit
widget:
- text: "เบฎเปเบญเบ เบกเปเบงเบ เปเบเป เบชเบฝเบเบเบต เบญเบดเบซเบผเบต"
---
## Lao RoBERTa Base POS Tagger
Lao RoBERTa Base POS Tagger is a part-of-speech token-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Lao RoBERTa Base](https://huggingface.co/w11wo/lao-roberta-base) model, which is then fine-tuned on the [`Yunshan Cup 2020`](https://github.com/GKLMIP/Yunshan-Cup-2020) dataset consisting of tag-labelled Lao corpus.
After training, the model achieved an evaluation accuracy of 83.14%. On the benchmark test set, the model achieved an accuracy of 83.30%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ----------------------------- | ------- | ------------ | ------------------------------- |
| `lao-roberta-base-pos-tagger` | 124M | RoBERTa Base | `Yunshan Cup 2020` |
## Evaluation Results
The model was trained for 15 epochs with a batch size of 8 and a learning rate of 5e-5, annealed to 0 on a cosine schedule. The best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy |
| ----- | ------------- | --------------- | -------- |
| 1 | 1.026100 | 0.733780 | 0.746021 |
| 2 | 0.646900 | 0.659625 | 0.775688 |
| 3 | 0.500400 | 0.576214 | 0.798523 |
| 4 | 0.385400 | 0.606503 | 0.805269 |
| 5 | 0.288000 | 0.652493 | 0.809092 |
| 6 | 0.204600 | 0.671678 | 0.815216 |
| 7 | 0.145200 | 0.704693 | 0.818209 |
| 8 | 0.098700 | 0.830561 | 0.816998 |
| 9 | 0.066100 | 0.883329 | 0.825232 |
| 10 | 0.043900 | 0.933347 | 0.825664 |
| 11 | 0.027200 | 0.992055 | 0.828449 |
| 12 | 0.017300 | 1.054874 | 0.830819 |
| 13 | 0.011500 | 1.081638 | 0.830940 |
| 14 | 0.008500 | 1.094252 | 0.831304 |
| 15 | 0.007400 | 1.097428 | 0.831442 |
## How to Use
### As Token Classifier
```python
from transformers import pipeline
pretrained_name = "w11wo/lao-roberta-base-pos-tagger"
nlp = pipeline(
"token-classification",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("เบฎเปเบญเบ เบกเปเบงเบ เปเบเป เบชเบฝเบเบเบต เบญเบดเบซเบผเบต")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the `Yunshan Cup 2020` dataset that may be carried over into the results of this model.
## Author
Lao RoBERTa Base POS Tagger was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
|
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc_2 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: su
tags:
- sundanese-roberta-base-emotion-classifier
license: mit
widget:
- text: "Wah, รฉta gรฉlo, keren pisan!"
---
## Sundanese RoBERTa Base Emotion Classifier
Sundanese RoBERTa Base Emotion Classifier is an emotion-text-classification model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. The model was originally the pre-trained [Sundanese RoBERTa Base](https://hf.co/w11wo/sundanese-roberta-base) model, which is then fine-tuned on the [Sundanese Twitter dataset](https://github.com/virgantara/sundanese-twitter-dataset), consisting of Sundanese tweets.
10% of the dataset is kept for evaluation purposes. After training, the model achieved an evaluation accuracy of 98.41% and F1-macro of 98.43%.
Hugging Face's `Trainer` class from the [Transformers](https://huggingface.co/transformers) library was used to train the model. PyTorch was used as the backend framework during training, but the model remains compatible with other frameworks nonetheless.
## Model
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------------------------------- | ------- | ------------ | ------------------------------- |
| `sundanese-roberta-base-emotion-classifier` | 124M | RoBERTa Base | Sundanese Twitter dataset |
## Evaluation Results
The model was trained for 10 epochs and the best model was loaded at the end.
| Epoch | Training Loss | Validation Loss | Accuracy | F1 | Precision | Recall |
| ----- | ------------- | --------------- | -------- | -------- | --------- | -------- |
| 1 | 0.801800 | 0.293695 | 0.900794 | 0.899048 | 0.903466 | 0.900406 |
| 2 | 0.208700 | 0.185291 | 0.936508 | 0.935520 | 0.939460 | 0.935540 |
| 3 | 0.089700 | 0.150287 | 0.956349 | 0.956569 | 0.956500 | 0.958612 |
| 4 | 0.025600 | 0.130889 | 0.972222 | 0.972865 | 0.973029 | 0.973184 |
| 5 | 0.002200 | 0.100031 | 0.980159 | 0.980430 | 0.980430 | 0.980430 |
| 6 | 0.001300 | 0.104971 | 0.980159 | 0.980430 | 0.980430 | 0.980430 |
| 7 | 0.000600 | 0.107744 | 0.980159 | 0.980174 | 0.980814 | 0.979743 |
| 8 | 0.000500 | 0.102327 | 0.980159 | 0.980171 | 0.979970 | 0.980430 |
| 9 | 0.000500 | 0.101935 | 0.984127 | 0.984376 | 0.984073 | 0.984741 |
| 10 | 0.000400 | 0.105965 | 0.984127 | 0.984142 | 0.983720 | 0.984741 |
## How to Use
### As Text Classifier
```python
from transformers import pipeline
pretrained_name = "sundanese-roberta-base-emotion-classifier"
nlp = pipeline(
"sentiment-analysis",
model=pretrained_name,
tokenizer=pretrained_name
)
nlp("Wah, รฉta gรฉlo, keren pisan!")
```
## Disclaimer
Do consider the biases which come from both the pre-trained RoBERTa model and the Sundanese Twitter dataset that may be carried over into the results of this model.
## Author
Sundanese RoBERTa Base Emotion Classifier was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/). All computation and development are done on Google Colaboratory using their free GPU access.
## Citation Information
```bibtex
@article{rs-907893,
author = {Wongso, Wilson
and Lucky, Henry
and Suhartono, Derwin},
journal = {Journal of Big Data},
year = {2022},
month = {Feb},
day = {26},
abstract = {The Sundanese language has over 32 million speakers worldwide, but the language has reaped little to no benefits from the recent advances in natural language understanding. Like other low-resource languages, the only alternative is to fine-tune existing multilingual models. In this paper, we pre-trained three monolingual Transformer-based language models on Sundanese data. When evaluated on a downstream text classification task, we found that most of our monolingual models outperformed larger multilingual models despite the smaller overall pre-training data. In the subsequent analyses, our models benefited strongly from the Sundanese pre-training corpus size and do not exhibit socially biased behavior. We released our models for other researchers and practitioners to use.},
issn = {2693-5015},
doi = {10.21203/rs.3.rs-907893/v1},
url = {https://doi.org/10.21203/rs.3.rs-907893/v1}
}
``` |
Daltcamalea01/Camaleaodalt | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
datasets:
- speechcolab/gigaspeech
---
|
Davlan/byt5-base-yor-eng-mt | [
"pytorch",
"t5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null |
---
language:
- de
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-de
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 87.0
- type: accuracy
name: Dutch Test accuracy
value: 89.6
- type: accuracy
name: German Test accuracy
value: 97.2
- type: accuracy
name: Italian Test accuracy
value: 85.6
- type: accuracy
name: French Test accuracy
value: 84.8
- type: accuracy
name: Spanish Test accuracy
value: 88.4
- type: accuracy
name: Russian Test accuracy
value: 89.4
- type: accuracy
name: Swedish Test accuracy
value: 92.3
- type: accuracy
name: Norwegian Test accuracy
value: 87.7
- type: accuracy
name: Danish Test accuracy
value: 88.9
- type: accuracy
name: Low Saxon Test accuracy
value: 44.3
- type: accuracy
name: Akkadian Test accuracy
value: 21.4
- type: accuracy
name: Armenian Test accuracy
value: 85.6
- type: accuracy
name: Welsh Test accuracy
value: 69.0
- type: accuracy
name: Old East Slavic Test accuracy
value: 67.7
- type: accuracy
name: Albanian Test accuracy
value: 84.6
- type: accuracy
name: Slovenian Test accuracy
value: 76.5
- type: accuracy
name: Guajajara Test accuracy
value: 18.1
- type: accuracy
name: Kurmanji Test accuracy
value: 74.1
- type: accuracy
name: Turkish Test accuracy
value: 75.6
- type: accuracy
name: Finnish Test accuracy
value: 83.8
- type: accuracy
name: Indonesian Test accuracy
value: 82.2
- type: accuracy
name: Ukrainian Test accuracy
value: 89.0
- type: accuracy
name: Polish Test accuracy
value: 86.6
- type: accuracy
name: Portuguese Test accuracy
value: 87.8
- type: accuracy
name: Kazakh Test accuracy
value: 80.6
- type: accuracy
name: Latin Test accuracy
value: 75.8
- type: accuracy
name: Old French Test accuracy
value: 36.3
- type: accuracy
name: Buryat Test accuracy
value: 49.8
- type: accuracy
name: Kaapor Test accuracy
value: 11.7
- type: accuracy
name: Korean Test accuracy
value: 61.4
- type: accuracy
name: Estonian Test accuracy
value: 86.6
- type: accuracy
name: Croatian Test accuracy
value: 88.8
- type: accuracy
name: Gothic Test accuracy
value: 8.1
- type: accuracy
name: Swiss German Test accuracy
value: 54.4
- type: accuracy
name: Assyrian Test accuracy
value: 17.2
- type: accuracy
name: North Sami Test accuracy
value: 25.0
- type: accuracy
name: Naija Test accuracy
value: 28.2
- type: accuracy
name: Latvian Test accuracy
value: 83.9
- type: accuracy
name: Chinese Test accuracy
value: 52.6
- type: accuracy
name: Tagalog Test accuracy
value: 72.1
- type: accuracy
name: Bambara Test accuracy
value: 17.5
- type: accuracy
name: Lithuanian Test accuracy
value: 82.6
- type: accuracy
name: Galician Test accuracy
value: 85.2
- type: accuracy
name: Vietnamese Test accuracy
value: 60.8
- type: accuracy
name: Greek Test accuracy
value: 88.7
- type: accuracy
name: Catalan Test accuracy
value: 86.8
- type: accuracy
name: Czech Test accuracy
value: 87.4
- type: accuracy
name: Erzya Test accuracy
value: 33.6
- type: accuracy
name: Bhojpuri Test accuracy
value: 46.5
- type: accuracy
name: Thai Test accuracy
value: 62.4
- type: accuracy
name: Marathi Test accuracy
value: 86.5
- type: accuracy
name: Basque Test accuracy
value: 77.3
- type: accuracy
name: Slovak Test accuracy
value: 87.6
- type: accuracy
name: Kiche Test accuracy
value: 21.6
- type: accuracy
name: Yoruba Test accuracy
value: 16.6
- type: accuracy
name: Warlpiri Test accuracy
value: 21.5
- type: accuracy
name: Tamil Test accuracy
value: 84.2
- type: accuracy
name: Maltese Test accuracy
value: 15.3
- type: accuracy
name: Ancient Greek Test accuracy
value: 62.0
- type: accuracy
name: Icelandic Test accuracy
value: 84.1
- type: accuracy
name: Mbya Guarani Test accuracy
value: 20.5
- type: accuracy
name: Urdu Test accuracy
value: 68.0
- type: accuracy
name: Romanian Test accuracy
value: 83.5
- type: accuracy
name: Persian Test accuracy
value: 76.0
- type: accuracy
name: Apurina Test accuracy
value: 22.2
- type: accuracy
name: Japanese Test accuracy
value: 36.2
- type: accuracy
name: Hungarian Test accuracy
value: 86.7
- type: accuracy
name: Hindi Test accuracy
value: 73.0
- type: accuracy
name: Classical Chinese Test accuracy
value: 28.6
- type: accuracy
name: Komi Permyak Test accuracy
value: 34.9
- type: accuracy
name: Faroese Test accuracy
value: 76.6
- type: accuracy
name: Sanskrit Test accuracy
value: 9.4
- type: accuracy
name: Livvi Test accuracy
value: 50.9
- type: accuracy
name: Arabic Test accuracy
value: 79.4
- type: accuracy
name: Wolof Test accuracy
value: 21.1
- type: accuracy
name: Bulgarian Test accuracy
value: 91.1
- type: accuracy
name: Akuntsu Test accuracy
value: 14.4
- type: accuracy
name: Makurap Test accuracy
value: 1.4
- type: accuracy
name: Kangri Test accuracy
value: 40.5
- type: accuracy
name: Breton Test accuracy
value: 60.0
- type: accuracy
name: Telugu Test accuracy
value: 83.2
- type: accuracy
name: Cantonese Test accuracy
value: 48.9
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 38.7
- type: accuracy
name: Karelian Test accuracy
value: 64.4
- type: accuracy
name: Upper Sorbian Test accuracy
value: 65.5
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 66.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 28.4
- type: accuracy
name: Irish Test accuracy
value: 66.3
- type: accuracy
name: Nayini Test accuracy
value: 44.9
- type: accuracy
name: Munduruku Test accuracy
value: 8.0
- type: accuracy
name: Manx Test accuracy
value: 20.6
- type: accuracy
name: Skolt Sami Test accuracy
value: 25.8
- type: accuracy
name: Afrikaans Test accuracy
value: 88.9
- type: accuracy
name: Old Turkish Test accuracy
value: 31.7
- type: accuracy
name: Tupinamba Test accuracy
value: 20.9
- type: accuracy
name: Belarusian Test accuracy
value: 89.5
- type: accuracy
name: Serbian Test accuracy
value: 89.8
- type: accuracy
name: Moksha Test accuracy
value: 31.3
- type: accuracy
name: Western Armenian Test accuracy
value: 77.6
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 56.5
- type: accuracy
name: Khunsari Test accuracy
value: 35.1
- type: accuracy
name: Hebrew Test accuracy
value: 91.7
- type: accuracy
name: Uyghur Test accuracy
value: 71.5
- type: accuracy
name: Chukchi Test accuracy
value: 29.0
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: German
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-de")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-de")
```
|
Davlan/distilbert-base-multilingual-cased-masakhaner | [
"pytorch",
"tf",
"distilbert",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 16 | null |
---
language:
- el
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-el
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 83.6
- type: accuracy
name: Dutch Test accuracy
value: 82.2
- type: accuracy
name: German Test accuracy
value: 82.6
- type: accuracy
name: Italian Test accuracy
value: 82.0
- type: accuracy
name: French Test accuracy
value: 78.7
- type: accuracy
name: Spanish Test accuracy
value: 82.2
- type: accuracy
name: Russian Test accuracy
value: 88.4
- type: accuracy
name: Swedish Test accuracy
value: 87.4
- type: accuracy
name: Norwegian Test accuracy
value: 82.1
- type: accuracy
name: Danish Test accuracy
value: 85.9
- type: accuracy
name: Low Saxon Test accuracy
value: 49.8
- type: accuracy
name: Akkadian Test accuracy
value: 24.4
- type: accuracy
name: Armenian Test accuracy
value: 84.0
- type: accuracy
name: Welsh Test accuracy
value: 68.9
- type: accuracy
name: Old East Slavic Test accuracy
value: 75.0
- type: accuracy
name: Albanian Test accuracy
value: 87.7
- type: accuracy
name: Slovenian Test accuracy
value: 77.2
- type: accuracy
name: Guajajara Test accuracy
value: 25.8
- type: accuracy
name: Kurmanji Test accuracy
value: 74.3
- type: accuracy
name: Turkish Test accuracy
value: 75.3
- type: accuracy
name: Finnish Test accuracy
value: 83.4
- type: accuracy
name: Indonesian Test accuracy
value: 75.4
- type: accuracy
name: Ukrainian Test accuracy
value: 88.6
- type: accuracy
name: Polish Test accuracy
value: 84.0
- type: accuracy
name: Portuguese Test accuracy
value: 82.4
- type: accuracy
name: Kazakh Test accuracy
value: 80.5
- type: accuracy
name: Latin Test accuracy
value: 77.3
- type: accuracy
name: Old French Test accuracy
value: 52.5
- type: accuracy
name: Buryat Test accuracy
value: 56.0
- type: accuracy
name: Kaapor Test accuracy
value: 11.2
- type: accuracy
name: Korean Test accuracy
value: 59.9
- type: accuracy
name: Estonian Test accuracy
value: 83.6
- type: accuracy
name: Croatian Test accuracy
value: 84.9
- type: accuracy
name: Gothic Test accuracy
value: 20.2
- type: accuracy
name: Swiss German Test accuracy
value: 43.6
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 33.5
- type: accuracy
name: Naija Test accuracy
value: 42.7
- type: accuracy
name: Latvian Test accuracy
value: 84.9
- type: accuracy
name: Chinese Test accuracy
value: 42.1
- type: accuracy
name: Tagalog Test accuracy
value: 66.7
- type: accuracy
name: Bambara Test accuracy
value: 28.2
- type: accuracy
name: Lithuanian Test accuracy
value: 85.3
- type: accuracy
name: Galician Test accuracy
value: 82.1
- type: accuracy
name: Vietnamese Test accuracy
value: 62.8
- type: accuracy
name: Greek Test accuracy
value: 98.0
- type: accuracy
name: Catalan Test accuracy
value: 80.4
- type: accuracy
name: Czech Test accuracy
value: 85.0
- type: accuracy
name: Erzya Test accuracy
value: 43.9
- type: accuracy
name: Bhojpuri Test accuracy
value: 45.0
- type: accuracy
name: Thai Test accuracy
value: 58.6
- type: accuracy
name: Marathi Test accuracy
value: 85.3
- type: accuracy
name: Basque Test accuracy
value: 72.4
- type: accuracy
name: Slovak Test accuracy
value: 82.8
- type: accuracy
name: Kiche Test accuracy
value: 36.2
- type: accuracy
name: Yoruba Test accuracy
value: 28.9
- type: accuracy
name: Warlpiri Test accuracy
value: 38.9
- type: accuracy
name: Tamil Test accuracy
value: 83.0
- type: accuracy
name: Maltese Test accuracy
value: 22.3
- type: accuracy
name: Ancient Greek Test accuracy
value: 64.2
- type: accuracy
name: Icelandic Test accuracy
value: 80.7
- type: accuracy
name: Mbya Guarani Test accuracy
value: 32.4
- type: accuracy
name: Urdu Test accuracy
value: 53.0
- type: accuracy
name: Romanian Test accuracy
value: 83.7
- type: accuracy
name: Persian Test accuracy
value: 74.4
- type: accuracy
name: Apurina Test accuracy
value: 41.3
- type: accuracy
name: Japanese Test accuracy
value: 30.0
- type: accuracy
name: Hungarian Test accuracy
value: 80.2
- type: accuracy
name: Hindi Test accuracy
value: 60.0
- type: accuracy
name: Classical Chinese Test accuracy
value: 30.1
- type: accuracy
name: Komi Permyak Test accuracy
value: 44.2
- type: accuracy
name: Faroese Test accuracy
value: 72.9
- type: accuracy
name: Sanskrit Test accuracy
value: 40.4
- type: accuracy
name: Livvi Test accuracy
value: 65.2
- type: accuracy
name: Arabic Test accuracy
value: 76.6
- type: accuracy
name: Wolof Test accuracy
value: 28.0
- type: accuracy
name: Bulgarian Test accuracy
value: 89.6
- type: accuracy
name: Akuntsu Test accuracy
value: 26.7
- type: accuracy
name: Makurap Test accuracy
value: 18.5
- type: accuracy
name: Kangri Test accuracy
value: 43.1
- type: accuracy
name: Breton Test accuracy
value: 63.5
- type: accuracy
name: Telugu Test accuracy
value: 85.3
- type: accuracy
name: Cantonese Test accuracy
value: 48.3
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 51.6
- type: accuracy
name: Karelian Test accuracy
value: 71.0
- type: accuracy
name: Upper Sorbian Test accuracy
value: 69.5
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 69.2
- type: accuracy
name: Komi Zyrian Test accuracy
value: 36.5
- type: accuracy
name: Irish Test accuracy
value: 61.3
- type: accuracy
name: Nayini Test accuracy
value: 43.6
- type: accuracy
name: Munduruku Test accuracy
value: 29.4
- type: accuracy
name: Manx Test accuracy
value: 33.8
- type: accuracy
name: Skolt Sami Test accuracy
value: 31.5
- type: accuracy
name: Afrikaans Test accuracy
value: 85.0
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 29.2
- type: accuracy
name: Belarusian Test accuracy
value: 89.1
- type: accuracy
name: Serbian Test accuracy
value: 85.2
- type: accuracy
name: Moksha Test accuracy
value: 43.8
- type: accuracy
name: Western Armenian Test accuracy
value: 76.9
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 54.8
- type: accuracy
name: Khunsari Test accuracy
value: 45.9
- type: accuracy
name: Hebrew Test accuracy
value: 88.5
- type: accuracy
name: Uyghur Test accuracy
value: 75.7
- type: accuracy
name: Chukchi Test accuracy
value: 34.8
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Greek
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-el")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-el")
```
|
Davlan/distilbert-base-multilingual-cased-ner-hrl | [
"pytorch",
"tf",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible",
"has_space"
] | token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 123,856 | null |
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-en
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 96.0
- type: accuracy
name: Dutch Test accuracy
value: 90.4
- type: accuracy
name: German Test accuracy
value: 88.6
- type: accuracy
name: Italian Test accuracy
value: 87.8
- type: accuracy
name: French Test accuracy
value: 87.4
- type: accuracy
name: Spanish Test accuracy
value: 90.3
- type: accuracy
name: Russian Test accuracy
value: 91.0
- type: accuracy
name: Swedish Test accuracy
value: 94.0
- type: accuracy
name: Norwegian Test accuracy
value: 89.6
- type: accuracy
name: Danish Test accuracy
value: 91.6
- type: accuracy
name: Low Saxon Test accuracy
value: 57.4
- type: accuracy
name: Akkadian Test accuracy
value: 26.4
- type: accuracy
name: Armenian Test accuracy
value: 88.5
- type: accuracy
name: Welsh Test accuracy
value: 70.6
- type: accuracy
name: Old East Slavic Test accuracy
value: 76.5
- type: accuracy
name: Albanian Test accuracy
value: 82.3
- type: accuracy
name: Slovenian Test accuracy
value: 79.0
- type: accuracy
name: Guajajara Test accuracy
value: 17.2
- type: accuracy
name: Kurmanji Test accuracy
value: 76.9
- type: accuracy
name: Turkish Test accuracy
value: 79.1
- type: accuracy
name: Finnish Test accuracy
value: 87.2
- type: accuracy
name: Indonesian Test accuracy
value: 86.9
- type: accuracy
name: Ukrainian Test accuracy
value: 87.6
- type: accuracy
name: Polish Test accuracy
value: 87.2
- type: accuracy
name: Portuguese Test accuracy
value: 90.0
- type: accuracy
name: Kazakh Test accuracy
value: 82.5
- type: accuracy
name: Latin Test accuracy
value: 79.6
- type: accuracy
name: Old French Test accuracy
value: 53.4
- type: accuracy
name: Buryat Test accuracy
value: 58.8
- type: accuracy
name: Kaapor Test accuracy
value: 9.2
- type: accuracy
name: Korean Test accuracy
value: 64.0
- type: accuracy
name: Estonian Test accuracy
value: 88.4
- type: accuracy
name: Croatian Test accuracy
value: 87.9
- type: accuracy
name: Gothic Test accuracy
value: 20.5
- type: accuracy
name: Swiss German Test accuracy
value: 47.6
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 32.0
- type: accuracy
name: Naija Test accuracy
value: 47.5
- type: accuracy
name: Latvian Test accuracy
value: 87.5
- type: accuracy
name: Chinese Test accuracy
value: 47.5
- type: accuracy
name: Tagalog Test accuracy
value: 73.5
- type: accuracy
name: Bambara Test accuracy
value: 27.7
- type: accuracy
name: Lithuanian Test accuracy
value: 87.3
- type: accuracy
name: Galician Test accuracy
value: 87.1
- type: accuracy
name: Vietnamese Test accuracy
value: 66.4
- type: accuracy
name: Greek Test accuracy
value: 87.6
- type: accuracy
name: Catalan Test accuracy
value: 89.7
- type: accuracy
name: Czech Test accuracy
value: 88.1
- type: accuracy
name: Erzya Test accuracy
value: 47.6
- type: accuracy
name: Bhojpuri Test accuracy
value: 50.7
- type: accuracy
name: Thai Test accuracy
value: 59.5
- type: accuracy
name: Marathi Test accuracy
value: 82.2
- type: accuracy
name: Basque Test accuracy
value: 76.0
- type: accuracy
name: Slovak Test accuracy
value: 88.5
- type: accuracy
name: Kiche Test accuracy
value: 25.4
- type: accuracy
name: Yoruba Test accuracy
value: 18.5
- type: accuracy
name: Warlpiri Test accuracy
value: 29.1
- type: accuracy
name: Tamil Test accuracy
value: 83.4
- type: accuracy
name: Maltese Test accuracy
value: 21.1
- type: accuracy
name: Ancient Greek Test accuracy
value: 66.8
- type: accuracy
name: Icelandic Test accuracy
value: 84.8
- type: accuracy
name: Mbya Guarani Test accuracy
value: 24.1
- type: accuracy
name: Urdu Test accuracy
value: 67.0
- type: accuracy
name: Romanian Test accuracy
value: 85.7
- type: accuracy
name: Persian Test accuracy
value: 76.7
- type: accuracy
name: Apurina Test accuracy
value: 28.6
- type: accuracy
name: Japanese Test accuracy
value: 34.1
- type: accuracy
name: Hungarian Test accuracy
value: 86.0
- type: accuracy
name: Hindi Test accuracy
value: 74.1
- type: accuracy
name: Classical Chinese Test accuracy
value: 29.4
- type: accuracy
name: Komi Permyak Test accuracy
value: 47.4
- type: accuracy
name: Faroese Test accuracy
value: 77.0
- type: accuracy
name: Sanskrit Test accuracy
value: 25.6
- type: accuracy
name: Livvi Test accuracy
value: 63.2
- type: accuracy
name: Arabic Test accuracy
value: 80.7
- type: accuracy
name: Wolof Test accuracy
value: 26.1
- type: accuracy
name: Bulgarian Test accuracy
value: 90.8
- type: accuracy
name: Akuntsu Test accuracy
value: 18.3
- type: accuracy
name: Makurap Test accuracy
value: 5.5
- type: accuracy
name: Kangri Test accuracy
value: 43.0
- type: accuracy
name: Breton Test accuracy
value: 64.1
- type: accuracy
name: Telugu Test accuracy
value: 84.7
- type: accuracy
name: Cantonese Test accuracy
value: 54.0
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 53.7
- type: accuracy
name: Karelian Test accuracy
value: 69.7
- type: accuracy
name: Upper Sorbian Test accuracy
value: 75.6
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 66.3
- type: accuracy
name: Komi Zyrian Test accuracy
value: 39.9
- type: accuracy
name: Irish Test accuracy
value: 67.0
- type: accuracy
name: Nayini Test accuracy
value: 44.9
- type: accuracy
name: Munduruku Test accuracy
value: 12.3
- type: accuracy
name: Manx Test accuracy
value: 25.4
- type: accuracy
name: Skolt Sami Test accuracy
value: 29.9
- type: accuracy
name: Afrikaans Test accuracy
value: 89.3
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 23.1
- type: accuracy
name: Belarusian Test accuracy
value: 89.1
- type: accuracy
name: Serbian Test accuracy
value: 88.4
- type: accuracy
name: Moksha Test accuracy
value: 44.1
- type: accuracy
name: Western Armenian Test accuracy
value: 80.1
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 59.0
- type: accuracy
name: Khunsari Test accuracy
value: 43.2
- type: accuracy
name: Hebrew Test accuracy
value: 90.6
- type: accuracy
name: Uyghur Test accuracy
value: 75.8
- type: accuracy
name: Chukchi Test accuracy
value: 32.6
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: English
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-en")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-en")
```
|
Davlan/mt5_base_eng_yor_mt | [
"pytorch",
"mt5",
"text2text-generation",
"arxiv:2103.08647",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"MT5ForConditionalGeneration"
],
"model_type": "mt5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null |
---
language:
- fro
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-fro
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 73.4
- type: accuracy
name: Dutch Test accuracy
value: 73.1
- type: accuracy
name: German Test accuracy
value: 70.7
- type: accuracy
name: Italian Test accuracy
value: 72.6
- type: accuracy
name: French Test accuracy
value: 79.3
- type: accuracy
name: Spanish Test accuracy
value: 78.0
- type: accuracy
name: Russian Test accuracy
value: 68.8
- type: accuracy
name: Swedish Test accuracy
value: 76.8
- type: accuracy
name: Norwegian Test accuracy
value: 69.6
- type: accuracy
name: Danish Test accuracy
value: 74.2
- type: accuracy
name: Low Saxon Test accuracy
value: 40.3
- type: accuracy
name: Akkadian Test accuracy
value: 38.3
- type: accuracy
name: Armenian Test accuracy
value: 64.7
- type: accuracy
name: Welsh Test accuracy
value: 56.3
- type: accuracy
name: Old East Slavic Test accuracy
value: 67.5
- type: accuracy
name: Albanian Test accuracy
value: 66.5
- type: accuracy
name: Slovenian Test accuracy
value: 64.2
- type: accuracy
name: Guajajara Test accuracy
value: 15.0
- type: accuracy
name: Kurmanji Test accuracy
value: 59.9
- type: accuracy
name: Turkish Test accuracy
value: 57.2
- type: accuracy
name: Finnish Test accuracy
value: 66.3
- type: accuracy
name: Indonesian Test accuracy
value: 66.9
- type: accuracy
name: Ukrainian Test accuracy
value: 66.7
- type: accuracy
name: Polish Test accuracy
value: 67.3
- type: accuracy
name: Portuguese Test accuracy
value: 73.1
- type: accuracy
name: Kazakh Test accuracy
value: 58.5
- type: accuracy
name: Latin Test accuracy
value: 65.3
- type: accuracy
name: Old French Test accuracy
value: 93.3
- type: accuracy
name: Buryat Test accuracy
value: 43.2
- type: accuracy
name: Kaapor Test accuracy
value: 25.8
- type: accuracy
name: Korean Test accuracy
value: 50.3
- type: accuracy
name: Estonian Test accuracy
value: 66.1
- type: accuracy
name: Croatian Test accuracy
value: 72.0
- type: accuracy
name: Gothic Test accuracy
value: 38.1
- type: accuracy
name: Swiss German Test accuracy
value: 34.6
- type: accuracy
name: Assyrian Test accuracy
value: 8.2
- type: accuracy
name: North Sami Test accuracy
value: 23.0
- type: accuracy
name: Naija Test accuracy
value: 40.4
- type: accuracy
name: Latvian Test accuracy
value: 65.2
- type: accuracy
name: Chinese Test accuracy
value: 36.4
- type: accuracy
name: Tagalog Test accuracy
value: 53.3
- type: accuracy
name: Bambara Test accuracy
value: 13.4
- type: accuracy
name: Lithuanian Test accuracy
value: 64.1
- type: accuracy
name: Galician Test accuracy
value: 71.6
- type: accuracy
name: Vietnamese Test accuracy
value: 46.7
- type: accuracy
name: Greek Test accuracy
value: 72.9
- type: accuracy
name: Catalan Test accuracy
value: 76.9
- type: accuracy
name: Czech Test accuracy
value: 68.8
- type: accuracy
name: Erzya Test accuracy
value: 25.4
- type: accuracy
name: Bhojpuri Test accuracy
value: 41.2
- type: accuracy
name: Thai Test accuracy
value: 52.2
- type: accuracy
name: Marathi Test accuracy
value: 51.5
- type: accuracy
name: Basque Test accuracy
value: 59.6
- type: accuracy
name: Slovak Test accuracy
value: 70.7
- type: accuracy
name: Kiche Test accuracy
value: 19.7
- type: accuracy
name: Yoruba Test accuracy
value: 18.3
- type: accuracy
name: Warlpiri Test accuracy
value: 15.8
- type: accuracy
name: Tamil Test accuracy
value: 62.0
- type: accuracy
name: Maltese Test accuracy
value: 28.1
- type: accuracy
name: Ancient Greek Test accuracy
value: 56.3
- type: accuracy
name: Icelandic Test accuracy
value: 70.6
- type: accuracy
name: Mbya Guarani Test accuracy
value: 16.8
- type: accuracy
name: Urdu Test accuracy
value: 54.2
- type: accuracy
name: Romanian Test accuracy
value: 69.1
- type: accuracy
name: Persian Test accuracy
value: 65.4
- type: accuracy
name: Apurina Test accuracy
value: 24.5
- type: accuracy
name: Japanese Test accuracy
value: 31.0
- type: accuracy
name: Hungarian Test accuracy
value: 62.5
- type: accuracy
name: Hindi Test accuracy
value: 58.3
- type: accuracy
name: Classical Chinese Test accuracy
value: 41.9
- type: accuracy
name: Komi Permyak Test accuracy
value: 30.3
- type: accuracy
name: Faroese Test accuracy
value: 62.5
- type: accuracy
name: Sanskrit Test accuracy
value: 37.8
- type: accuracy
name: Livvi Test accuracy
value: 40.2
- type: accuracy
name: Arabic Test accuracy
value: 66.2
- type: accuracy
name: Wolof Test accuracy
value: 26.8
- type: accuracy
name: Bulgarian Test accuracy
value: 72.5
- type: accuracy
name: Akuntsu Test accuracy
value: 24.2
- type: accuracy
name: Makurap Test accuracy
value: 19.2
- type: accuracy
name: Kangri Test accuracy
value: 36.4
- type: accuracy
name: Breton Test accuracy
value: 47.3
- type: accuracy
name: Telugu Test accuracy
value: 58.4
- type: accuracy
name: Cantonese Test accuracy
value: 33.5
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 57.3
- type: accuracy
name: Karelian Test accuracy
value: 49.4
- type: accuracy
name: Upper Sorbian Test accuracy
value: 52.3
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 48.3
- type: accuracy
name: Komi Zyrian Test accuracy
value: 26.6
- type: accuracy
name: Irish Test accuracy
value: 46.7
- type: accuracy
name: Nayini Test accuracy
value: 41.0
- type: accuracy
name: Munduruku Test accuracy
value: 15.6
- type: accuracy
name: Manx Test accuracy
value: 16.1
- type: accuracy
name: Skolt Sami Test accuracy
value: 20.0
- type: accuracy
name: Afrikaans Test accuracy
value: 77.0
- type: accuracy
name: Old Turkish Test accuracy
value: 2.7
- type: accuracy
name: Tupinamba Test accuracy
value: 23.5
- type: accuracy
name: Belarusian Test accuracy
value: 67.8
- type: accuracy
name: Serbian Test accuracy
value: 74.1
- type: accuracy
name: Moksha Test accuracy
value: 27.3
- type: accuracy
name: Western Armenian Test accuracy
value: 61.6
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 42.8
- type: accuracy
name: Khunsari Test accuracy
value: 32.4
- type: accuracy
name: Hebrew Test accuracy
value: 62.5
- type: accuracy
name: Uyghur Test accuracy
value: 55.0
- type: accuracy
name: Chukchi Test accuracy
value: 20.1
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Old French
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fro")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-fro")
```
|
Davlan/xlm-roberta-base-finetuned-igbo | [
"pytorch",
"xlm-roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"XLMRobertaForMaskedLM"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68 | null |
---
language:
- hi
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-hi
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 75.9
- type: accuracy
name: Dutch Test accuracy
value: 72.3
- type: accuracy
name: German Test accuracy
value: 69.4
- type: accuracy
name: Italian Test accuracy
value: 68.1
- type: accuracy
name: French Test accuracy
value: 67.1
- type: accuracy
name: Spanish Test accuracy
value: 70.2
- type: accuracy
name: Russian Test accuracy
value: 82.9
- type: accuracy
name: Swedish Test accuracy
value: 77.4
- type: accuracy
name: Norwegian Test accuracy
value: 72.4
- type: accuracy
name: Danish Test accuracy
value: 74.9
- type: accuracy
name: Low Saxon Test accuracy
value: 48.0
- type: accuracy
name: Akkadian Test accuracy
value: 21.7
- type: accuracy
name: Armenian Test accuracy
value: 82.1
- type: accuracy
name: Welsh Test accuracy
value: 59.4
- type: accuracy
name: Old East Slavic Test accuracy
value: 63.6
- type: accuracy
name: Albanian Test accuracy
value: 68.5
- type: accuracy
name: Slovenian Test accuracy
value: 71.3
- type: accuracy
name: Guajajara Test accuracy
value: 18.5
- type: accuracy
name: Kurmanji Test accuracy
value: 71.8
- type: accuracy
name: Turkish Test accuracy
value: 75.4
- type: accuracy
name: Finnish Test accuracy
value: 80.3
- type: accuracy
name: Indonesian Test accuracy
value: 76.6
- type: accuracy
name: Ukrainian Test accuracy
value: 80.8
- type: accuracy
name: Polish Test accuracy
value: 81.1
- type: accuracy
name: Portuguese Test accuracy
value: 71.5
- type: accuracy
name: Kazakh Test accuracy
value: 82.0
- type: accuracy
name: Latin Test accuracy
value: 69.3
- type: accuracy
name: Old French Test accuracy
value: 44.0
- type: accuracy
name: Buryat Test accuracy
value: 53.9
- type: accuracy
name: Kaapor Test accuracy
value: 10.8
- type: accuracy
name: Korean Test accuracy
value: 57.8
- type: accuracy
name: Estonian Test accuracy
value: 81.0
- type: accuracy
name: Croatian Test accuracy
value: 79.8
- type: accuracy
name: Gothic Test accuracy
value: 8.6
- type: accuracy
name: Swiss German Test accuracy
value: 42.2
- type: accuracy
name: Assyrian Test accuracy
value: 16.3
- type: accuracy
name: North Sami Test accuracy
value: 26.2
- type: accuracy
name: Naija Test accuracy
value: 35.8
- type: accuracy
name: Latvian Test accuracy
value: 80.2
- type: accuracy
name: Chinese Test accuracy
value: 37.1
- type: accuracy
name: Tagalog Test accuracy
value: 71.3
- type: accuracy
name: Bambara Test accuracy
value: 22.2
- type: accuracy
name: Lithuanian Test accuracy
value: 81.3
- type: accuracy
name: Galician Test accuracy
value: 70.7
- type: accuracy
name: Vietnamese Test accuracy
value: 60.6
- type: accuracy
name: Greek Test accuracy
value: 69.5
- type: accuracy
name: Catalan Test accuracy
value: 68.7
- type: accuracy
name: Czech Test accuracy
value: 78.8
- type: accuracy
name: Erzya Test accuracy
value: 36.3
- type: accuracy
name: Bhojpuri Test accuracy
value: 61.2
- type: accuracy
name: Thai Test accuracy
value: 52.8
- type: accuracy
name: Marathi Test accuracy
value: 82.2
- type: accuracy
name: Basque Test accuracy
value: 78.8
- type: accuracy
name: Slovak Test accuracy
value: 78.9
- type: accuracy
name: Kiche Test accuracy
value: 21.7
- type: accuracy
name: Yoruba Test accuracy
value: 19.3
- type: accuracy
name: Warlpiri Test accuracy
value: 23.5
- type: accuracy
name: Tamil Test accuracy
value: 85.7
- type: accuracy
name: Maltese Test accuracy
value: 16.3
- type: accuracy
name: Ancient Greek Test accuracy
value: 54.9
- type: accuracy
name: Icelandic Test accuracy
value: 70.4
- type: accuracy
name: Mbya Guarani Test accuracy
value: 23.2
- type: accuracy
name: Urdu Test accuracy
value: 89.7
- type: accuracy
name: Romanian Test accuracy
value: 72.1
- type: accuracy
name: Persian Test accuracy
value: 78.1
- type: accuracy
name: Apurina Test accuracy
value: 22.9
- type: accuracy
name: Japanese Test accuracy
value: 29.3
- type: accuracy
name: Hungarian Test accuracy
value: 75.4
- type: accuracy
name: Hindi Test accuracy
value: 93.7
- type: accuracy
name: Classical Chinese Test accuracy
value: 18.4
- type: accuracy
name: Komi Permyak Test accuracy
value: 34.3
- type: accuracy
name: Faroese Test accuracy
value: 64.9
- type: accuracy
name: Sanskrit Test accuracy
value: 14.0
- type: accuracy
name: Livvi Test accuracy
value: 57.9
- type: accuracy
name: Arabic Test accuracy
value: 73.9
- type: accuracy
name: Wolof Test accuracy
value: 24.9
- type: accuracy
name: Bulgarian Test accuracy
value: 81.3
- type: accuracy
name: Akuntsu Test accuracy
value: 16.2
- type: accuracy
name: Makurap Test accuracy
value: 2.7
- type: accuracy
name: Kangri Test accuracy
value: 52.8
- type: accuracy
name: Breton Test accuracy
value: 49.5
- type: accuracy
name: Telugu Test accuracy
value: 85.4
- type: accuracy
name: Cantonese Test accuracy
value: 42.1
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 35.1
- type: accuracy
name: Karelian Test accuracy
value: 64.9
- type: accuracy
name: Upper Sorbian Test accuracy
value: 64.2
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 60.1
- type: accuracy
name: Komi Zyrian Test accuracy
value: 29.7
- type: accuracy
name: Irish Test accuracy
value: 56.5
- type: accuracy
name: Nayini Test accuracy
value: 39.7
- type: accuracy
name: Munduruku Test accuracy
value: 9.3
- type: accuracy
name: Manx Test accuracy
value: 25.3
- type: accuracy
name: Skolt Sami Test accuracy
value: 26.9
- type: accuracy
name: Afrikaans Test accuracy
value: 71.9
- type: accuracy
name: Old Turkish Test accuracy
value: 43.0
- type: accuracy
name: Tupinamba Test accuracy
value: 21.3
- type: accuracy
name: Belarusian Test accuracy
value: 80.5
- type: accuracy
name: Serbian Test accuracy
value: 79.9
- type: accuracy
name: Moksha Test accuracy
value: 34.3
- type: accuracy
name: Western Armenian Test accuracy
value: 74.9
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 49.1
- type: accuracy
name: Khunsari Test accuracy
value: 37.8
- type: accuracy
name: Hebrew Test accuracy
value: 81.2
- type: accuracy
name: Uyghur Test accuracy
value: 75.8
- type: accuracy
name: Chukchi Test accuracy
value: 27.0
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Hindi
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hi")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-hi")
```
|
Davlan/xlm-roberta-base-masakhaner | [
"pytorch",
"xlm-roberta",
"token-classification",
"arxiv:2103.11811",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null |
---
language:
- lzh
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-lzh
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 33.6
- type: accuracy
name: Dutch Test accuracy
value: 30.9
- type: accuracy
name: German Test accuracy
value: 31.1
- type: accuracy
name: Italian Test accuracy
value: 31.1
- type: accuracy
name: French Test accuracy
value: 30.3
- type: accuracy
name: Spanish Test accuracy
value: 30.6
- type: accuracy
name: Russian Test accuracy
value: 37.1
- type: accuracy
name: Swedish Test accuracy
value: 35.6
- type: accuracy
name: Norwegian Test accuracy
value: 32.7
- type: accuracy
name: Danish Test accuracy
value: 35.0
- type: accuracy
name: Low Saxon Test accuracy
value: 19.0
- type: accuracy
name: Akkadian Test accuracy
value: 25.9
- type: accuracy
name: Armenian Test accuracy
value: 40.9
- type: accuracy
name: Welsh Test accuracy
value: 27.3
- type: accuracy
name: Old East Slavic Test accuracy
value: 36.4
- type: accuracy
name: Albanian Test accuracy
value: 31.6
- type: accuracy
name: Slovenian Test accuracy
value: 31.1
- type: accuracy
name: Guajajara Test accuracy
value: 13.9
- type: accuracy
name: Kurmanji Test accuracy
value: 36.5
- type: accuracy
name: Turkish Test accuracy
value: 42.7
- type: accuracy
name: Finnish Test accuracy
value: 45.0
- type: accuracy
name: Indonesian Test accuracy
value: 40.6
- type: accuracy
name: Ukrainian Test accuracy
value: 36.0
- type: accuracy
name: Polish Test accuracy
value: 35.3
- type: accuracy
name: Portuguese Test accuracy
value: 34.8
- type: accuracy
name: Kazakh Test accuracy
value: 45.4
- type: accuracy
name: Latin Test accuracy
value: 37.9
- type: accuracy
name: Old French Test accuracy
value: 33.4
- type: accuracy
name: Buryat Test accuracy
value: 27.2
- type: accuracy
name: Kaapor Test accuracy
value: 19.6
- type: accuracy
name: Korean Test accuracy
value: 44.8
- type: accuracy
name: Estonian Test accuracy
value: 41.4
- type: accuracy
name: Croatian Test accuracy
value: 34.2
- type: accuracy
name: Gothic Test accuracy
value: 12.3
- type: accuracy
name: Swiss German Test accuracy
value: 18.1
- type: accuracy
name: Assyrian Test accuracy
value: 3.5
- type: accuracy
name: North Sami Test accuracy
value: 8.9
- type: accuracy
name: Naija Test accuracy
value: 25.4
- type: accuracy
name: Latvian Test accuracy
value: 45.0
- type: accuracy
name: Chinese Test accuracy
value: 53.2
- type: accuracy
name: Tagalog Test accuracy
value: 34.0
- type: accuracy
name: Bambara Test accuracy
value: 13.9
- type: accuracy
name: Lithuanian Test accuracy
value: 44.0
- type: accuracy
name: Galician Test accuracy
value: 29.0
- type: accuracy
name: Vietnamese Test accuracy
value: 40.9
- type: accuracy
name: Greek Test accuracy
value: 31.3
- type: accuracy
name: Catalan Test accuracy
value: 29.6
- type: accuracy
name: Czech Test accuracy
value: 35.4
- type: accuracy
name: Erzya Test accuracy
value: 9.6
- type: accuracy
name: Bhojpuri Test accuracy
value: 22.9
- type: accuracy
name: Thai Test accuracy
value: 51.6
- type: accuracy
name: Marathi Test accuracy
value: 36.8
- type: accuracy
name: Basque Test accuracy
value: 42.1
- type: accuracy
name: Slovak Test accuracy
value: 36.3
- type: accuracy
name: Kiche Test accuracy
value: 11.9
- type: accuracy
name: Yoruba Test accuracy
value: 10.9
- type: accuracy
name: Warlpiri Test accuracy
value: 15.0
- type: accuracy
name: Tamil Test accuracy
value: 53.4
- type: accuracy
name: Maltese Test accuracy
value: 9.4
- type: accuracy
name: Ancient Greek Test accuracy
value: 31.9
- type: accuracy
name: Icelandic Test accuracy
value: 38.4
- type: accuracy
name: Mbya Guarani Test accuracy
value: 7.1
- type: accuracy
name: Urdu Test accuracy
value: 33.4
- type: accuracy
name: Romanian Test accuracy
value: 33.5
- type: accuracy
name: Persian Test accuracy
value: 35.2
- type: accuracy
name: Apurina Test accuracy
value: 11.9
- type: accuracy
name: Japanese Test accuracy
value: 39.6
- type: accuracy
name: Hungarian Test accuracy
value: 37.2
- type: accuracy
name: Hindi Test accuracy
value: 33.0
- type: accuracy
name: Classical Chinese Test accuracy
value: 88.0
- type: accuracy
name: Komi Permyak Test accuracy
value: 11.3
- type: accuracy
name: Faroese Test accuracy
value: 30.3
- type: accuracy
name: Sanskrit Test accuracy
value: 20.6
- type: accuracy
name: Livvi Test accuracy
value: 29.1
- type: accuracy
name: Arabic Test accuracy
value: 34.9
- type: accuracy
name: Wolof Test accuracy
value: 17.0
- type: accuracy
name: Bulgarian Test accuracy
value: 34.3
- type: accuracy
name: Akuntsu Test accuracy
value: 19.3
- type: accuracy
name: Makurap Test accuracy
value: 21.2
- type: accuracy
name: Kangri Test accuracy
value: 19.8
- type: accuracy
name: Breton Test accuracy
value: 27.4
- type: accuracy
name: Telugu Test accuracy
value: 49.4
- type: accuracy
name: Cantonese Test accuracy
value: 53.7
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 27.9
- type: accuracy
name: Karelian Test accuracy
value: 32.8
- type: accuracy
name: Upper Sorbian Test accuracy
value: 22.1
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 29.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 9.7
- type: accuracy
name: Irish Test accuracy
value: 29.5
- type: accuracy
name: Nayini Test accuracy
value: 32.1
- type: accuracy
name: Munduruku Test accuracy
value: 14.4
- type: accuracy
name: Manx Test accuracy
value: 16.8
- type: accuracy
name: Skolt Sami Test accuracy
value: 5.3
- type: accuracy
name: Afrikaans Test accuracy
value: 31.8
- type: accuracy
name: Old Turkish Test accuracy
value: 13.6
- type: accuracy
name: Tupinamba Test accuracy
value: 9.4
- type: accuracy
name: Belarusian Test accuracy
value: 36.7
- type: accuracy
name: Serbian Test accuracy
value: 33.9
- type: accuracy
name: Moksha Test accuracy
value: 10.4
- type: accuracy
name: Western Armenian Test accuracy
value: 34.8
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 29.2
- type: accuracy
name: Khunsari Test accuracy
value: 23.0
- type: accuracy
name: Hebrew Test accuracy
value: 44.8
- type: accuracy
name: Uyghur Test accuracy
value: 44.6
- type: accuracy
name: Chukchi Test accuracy
value: 7.0
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Classical Chinese
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lzh")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-lzh")
```
|
Davlan/xlm-roberta-base-ner-hrl | [
"pytorch",
"xlm-roberta",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"XLMRobertaForTokenClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 760 | null |
---
language:
- mr
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-mr
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 67.4
- type: accuracy
name: Dutch Test accuracy
value: 61.5
- type: accuracy
name: German Test accuracy
value: 66.9
- type: accuracy
name: Italian Test accuracy
value: 64.8
- type: accuracy
name: French Test accuracy
value: 61.7
- type: accuracy
name: Spanish Test accuracy
value: 60.1
- type: accuracy
name: Russian Test accuracy
value: 68.1
- type: accuracy
name: Swedish Test accuracy
value: 68.4
- type: accuracy
name: Norwegian Test accuracy
value: 64.1
- type: accuracy
name: Danish Test accuracy
value: 66.4
- type: accuracy
name: Low Saxon Test accuracy
value: 51.7
- type: accuracy
name: Akkadian Test accuracy
value: 23.7
- type: accuracy
name: Armenian Test accuracy
value: 74.4
- type: accuracy
name: Welsh Test accuracy
value: 50.1
- type: accuracy
name: Old East Slavic Test accuracy
value: 57.8
- type: accuracy
name: Albanian Test accuracy
value: 61.9
- type: accuracy
name: Slovenian Test accuracy
value: 60.1
- type: accuracy
name: Guajajara Test accuracy
value: 20.5
- type: accuracy
name: Kurmanji Test accuracy
value: 60.0
- type: accuracy
name: Turkish Test accuracy
value: 71.8
- type: accuracy
name: Finnish Test accuracy
value: 74.5
- type: accuracy
name: Indonesian Test accuracy
value: 59.0
- type: accuracy
name: Ukrainian Test accuracy
value: 67.1
- type: accuracy
name: Polish Test accuracy
value: 65.0
- type: accuracy
name: Portuguese Test accuracy
value: 66.7
- type: accuracy
name: Kazakh Test accuracy
value: 73.8
- type: accuracy
name: Latin Test accuracy
value: 66.2
- type: accuracy
name: Old French Test accuracy
value: 48.6
- type: accuracy
name: Buryat Test accuracy
value: 57.0
- type: accuracy
name: Kaapor Test accuracy
value: 19.2
- type: accuracy
name: Korean Test accuracy
value: 59.7
- type: accuracy
name: Estonian Test accuracy
value: 75.4
- type: accuracy
name: Croatian Test accuracy
value: 63.8
- type: accuracy
name: Gothic Test accuracy
value: 20.0
- type: accuracy
name: Swiss German Test accuracy
value: 46.8
- type: accuracy
name: Assyrian Test accuracy
value: 16.1
- type: accuracy
name: North Sami Test accuracy
value: 37.1
- type: accuracy
name: Naija Test accuracy
value: 37.9
- type: accuracy
name: Latvian Test accuracy
value: 75.6
- type: accuracy
name: Chinese Test accuracy
value: 49.7
- type: accuracy
name: Tagalog Test accuracy
value: 55.1
- type: accuracy
name: Bambara Test accuracy
value: 28.9
- type: accuracy
name: Lithuanian Test accuracy
value: 75.9
- type: accuracy
name: Galician Test accuracy
value: 65.5
- type: accuracy
name: Vietnamese Test accuracy
value: 61.0
- type: accuracy
name: Greek Test accuracy
value: 70.4
- type: accuracy
name: Catalan Test accuracy
value: 57.9
- type: accuracy
name: Czech Test accuracy
value: 64.9
- type: accuracy
name: Erzya Test accuracy
value: 47.7
- type: accuracy
name: Bhojpuri Test accuracy
value: 41.9
- type: accuracy
name: Thai Test accuracy
value: 44.1
- type: accuracy
name: Marathi Test accuracy
value: 89.0
- type: accuracy
name: Basque Test accuracy
value: 71.8
- type: accuracy
name: Slovak Test accuracy
value: 61.3
- type: accuracy
name: Kiche Test accuracy
value: 25.7
- type: accuracy
name: Yoruba Test accuracy
value: 22.8
- type: accuracy
name: Warlpiri Test accuracy
value: 42.9
- type: accuracy
name: Tamil Test accuracy
value: 73.5
- type: accuracy
name: Maltese Test accuracy
value: 26.7
- type: accuracy
name: Ancient Greek Test accuracy
value: 63.5
- type: accuracy
name: Icelandic Test accuracy
value: 64.0
- type: accuracy
name: Mbya Guarani Test accuracy
value: 29.7
- type: accuracy
name: Urdu Test accuracy
value: 50.3
- type: accuracy
name: Romanian Test accuracy
value: 63.3
- type: accuracy
name: Persian Test accuracy
value: 61.0
- type: accuracy
name: Apurina Test accuracy
value: 38.4
- type: accuracy
name: Japanese Test accuracy
value: 40.5
- type: accuracy
name: Hungarian Test accuracy
value: 69.4
- type: accuracy
name: Hindi Test accuracy
value: 52.7
- type: accuracy
name: Classical Chinese Test accuracy
value: 32.4
- type: accuracy
name: Komi Permyak Test accuracy
value: 50.1
- type: accuracy
name: Faroese Test accuracy
value: 58.0
- type: accuracy
name: Sanskrit Test accuracy
value: 34.1
- type: accuracy
name: Livvi Test accuracy
value: 65.3
- type: accuracy
name: Arabic Test accuracy
value: 55.9
- type: accuracy
name: Wolof Test accuracy
value: 27.8
- type: accuracy
name: Bulgarian Test accuracy
value: 63.2
- type: accuracy
name: Akuntsu Test accuracy
value: 23.1
- type: accuracy
name: Makurap Test accuracy
value: 17.1
- type: accuracy
name: Kangri Test accuracy
value: 48.8
- type: accuracy
name: Breton Test accuracy
value: 50.8
- type: accuracy
name: Telugu Test accuracy
value: 82.0
- type: accuracy
name: Cantonese Test accuracy
value: 52.5
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 42.8
- type: accuracy
name: Karelian Test accuracy
value: 61.8
- type: accuracy
name: Upper Sorbian Test accuracy
value: 54.1
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 55.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 47.0
- type: accuracy
name: Irish Test accuracy
value: 50.1
- type: accuracy
name: Nayini Test accuracy
value: 48.7
- type: accuracy
name: Munduruku Test accuracy
value: 18.6
- type: accuracy
name: Manx Test accuracy
value: 31.1
- type: accuracy
name: Skolt Sami Test accuracy
value: 40.8
- type: accuracy
name: Afrikaans Test accuracy
value: 66.4
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 29.9
- type: accuracy
name: Belarusian Test accuracy
value: 65.4
- type: accuracy
name: Serbian Test accuracy
value: 62.6
- type: accuracy
name: Moksha Test accuracy
value: 46.8
- type: accuracy
name: Western Armenian Test accuracy
value: 70.6
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 47.4
- type: accuracy
name: Khunsari Test accuracy
value: 45.9
- type: accuracy
name: Hebrew Test accuracy
value: 77.1
- type: accuracy
name: Uyghur Test accuracy
value: 73.2
- type: accuracy
name: Chukchi Test accuracy
value: 33.5
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Marathi
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-mr")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-mr")
```
|
Dbluciferm3737/U | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
language:
- sl
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-sl
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 81.7
- type: accuracy
name: Dutch Test accuracy
value: 83.1
- type: accuracy
name: German Test accuracy
value: 81.2
- type: accuracy
name: Italian Test accuracy
value: 81.3
- type: accuracy
name: French Test accuracy
value: 79.9
- type: accuracy
name: Spanish Test accuracy
value: 84.9
- type: accuracy
name: Russian Test accuracy
value: 91.5
- type: accuracy
name: Swedish Test accuracy
value: 86.0
- type: accuracy
name: Norwegian Test accuracy
value: 78.4
- type: accuracy
name: Danish Test accuracy
value: 83.7
- type: accuracy
name: Low Saxon Test accuracy
value: 41.9
- type: accuracy
name: Akkadian Test accuracy
value: 17.3
- type: accuracy
name: Armenian Test accuracy
value: 84.3
- type: accuracy
name: Welsh Test accuracy
value: 65.5
- type: accuracy
name: Old East Slavic Test accuracy
value: 74.1
- type: accuracy
name: Albanian Test accuracy
value: 76.6
- type: accuracy
name: Slovenian Test accuracy
value: 97.6
- type: accuracy
name: Guajajara Test accuracy
value: 22.5
- type: accuracy
name: Kurmanji Test accuracy
value: 75.7
- type: accuracy
name: Turkish Test accuracy
value: 75.4
- type: accuracy
name: Finnish Test accuracy
value: 81.2
- type: accuracy
name: Indonesian Test accuracy
value: 81.8
- type: accuracy
name: Ukrainian Test accuracy
value: 92.6
- type: accuracy
name: Polish Test accuracy
value: 93.2
- type: accuracy
name: Portuguese Test accuracy
value: 84.0
- type: accuracy
name: Kazakh Test accuracy
value: 79.4
- type: accuracy
name: Latin Test accuracy
value: 76.7
- type: accuracy
name: Old French Test accuracy
value: 40.3
- type: accuracy
name: Buryat Test accuracy
value: 53.1
- type: accuracy
name: Kaapor Test accuracy
value: 11.2
- type: accuracy
name: Korean Test accuracy
value: 61.9
- type: accuracy
name: Estonian Test accuracy
value: 82.2
- type: accuracy
name: Croatian Test accuracy
value: 93.1
- type: accuracy
name: Gothic Test accuracy
value: 6.2
- type: accuracy
name: Swiss German Test accuracy
value: 40.7
- type: accuracy
name: Assyrian Test accuracy
value: 14.6
- type: accuracy
name: North Sami Test accuracy
value: 22.5
- type: accuracy
name: Naija Test accuracy
value: 33.9
- type: accuracy
name: Latvian Test accuracy
value: 86.0
- type: accuracy
name: Chinese Test accuracy
value: 39.7
- type: accuracy
name: Tagalog Test accuracy
value: 72.0
- type: accuracy
name: Bambara Test accuracy
value: 23.5
- type: accuracy
name: Lithuanian Test accuracy
value: 87.3
- type: accuracy
name: Galician Test accuracy
value: 82.5
- type: accuracy
name: Vietnamese Test accuracy
value: 67.3
- type: accuracy
name: Greek Test accuracy
value: 79.7
- type: accuracy
name: Catalan Test accuracy
value: 79.0
- type: accuracy
name: Czech Test accuracy
value: 94.1
- type: accuracy
name: Erzya Test accuracy
value: 40.1
- type: accuracy
name: Bhojpuri Test accuracy
value: 46.5
- type: accuracy
name: Thai Test accuracy
value: 53.2
- type: accuracy
name: Marathi Test accuracy
value: 87.7
- type: accuracy
name: Basque Test accuracy
value: 74.6
- type: accuracy
name: Slovak Test accuracy
value: 95.5
- type: accuracy
name: Kiche Test accuracy
value: 24.7
- type: accuracy
name: Yoruba Test accuracy
value: 17.1
- type: accuracy
name: Warlpiri Test accuracy
value: 27.5
- type: accuracy
name: Tamil Test accuracy
value: 83.4
- type: accuracy
name: Maltese Test accuracy
value: 18.4
- type: accuracy
name: Ancient Greek Test accuracy
value: 60.8
- type: accuracy
name: Icelandic Test accuracy
value: 80.0
- type: accuracy
name: Mbya Guarani Test accuracy
value: 23.7
- type: accuracy
name: Urdu Test accuracy
value: 61.6
- type: accuracy
name: Romanian Test accuracy
value: 82.4
- type: accuracy
name: Persian Test accuracy
value: 78.6
- type: accuracy
name: Apurina Test accuracy
value: 29.2
- type: accuracy
name: Japanese Test accuracy
value: 25.5
- type: accuracy
name: Hungarian Test accuracy
value: 74.6
- type: accuracy
name: Hindi Test accuracy
value: 67.4
- type: accuracy
name: Classical Chinese Test accuracy
value: 14.8
- type: accuracy
name: Komi Permyak Test accuracy
value: 40.3
- type: accuracy
name: Faroese Test accuracy
value: 75.0
- type: accuracy
name: Sanskrit Test accuracy
value: 14.3
- type: accuracy
name: Livvi Test accuracy
value: 58.2
- type: accuracy
name: Arabic Test accuracy
value: 79.8
- type: accuracy
name: Wolof Test accuracy
value: 24.7
- type: accuracy
name: Bulgarian Test accuracy
value: 90.4
- type: accuracy
name: Akuntsu Test accuracy
value: 20.6
- type: accuracy
name: Makurap Test accuracy
value: 6.2
- type: accuracy
name: Kangri Test accuracy
value: 44.2
- type: accuracy
name: Breton Test accuracy
value: 53.2
- type: accuracy
name: Telugu Test accuracy
value: 83.4
- type: accuracy
name: Cantonese Test accuracy
value: 48.9
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 41.9
- type: accuracy
name: Karelian Test accuracy
value: 64.7
- type: accuracy
name: Upper Sorbian Test accuracy
value: 79.9
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 67.2
- type: accuracy
name: Komi Zyrian Test accuracy
value: 33.3
- type: accuracy
name: Irish Test accuracy
value: 63.0
- type: accuracy
name: Nayini Test accuracy
value: 32.1
- type: accuracy
name: Munduruku Test accuracy
value: 10.1
- type: accuracy
name: Manx Test accuracy
value: 22.0
- type: accuracy
name: Skolt Sami Test accuracy
value: 27.4
- type: accuracy
name: Afrikaans Test accuracy
value: 74.0
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 22.5
- type: accuracy
name: Belarusian Test accuracy
value: 90.2
- type: accuracy
name: Serbian Test accuracy
value: 94.4
- type: accuracy
name: Moksha Test accuracy
value: 37.6
- type: accuracy
name: Western Armenian Test accuracy
value: 73.8
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 55.0
- type: accuracy
name: Khunsari Test accuracy
value: 32.4
- type: accuracy
name: Hebrew Test accuracy
value: 81.2
- type: accuracy
name: Uyghur Test accuracy
value: 72.1
- type: accuracy
name: Chukchi Test accuracy
value: 30.2
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Slovenian
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sl")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sl")
```
|
DeBERTa/deberta-v2-xxlarge | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
language:
- sr
license: apache-2.0
library_name: transformers
tags:
- part-of-speech
- token-classification
datasets:
- universal_dependencies
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-ft-udpos28-sr
results:
- task:
type: token-classification
name: Part-of-Speech Tagging
dataset:
type: universal_dependencies
name: Universal Dependencies v2.8
metrics:
- type: accuracy
name: English Test accuracy
value: 82.9
- type: accuracy
name: Dutch Test accuracy
value: 84.0
- type: accuracy
name: German Test accuracy
value: 82.7
- type: accuracy
name: Italian Test accuracy
value: 82.6
- type: accuracy
name: French Test accuracy
value: 83.6
- type: accuracy
name: Spanish Test accuracy
value: 87.3
- type: accuracy
name: Russian Test accuracy
value: 90.6
- type: accuracy
name: Swedish Test accuracy
value: 85.5
- type: accuracy
name: Norwegian Test accuracy
value: 79.0
- type: accuracy
name: Danish Test accuracy
value: 84.1
- type: accuracy
name: Low Saxon Test accuracy
value: 47.9
- type: accuracy
name: Akkadian Test accuracy
value: 30.2
- type: accuracy
name: Armenian Test accuracy
value: 84.2
- type: accuracy
name: Welsh Test accuracy
value: 67.4
- type: accuracy
name: Old East Slavic Test accuracy
value: 75.9
- type: accuracy
name: Albanian Test accuracy
value: 74.6
- type: accuracy
name: Slovenian Test accuracy
value: 85.8
- type: accuracy
name: Guajajara Test accuracy
value: 25.6
- type: accuracy
name: Kurmanji Test accuracy
value: 75.8
- type: accuracy
name: Turkish Test accuracy
value: 76.2
- type: accuracy
name: Finnish Test accuracy
value: 81.7
- type: accuracy
name: Indonesian Test accuracy
value: 80.5
- type: accuracy
name: Ukrainian Test accuracy
value: 92.3
- type: accuracy
name: Polish Test accuracy
value: 91.8
- type: accuracy
name: Portuguese Test accuracy
value: 84.7
- type: accuracy
name: Kazakh Test accuracy
value: 79.7
- type: accuracy
name: Latin Test accuracy
value: 77.0
- type: accuracy
name: Old French Test accuracy
value: 54.3
- type: accuracy
name: Buryat Test accuracy
value: 58.6
- type: accuracy
name: Kaapor Test accuracy
value: 14.6
- type: accuracy
name: Korean Test accuracy
value: 60.6
- type: accuracy
name: Estonian Test accuracy
value: 84.4
- type: accuracy
name: Croatian Test accuracy
value: 97.0
- type: accuracy
name: Gothic Test accuracy
value: 17.1
- type: accuracy
name: Swiss German Test accuracy
value: 42.9
- type: accuracy
name: Assyrian Test accuracy
value: 16.1
- type: accuracy
name: North Sami Test accuracy
value: 31.2
- type: accuracy
name: Naija Test accuracy
value: 38.7
- type: accuracy
name: Latvian Test accuracy
value: 85.1
- type: accuracy
name: Chinese Test accuracy
value: 41.3
- type: accuracy
name: Tagalog Test accuracy
value: 77.5
- type: accuracy
name: Bambara Test accuracy
value: 27.6
- type: accuracy
name: Lithuanian Test accuracy
value: 85.3
- type: accuracy
name: Galician Test accuracy
value: 84.9
- type: accuracy
name: Vietnamese Test accuracy
value: 65.8
- type: accuracy
name: Greek Test accuracy
value: 83.9
- type: accuracy
name: Catalan Test accuracy
value: 85.7
- type: accuracy
name: Czech Test accuracy
value: 94.8
- type: accuracy
name: Erzya Test accuracy
value: 43.1
- type: accuracy
name: Bhojpuri Test accuracy
value: 47.9
- type: accuracy
name: Thai Test accuracy
value: 60.5
- type: accuracy
name: Marathi Test accuracy
value: 84.0
- type: accuracy
name: Basque Test accuracy
value: 74.9
- type: accuracy
name: Slovak Test accuracy
value: 94.6
- type: accuracy
name: Kiche Test accuracy
value: 31.5
- type: accuracy
name: Yoruba Test accuracy
value: 21.8
- type: accuracy
name: Warlpiri Test accuracy
value: 37.7
- type: accuracy
name: Tamil Test accuracy
value: 83.9
- type: accuracy
name: Maltese Test accuracy
value: 22.7
- type: accuracy
name: Ancient Greek Test accuracy
value: 59.0
- type: accuracy
name: Icelandic Test accuracy
value: 79.6
- type: accuracy
name: Mbya Guarani Test accuracy
value: 29.4
- type: accuracy
name: Urdu Test accuracy
value: 63.0
- type: accuracy
name: Romanian Test accuracy
value: 82.1
- type: accuracy
name: Persian Test accuracy
value: 78.7
- type: accuracy
name: Apurina Test accuracy
value: 30.1
- type: accuracy
name: Japanese Test accuracy
value: 28.7
- type: accuracy
name: Hungarian Test accuracy
value: 78.4
- type: accuracy
name: Hindi Test accuracy
value: 66.6
- type: accuracy
name: Classical Chinese Test accuracy
value: 27.3
- type: accuracy
name: Komi Permyak Test accuracy
value: 40.2
- type: accuracy
name: Faroese Test accuracy
value: 76.1
- type: accuracy
name: Sanskrit Test accuracy
value: 32.5
- type: accuracy
name: Livvi Test accuracy
value: 62.6
- type: accuracy
name: Arabic Test accuracy
value: 80.9
- type: accuracy
name: Wolof Test accuracy
value: 30.7
- type: accuracy
name: Bulgarian Test accuracy
value: 92.2
- type: accuracy
name: Akuntsu Test accuracy
value: 32.6
- type: accuracy
name: Makurap Test accuracy
value: 12.3
- type: accuracy
name: Kangri Test accuracy
value: 44.4
- type: accuracy
name: Breton Test accuracy
value: 58.0
- type: accuracy
name: Telugu Test accuracy
value: 77.8
- type: accuracy
name: Cantonese Test accuracy
value: 44.9
- type: accuracy
name: Old Church Slavonic Test accuracy
value: 45.4
- type: accuracy
name: Karelian Test accuracy
value: 69.8
- type: accuracy
name: Upper Sorbian Test accuracy
value: 77.5
- type: accuracy
name: South Levantine Arabic Test accuracy
value: 66.8
- type: accuracy
name: Komi Zyrian Test accuracy
value: 36.1
- type: accuracy
name: Irish Test accuracy
value: 67.9
- type: accuracy
name: Nayini Test accuracy
value: 44.9
- type: accuracy
name: Munduruku Test accuracy
value: 19.2
- type: accuracy
name: Manx Test accuracy
value: 33.1
- type: accuracy
name: Skolt Sami Test accuracy
value: 33.0
- type: accuracy
name: Afrikaans Test accuracy
value: 79.6
- type: accuracy
name: Old Turkish Test accuracy
value: 37.1
- type: accuracy
name: Tupinamba Test accuracy
value: 31.4
- type: accuracy
name: Belarusian Test accuracy
value: 91.0
- type: accuracy
name: Serbian Test accuracy
value: 99.1
- type: accuracy
name: Moksha Test accuracy
value: 40.2
- type: accuracy
name: Western Armenian Test accuracy
value: 75.8
- type: accuracy
name: Scottish Gaelic Test accuracy
value: 57.1
- type: accuracy
name: Khunsari Test accuracy
value: 32.4
- type: accuracy
name: Hebrew Test accuracy
value: 88.5
- type: accuracy
name: Uyghur Test accuracy
value: 71.0
- type: accuracy
name: Chukchi Test accuracy
value: 29.3
---
# XLM-RoBERTa base Universal Dependencies v2.8 POS tagging: Serbian
This model is part of our paper called:
- Make the Best of Cross-lingual Transfer: Evidence from POS Tagging with over 100 Languages
Check the [Space](https://huggingface.co/spaces/wietsedv/xpos) for more details.
## Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sr")
model = AutoModelForTokenClassification.from_pretrained("wietsedv/xlm-roberta-base-ft-udpos28-sr")
```
|
Declan/CNN_model_v7 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: "en"
tags:
- dstc10
- knowledge title-body validation
widget:
- text: "Can you accommodate large groups? It does not offer free WiFi."
- text: "Is there a gym on site? It does not have an onsite fitness center."
---
This is the model used for knowledge clustering where we feed title-body pair and the classifier predicts if the pair is valid or not.
For further information, please refer to https://github.com/yctam/dstc10_track2_task2 for the Github repository.
Credit: Jiakai Zou, Wilson Tam
---
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForSequenceClassification
def single_test(tokenizer, title_body_pair):
result = tokenizer([title_body_pair], return_tensors="pt")
model.eval()
outputs = model(**result)
predictions = outputs.logits.argmax(dim=-1)
# There was a mistake in flipping the labels.
return True if predictions == 0 else False
if __name__ == '__main__':
model_name = "wilsontam/bert-base-uncased-dstc10-kb-title-body-validate"
config = AutoConfig.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForSequenceClassification.from_pretrained(".")
sentence = "Can I check in anytime?"
body = "Yes, 24 Hours Front Desk Avaliable."
print(single_test((sentence, body))) # Expect: True
``` |
Declan/CNN_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: "en"
tags:
- dstc10
- knowledge cluster classifier
widget:
- text: "oh and we'll mi thing uh is there bike clo ars or bike crac where i can park my thee"
- text: "oh and one more thing uhhh is there bike lockers or a bike rack where i can park my bike"
- text: "ni yeah that sounds great ummm dold you have the any idea er could you check for me if there's hat three wifie available there"
- text: "nice yeah that sounds great ummm do you have any idea or could you check for me if there's uhhh free wi-fi available there"
- text: "perfect and what is the check kin time for that"
---
This is the model used for knowledge cluster classification for the DSTC10 track2 knowledge selection task, trained with double heads, i.e., classifier head and LM head using ASR error simulator for model training.
For further information, please refer to https://github.com/yctam/dstc10_track2_task2 for the Github repository. You can use this model and use our source code to predict knowledge clusters under ASR errors. AAAI 2022 workshop paper: https://github.com/shanemoon/dstc10/raw/main/papers/dstc10_aaai22_track2_21.pdf
--- |
Declan/ChicagoTribune_model_v1 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: "en"
tags:
- dstc10
widget:
- text: "Can you accommodate large [MASK] ?"
---
# Goal
This Bert model is trained using DSTC9 training + validation data for dialogue modeling purpose.
Data link: https://github.com/alexa/alexa-with-dstc9-track1-dataset
Credit: Shuhan Yuan, Wilson Tam |
Declan/ChicagoTribune_model_v2 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: "en"
tags:
- dstc9
widget:
- text: "Yes, I'm going to be in Chinatown, San Francisco and am looking"
- text: "Can you find me one that is in the"
---
This GPT2 model is trained using DSTC9 data for dialogue modeling purpose.
Data link: https://github.com/alexa/alexa-with-dstc9-track1-dataset
Credit: Jia-Chen Jason Gu, Wilson Tam
|
Declan/ChicagoTribune_model_v6 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null |
---
language: tr
widget:
- text: "Mustafa Kemal Atatรผrk 19 Mayฤฑs 1919'da Samsun'a รงฤฑktฤฑ."
---
# Turkish Named Entity Recognition (NER) Model
## This repository is cloned from https://huggingface.co/akdeniz27/bert-base-turkish-cased-ner. This is the tensorflow version.
This model is the fine-tuned model of "dbmdz/bert-base-turkish-cased"
using a reviewed version of well known Turkish NER dataset
(https://github.com/stefan-it/turkish-bert/files/4558187/nerdata.txt).
# Fine-tuning parameters:
```
task = "ner"
model_checkpoint = "dbmdz/bert-base-turkish-cased"
batch_size = 8
label_list = ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
max_length = 512
learning_rate = 2e-5
num_train_epochs = 3
weight_decay = 0.01
```
# How to use:
```
model = AutoModelForTokenClassification.from_pretrained("winvoker/bert-base-turkish-cased-ner-tf")
tokenizer = AutoTokenizer.from_pretrained("winvoker/bert-base-turkish-cased-ner-tf")
ner = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="first")
ner("<your text here>")
```
Pls refer "https://huggingface.co/transformers/_modules/transformers/pipelines/token_classification.html" for entity grouping with aggregation_strategy parameter.
# Reference test results:
* accuracy: 0.9933935699477056
* f1: 0.9592969472710453
* precision: 0.9543530277931161
* recall: 0.9642923563325274
Evaluation results with the test sets proposed in ["Kรผรงรผk, D., Kรผรงรผk, D., Arฤฑcฤฑ, N. 2016. Tรผrkรงe Varlฤฑk ฤฐsmi Tanฤฑma iรงin bir Veri Kรผmesi ("A Named Entity Recognition Dataset for Turkish"). IEEE Sinyal ฤฐลleme, ฤฐletiลim ve Uygulamalarฤฑ Kurultayฤฑ. Zonguldak, Tรผrkiye."](https://ieeexplore.ieee.org/document/7495744) paper.
* Test Set Acc. Prec. Rec. F1-Score
* 20010000 0.9946 0.9871 0.9463 0.9662
* 20020000 0.9928 0.9134 0.9206 0.9170
* 20030000 0.9942 0.9814 0.9186 0.9489
* 20040000 0.9943 0.9660 0.9522 0.9590
* 20050000 0.9971 0.9539 0.9932 0.9732
* 20060000 0.9993 0.9942 0.9942 0.9942
* 20070000 0.9970 0.9806 0.9439 0.9619
* 20080000 0.9988 0.9821 0.9649 0.9735
* 20090000 0.9977 0.9891 0.9479 0.9681
* 20100000 0.9961 0.9684 0.9293 0.9485
* Overall 0.9961 0.9720 0.9516 0.9617 |
Declan/ChicagoTribune_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: ar
datasets:
- tydiqa
widget:
- text: "ู
ุง ูู ูุธุงู
ุงูุญูู
ูู ูุจูุงูุ"
context: "ูุจูุงู ุฃู (ุฑุณู
ูุง: ุงูุฌู
ููุฑูุฉ ุงููุจูุงููุฉ)ุ ูู ุฏููุฉ ุนุฑุจูุฉ ูุงูุนุฉ ูู ุงูุดุฑู ุงูุฃูุณุท ูู ุบุฑุจ ุงููุงุฑุฉ ุงูุขุณูููุฉ. ุชุญุฏูุง ุณูุฑูุง ู
ู ุงูุดู
ุงู ู ุงูุดุฑูุ ู ููุณุทูู ุงูู
ุญุชูุฉ - ุฅุณุฑุงุฆูู ู
ู ุงูุฌููุจุ ูุชุทู ู
ู ุฌูุฉ ุงูุบุฑุจ ุนูู ุงูุจุญุฑ ุงูุฃุจูุถ ุงูู
ุชูุณุท. ูู ุจูุฏ ุฏูู
ูุฑุงุทู ุฌู
ููุฑู ุทูุงุฆูู. ู
ุนุธู
ุณูุงูู ู
ู ุงูุนุฑุจ ุงูู
ุณูู
ูู ู ุงูู
ุณูุญููู. ูุจุฎูุงู ุบุงูุจูุฉ ุงูุฏูู ุงูุนุฑุจูุฉ ููุงู ูุฌูุฏ ูุนุงู ููู
ุณูุญููู ูู ุงูุญูุงุฉ ุงูุนุงู
ุฉ ูุงูุณูุงุณูุฉ. ูุงุฌุฑ ูุงูุชุดุฑ ุฃุจูุงุคู ุญูู ุงูุนุงูู
ู
ูุฐ ุฃูุงู
ุงูููููููููุ ูุญุงููุง ูุฅู ุนุฏุฏ ุงููุจูุงูููู ุงูู
ูุงุฌุฑูู ููุฏุฑ ุจุถุนู ุนุฏุฏ ุงููุจูุงูููู ุงูู
ููู
ูู. ูุงุฌู ูุจูุงู ู
ูุฐ ุงููุฏู
ุชุนุฏุฏ ุงูุญุถุงุฑุงุช ุงูุชู ุนุจุฑุช ููู ุฃู ุงุญุชูุช ุฃุฑุงุถูู ูุฐูู ูู
ููุนู ุงููุณุทู ุจูู ุงูุดู
ุงู ุงูุฃูุฑูุจู ูุงูุฌููุจ ุงูุนุฑุจู ูุงูุดุฑู ุงูุขุณููู ูุงูุบุฑุจ ุงูุฃูุฑูููุ ููุนุฏ ูุฐุง ุงูู
ููุน ุงูู
ุชูุณุท ู
ู ุฃุจุฑุฒ ุงูุฃุณุจุงุจ ูุชููุน ุงูุซูุงูุงุช ูู ูุจูุงูุ ููู ุงูููุช ุฐุงุชู ู
ู ุงูุฃุณุจุงุจ ุงูู
ุคุฏูุฉ ููุญุฑูุจ ูุงููุฒุงุนุงุช ุนูู ู
ุฑ ุงูุนุตูุฑ ุชุฌูุช ุจุญุฑูุจ ุฃูููุฉ ููุฒุงุน ู
ุตูุฑู ู
ุน ุฅุณุฑุงุฆูู. ููุนูุฏ ุฃูุฏู
ุฏููู ุนูู ุงุณุชูุทุงู ุงูุฅูุณุงู ูู ูุจูุงู ููุดูุก ุญุถุงุฑุฉ ุนูู ุฃุฑุถู ุฅูู ุฃูุซุฑ ู
ู 7000 ุณูุฉ. ูู ุงููุฏู
ุ ุณูู ุงููููููููู ุฃุฑุถ ูุจูุงู ุงูุญุงููุฉ ู
ุน ุฌุฒุก ู
ู ุฃุฑุถ ุณูุฑูุง ู ููุณุทููุ ููุคูุงุก ููู
ุณุงู
ููู ุงุชุฎุฐูุง ู
ู ุงูู
ูุงุญุฉ ูุงูุชุฌุงุฑุฉ ู
ููุฉ ููู
ุ ูุงุฒุฏูุฑุช ุญุถุงุฑุชูู
ุทููุฉ 2500 ุณูุฉ ุชูุฑูุจุง (ู
ู ุญูุงูู ุณูุฉ 3000 ุญุชู ุณูุฉ 539 ู.ู
). ููุฏ ู
ุฑุช ุนูู ูุจูุงู ุนุฏุฉ ุญุถุงุฑุงุช ูุดุนูุจ ุงุณุชูุฑุช ููู ู
ูุฐ ุนูุฏ ุงููููููููุ ู
ุซู ุงูู
ุตุฑููู ุงููุฏู
ุงุกุ ุงูุขุดูุฑูููุ ุงููุฑุณุ ุงูุฅุบุฑููุ ุงูุฑูู
ุงูุ ุงูุฑูู
ุงูุจูุฒูุทูููุ ุงูุนุฑุจุ ุงูุตููุจูููุ ุงูุฃุชุฑุงู ุงูุนุซู
ุงููููุ ูุงููุฑูุณููู."
---
<img src="https://raw.githubusercontent.com/WissamAntoun/arabic-wikipedia-qa-streamlit/main/is2alni_logo.png" width="150" align="center"/>
# Arabic QA
AraELECTRA powered Arabic Wikipedia QA system with Streamlit [](https://share.streamlit.io/wissamantoun/arabic-wikipedia-qa-streamlit/main)
This model is trained on the Arabic section of ArTyDiQA using the colab here [](https://colab.research.google.com/drive/1hik0L_Dxg6WwJFcDPP1v74motSkst4gE?usp=sharing)
# How to use:
```bash
git clone https://github.com/aub-mind/arabert
pip install pyarabic
```
```python
from arabert.preprocess import ArabertPreprocessor
from transformers import pipeline
prep = ArabertPreprocessor("aubmindlab/araelectra-base-discriminator") #or empty string it's the same
qa_pipe =pipeline("question-answering",model="wissamantoun/araelectra-base-artydiqa")
text = " ู
ุง ูู ูุธุงู
ุงูุญูู
ูู ูุจูุงูุ"
context = """
ูุจูุงู ุฃู (ุฑุณู
ูููุง: ุงูุฌูู
ููููุฑููููุฉ ุงููุจูุงููููุฉ)ุ ูู ุฏููุฉ ุนุฑุจููุฉ ูุงููุนูุฉ ูู ุงูุดูุฑู ุงูุฃูุณุท ูู ุบุฑุจ ุงููุงุฑุฉ ุงูุขุณููููุฉ. ุชูุญูุฏููุง ุณูุฑูุง ู
ู ุงูุดู
ุงู ูโุงูุดุฑูุ ูโููุณุทูู ุงูู
ุญุชูุฉ - ุฅุณุฑุงุฆูู ู
ู ุงูุฌููุจุ ูุชุทู ู
ู ุฌูุฉ ุงูุบุฑุจ ุนูู ุงูุจุญุฑ ุงูุฃุจูุถ ุงูู
ุชูุณุท. ูู ุจูุฏ ุฏูู
ูุฑุงุทู ุฌู
ููุฑู ุทูุงุฆูู. ู
ูุนุธู
ุณูุงูู ู
ู ุงูุนุฑุจ ุงูู
ุณูู
ูู ูโุงูู
ุณูุญููู. ูุจุฎูุงู ุบุงูุจููุฉ ุงูุฏูู ุงูุนุฑุจููุฉ ููุงู ูุฌูุฏ ูุนูุงู ููู
ุณูุญููู ูู ุงูุญูุงุฉ ุงูุนุงู
ูุฉ ูุงูุณูุงุณููุฉ. ูุงุฌุฑ ูุงูุชุดุฑ ุฃุจูุงุคู ุญูู ุงูุนุงูู
ู
ูุฐ ุฃูุงู
ุงูููููููููุ ูุญุงููููุง ูุฅู ุนุฏุฏ ุงููุจูุงูููู ุงูู
ูุงุฌุฑูู ูููุฏููุฑ ุจุถุนู ุนุฏุฏ ุงููุจูุงูููู ุงูู
ููู
ูู.
ูุงุฌู ูุจูุงู ู
ูุฐ ุงููุฏู
ุชุนุฏุฏ ุงูุญุถุงุฑุงุช ุงูุชู ุนุจุฑุช ููู ุฃู ุงุญุชููุช ุฃุฑุงุถูู ูุฐูู ูู
ููุนู ุงููุณุทู ุจูู ุงูุดู
ุงู ุงูุฃูุฑูุจู ูุงูุฌููุจ ุงูุนุฑุจู ูุงูุดุฑู ุงูุขุณููู ูุงูุบุฑุจ ุงูุฃูุฑูููุ ููุนุฏ ูุฐุง ุงูู
ููุน ุงูู
ุชูุณุท ู
ู ุฃุจุฑุฒ ุงูุฃุณุจุงุจ ูุชููุน ุงูุซูุงูุงุช ูู ูุจูุงูุ ููู ุงูููุช ุฐุงุชู ู
ู ุงูุฃุณุจุงุจ ุงูู
ุคุฏูุฉ ููุญุฑูุจ ูุงููุฒุงุนุงุช ุนูู ู
ุฑ ุงูุนุตูุฑ ุชุฌูุช ุจุญุฑูุจ ุฃูููุฉ ููุฒุงุน ู
ุตูุฑู ู
ุน ุฅุณุฑุงุฆูู. ููุนูุฏ ุฃูุฏู
ุฏููู ุนูู ุงุณุชูุทุงู ุงูุฅูุณุงู ูู ูุจูุงู ููุดูุก ุญุถุงุฑุฉ ุนูู ุฃุฑุถู ุฅูู ุฃูุซุฑ ู
ู 7000 ุณูุฉ.
ูู ุงููุฏู
ุ ุณูู ุงููููููููู ุฃุฑุถ ูุจูุงู ุงูุญุงููุฉ ู
ุน ุฌุฒุก ู
ู ุฃุฑุถ ุณูุฑูุง ูโููุณุทููุ ููุคูุงุก ููู
ุณุงู
ููู ุงุชุฎุฐูุง ู
ู ุงูู
ูุงุญุฉ ูุงูุชุฌุงุฑุฉ ู
ููุฉ ููู
ุ ูุงุฒุฏูุฑุช ุญุถุงุฑุชูู
ุทููุฉ 2500 ุณูุฉ ุชูุฑูุจูุง (ู
ู ุญูุงูู ุณูุฉ 3000 ุญุชู ุณูุฉ 539 ู.ู
). ููุฏ ู
ุฑูุช ุนูู ูุจูุงู ุนุฏูุฉ ุญุถุงุฑุงุช ูุดุนูุจ ุงุณุชูุฑุช ููู ู
ูุฐ ุนูุฏ ุงููููููููุ ู
ุซู ุงูู
ุตุฑููู ุงููุฏู
ุงุกุ ุงูุขุดูุฑูููุ ุงููุฑุณุ ุงูุฅุบุฑููุ ุงูุฑูู
ุงูุ ุงูุฑูู
ุงูุจูุฒูุทูููุ ุงูุนุฑุจุ ุงูุตููุจูููุ ุงูุฃุชุฑุงู ุงูุนุซู
ุงููููุ ูุงููุฑูุณููู.
"""
context = prep.preprocess(context)# don't forget to preprocess the question and the context to get the optimal results
result = qa_pipe(question=text,context=context)
"""
{'answer': 'ุฏูู
ูุฑุงุทู ุฌู
ููุฑู ุทูุงุฆูู',
'end': 241,
'score': 0.4910127818584442,
'start': 219}
"""
```
# If you used this model please cite us as :
```
@misc{antoun2020araelectra,
title={AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding},
author={Wissam Antoun and Fady Baly and Hazem Hajj},
year={2020},
eprint={2012.15516},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Declan/NPR_model_v8 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- conversational
---
# DialoGPT Trained on the Speech of Fox Mulder from The X-Files |
Declan/NewYorkPost_model_v1 | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- nl
tags:
- Named Entity Recognition
- xlm-roberta
datasets:
- conll2002
metrics:
- f1: 90.57
---
# XLM-RoBERTa base ConLL-2002 Dutch
XLM-Roberta base model finetuned on ConLL-2002 Dutch train set, which is a Named Entity Recognition dataset containing the following classes: PER, LOC, ORG and MISC.
Label mapping:
{
0: O,
1: B-PER,
2: I-PER,
3: B-ORG,
4: I-ORG,
5: B-LOC,
6: I-LOC,
7: B-MISC,
8: I-MISC,
}
Results from https://arxiv.org/pdf/1911.02116.pdf reciprocated (original results were 90.39 F1, this finetuned version here scored 90.57). |
Declan/Reuters_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | Pretrained on:
* Masked amino acid modeling
Please see our [main model](https://huggingface.co/wukevin/tcr-bert) for additional details. |
Declan/Reuters_model_v5 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | # TCR transformer model
See our full [codebase](https://github.com/wukevin/tcr-bert) and our [preprint](https://www.biorxiv.org/content/10.1101/2021.11.18.469186v1) for more information.
This model is on:
- Masked language modeling (masked amino acid or MAA modeling)
- Classification across antigen labels from PIRD
If you are looking for a model trained only on MAA, please see our [other model](https://huggingface.co/wukevin/tcr-bert-mlm-only).
Example inputs:
* `C A S S P V T G G I Y G Y T F` (binds to NLVPMVATV CMV antigen)
* `C A T S G R A G V E Q F F` (binds to GILGFVFTL flu antigen) |
DeepPavlov/rubert-base-cased | [
"pytorch",
"jax",
"bert",
"feature-extraction",
"ru",
"arxiv:1905.07213",
"transformers",
"has_space"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 148,127 | null |
# Delish v6 (GPT-Neo 1.3B)
This model is from the DelishBot project.
|
DeepPavlov/xlm-roberta-large-en-ru-mnli | [
"pytorch",
"xlm-roberta",
"text-classification",
"en",
"ru",
"dataset:glue",
"dataset:mnli",
"transformers",
"xlm-roberta-large",
"xlm-roberta-large-en-ru",
"xlm-roberta-large-en-ru-mnli",
"has_space"
] | text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 227 | null |
# GPT NEO 350M
This hosts the pulled 350M that Eleuther removed. I am keeping it ๐ |
DeividasM/wav2vec2-large-xlsr-53-lithuanian | [
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"lt",
"dataset:common_voice",
"transformers",
"audio",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0",
"model-index"
] | automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | Step Training Loss Validation Loss Rouge2 Precision Rouge2 Recall Rouge2 Fmeasure
240 2.513600 3.049892 0.082800 0.102600 0.085700
240 steps |
DeltaHub/adapter_t5-3b_mrpc | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | \nTraining Loss Validation Loss Rouge2 Precision Rouge2 Recall Rouge2 Fmeasure
2.880900 2.715085 0.121400 0.142300 0.117100
+200 steps
total = 440 steps
tokenization:
max article: 8192
max abstract: 512 |
DeltaHub/adapter_t5-3b_qnli | [
"pytorch",
"transformers"
] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | Step Training Loss Validation Loss Rouge2 Precision Rouge2 Recall Rouge2 Fmeasure
100 3.049500 2.605496 0.172300 0.186900 0.151200
200 3.019400 2.567277 0.165100 0.189400 0.145000
300 3.014400 2.538830 0.157000 0.179200 0.134200
400 2.867200 2.490068 0.163600 0.177100 0.136200
500 2.723700 2.465870 0.168400 0.195700 0.152300
600 2.925400 2.452575 0.169500 0.210100 0.159400
700 2.878900 2.440204 0.173400 0.198000 0.155800
800 3.156500 2.423908 0.172900 0.196300 0.152800
+ 440 steps before
total = 1240 steps |
Denny29/DialoGPT-medium-asunayuuki | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
tags:
- conversational
---
# Joseph Joestar DialoGPT Model |
DevsIA/imagenes | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | This is super resolution model to upscale anime like illustration image by 4x.
This model can upscale 256x256 image to 1024x1024 within around 20[ms] on GPU and around 250[ms] on CPU.
Example is [here](https://github.com/xiong-jie-y/ml-examples/tree/master/realtime_srgan_anime).
All the models in this repository is under MIT License. |
Dhritam/Zova-bot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9392329403951519
- name: Recall
type: recall
value: 0.9520363513968361
- name: F1
type: f1
value: 0.9455913079816131
- name: Accuracy
type: accuracy
value: 0.9864308000235474
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0634
- Precision: 0.9392
- Recall: 0.9520
- F1: 0.9456
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0866 | 1.0 | 1756 | 0.0736 | 0.9157 | 0.9322 | 0.9239 | 0.9816 |
| 0.0382 | 2.0 | 3512 | 0.0663 | 0.9326 | 0.9472 | 0.9398 | 0.9855 |
| 0.0226 | 3.0 | 5268 | 0.0634 | 0.9392 | 0.9520 | 0.9456 | 0.9864 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
|
DicoTiar/wisdomfiy | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb-whole-word-masking
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb-whole-word-masking
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5536 | 1.0 | 157 | 3.3242 |
| 3.4026 | 2.0 | 314 | 3.2848 |
| 3.3708 | 3.0 | 471 | 3.2791 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.0
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
DimaOrekhov/cubert-method-name | [
"pytorch",
"encoder-decoder",
"text2text-generation",
"transformers",
"autotrain_compatible"
] | text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
language: ka
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec finetuned for Georgian
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ka
type: common_voice
args: ka
metrics:
- name: Test WER
type: wer
value: 45.28
---
# Wav2Vec2-Large-XLSR-53-Georgian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import librosa
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ka", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
resampler = lambda sampling_rate, y: librosa.resample(y.numpy().squeeze(), sampling_rate, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
\\\\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\\\\tbatch["speech"] = resampler(sampling_rate, speech_array).squeeze()
\\\\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
\\\\tlogits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Georgian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import librosa
test_dataset = load_dataset("common_voice", "ka", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("xsway/wav2vec2-large-xlsr-georgian")
model.to("cuda")
chars_to_ignore_regex = '[\\\\\\\\,\\\\\\\\?\\\\\\\\.\\\\\\\\!\\\\\\\\-\\\\\\\\;\\\\\\\\:\\\\\\\\"\\\\\\\\โ]'
resampler = lambda sampling_rate, y: librosa.resample(y.numpy().squeeze(), sampling_rate, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(sampling_rate, speech_array).squeeze()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 45.28 %
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](...)
|
DivyanshuSheth/T5-Seq2Seq-Final | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: bert-base-uncased-issues-128
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2109
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9845 | 1.0 | 1163 | 1.6403 |
| 1.5695 | 2.0 | 2326 | 1.4212 |
| 1.4221 | 3.0 | 3489 | 1.3714 |
| 1.3302 | 4.0 | 4652 | 1.3592 |
| 1.2734 | 5.0 | 5815 | 1.2781 |
| 1.2143 | 6.0 | 6978 | 1.2286 |
| 1.1704 | 7.0 | 8141 | 1.2492 |
| 1.1261 | 8.0 | 9304 | 1.2044 |
| 1.0812 | 9.0 | 10467 | 1.1878 |
| 1.0657 | 10.0 | 11630 | 1.2177 |
| 1.0319 | 11.0 | 12793 | 1.1428 |
| 1.0063 | 12.0 | 13956 | 1.0910 |
| 0.9731 | 13.0 | 15119 | 1.1111 |
| 0.9674 | 14.0 | 16282 | 1.1699 |
| 0.9391 | 15.0 | 17445 | 1.0805 |
| 0.9381 | 16.0 | 18608 | 1.2109 |
### Framework versions
- Transformers 4.8.0
- Pytorch 1.9.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
Doxophobia/DialoGPT-medium-celeste | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
] | conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null |
---
language: en
tags:
- sagemaker
- bart
- summarization
license: apache-2.0
- Training 3000 examples
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-slanted | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-existence
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-existence
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9532 | 1.0 | 221 | 2.1697 |
| 2.0959 | 2.0 | 442 | 1.9725 |
| 1.9277 | 3.0 | 663 | 1.7944 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-100 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 28 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-mi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-mi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1069 | 1.0 | 97 | 2.3524 |
| 2.1677 | 2.0 | 194 | 1.9426 |
| 1.9197 | 3.0 | 291 | 2.0536 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DoyyingFace/bert-asian-hate-tweets-asian-unclean-warmup-25 | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-quantifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-quantifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7478
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.2007 | 1.0 | 94 | 2.3496 |
| 2.2332 | 2.0 | 188 | 1.8656 |
| 2.0141 | 3.0 | 282 | 1.8479 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.1
- Datasets 1.18.3
- Tokenizers 0.11.0
|
DoyyingFace/bert-asian-hate-tweets-asonam-clean | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | # T5 for Semantic Parsing
## Model description
T5 (small and large) finetuned on CoNaLa for semantic parsing (Natural Language descriptions to Python code)
Paper: https://arxiv.org/pdf/2101.07138.pdf
Code, data and how to use: https://github.com/ypapanik/t5-for-code-generation
### Cite
```
@misc{papanikolaou2021teach,
title={Teach me how to Label: Labeling Functions from Natural Language with Text-to-text Transformers},
author={Yannis Papanikolaou},
year={2021},
eprint={2101.07138},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
DoyyingFace/bert-asian-hate-tweets-concat-clean-with-unclean-valid | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | null | ```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='bert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.1073106899857521,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.08774490654468536,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a new model. [SEP]",
'score': 0.05338378623127937,
'token': 2047,
'token_str': 'new'},
{'sequence': "[CLS] hello i'm a super model. [SEP]",
'score': 0.04667217284440994,
'token': 3565,
'token_str': 'super'},
{'sequence': "[CLS] hello i'm a fine model. [SEP]",
'score': 0.027095865458250046,
'token': 2986,
'token_str': 'fine'}]
```

```python
from transformers import BertTokenizer, BertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained("bert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
``` |
bert-base-chinese | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,377,486 | 2022-02-05T11:56:49Z | ---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- generated_from_trainer
- hf-asr-leaderboard
- mozilla-foundation/common_voice_7_0
- robust-speech-event
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: xls-r-300m-yaswanth-hindi2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-300m-yaswanth-hindi2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7163
- Wer: 0.6951
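As a rough sketch of how such a fine-tuned XLS-R checkpoint is typically used for transcription — the repo id and audio path below are placeholders, not taken from this card:
```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder repo id -- substitute the id under which this checkpoint is hosted.
model_id = "xls-r-300m-yaswanth-hindi2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder audio file; XLS-R expects 16 kHz mono input.
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```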
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0007
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.986 | 4.46 | 500 | 2.0194 | 1.1857 |
| 0.9232 | 8.93 | 1000 | 1.2665 | 0.8435 |
| 0.5094 | 13.39 | 1500 | 1.2473 | 0.7893 |
| 0.3618 | 17.86 | 2000 | 1.3675 | 0.7789 |
| 0.2914 | 22.32 | 2500 | 1.3725 | 0.7914 |
| 0.2462 | 26.79 | 3000 | 1.4567 | 0.7795 |
| 0.228 | 31.25 | 3500 | 1.6179 | 0.7872 |
| 0.1995 | 35.71 | 4000 | 1.4932 | 0.7555 |
| 0.1878 | 40.18 | 4500 | 1.5352 | 0.7480 |
| 0.165 | 44.64 | 5000 | 1.5238 | 0.7440 |
| 0.1514 | 49.11 | 5500 | 1.5842 | 0.7498 |
| 0.1416 | 53.57 | 6000 | 1.6662 | 0.7524 |
| 0.1351 | 58.04 | 6500 | 1.6280 | 0.7356 |
| 0.1196 | 62.5 | 7000 | 1.6329 | 0.7250 |
| 0.1109 | 66.96 | 7500 | 1.6435 | 0.7302 |
| 0.1008 | 71.43 | 8000 | 1.7058 | 0.7170 |
| 0.0907 | 75.89 | 8500 | 1.6880 | 0.7387 |
| 0.0816 | 80.36 | 9000 | 1.6957 | 0.7031 |
| 0.0743 | 84.82 | 9500 | 1.7547 | 0.7222 |
| 0.0694 | 89.29 | 10000 | 1.6974 | 0.7117 |
| 0.0612 | 93.75 | 10500 | 1.7251 | 0.7020 |
| 0.0577 | 98.21 | 11000 | 1.7163 | 0.6951 |
### Framework versions
- Transformers 4.16.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
bert-base-german-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 175,983 | 2021-10-19T00:20:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-base-2021-10-19_00-15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-base-2021-10-19_00-15
This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-base-2021-10-18_16-15](https://huggingface.co/yazdipour/text-to-sparql-t5-base-2021-10-18_16-15) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-----------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 1.8998 | 19.0 | 0.3634 | 0.0387 | 0.1963 | 9.9428 | [71.94645844952593, 49.30006086427267, 35.36503683858004, 28.145941921072225] | 0.2294 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bert-base-german-dbmdz-cased | [
"pytorch",
"jax",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,814 | 2021-10-19T00:08:51Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-small-2021-10-19_00-01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-small-2021-10-19_00-01
This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-small-2021-10-18_23-00](https://huggingface.co/yazdipour/text-to-sparql-t5-small-2021-10-18_23-00) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-------------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 2.4058 | 19.0 | 0.3946 | 0.0660 | 0.2253 | 9.8438 | [72.36042012161415, 47.920433996383366, 33.929754804506295, 26.416482707873435] | 0.2344 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bert-base-german-dbmdz-uncased | [
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68,305 | 2021-10-19T07:18:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-small-2021-10-19_07-12_RAW
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-small-2021-10-19_07-12_RAW
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:----------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 2.8581 | 19.0 | 0.3301 | 0.0433 | 0.1830 | 7.5917 | [69.82603479304139, 45.68226763348714, 32.33357717629846, 24.56861133935908] | 0.1903 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bert-base-multilingual-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,749,504 | 2021-10-17T23:43:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-base-2021-10-17_23-40
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.2649857699871063
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-base-2021-10-17_23-40
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2645
- Gen Len: 19.0
- P: 0.5125
- R: 0.0382
- F1: 0.2650
- Score: 5.1404
- Bleu-precisions: [88.49268497650789, 75.01025204252232, 66.60779038484033, 63.18383699935422]
- Bleu-bp: 0.0707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.3513 | 1.0 | 4807 | 0.2645 | 19.0 | 0.5125 | 0.0382 | 0.2650 | 5.1404 | [88.49268497650789, 75.01025204252232, 66.60779038484033, 63.18383699935422] | 0.0707 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bert-base-multilingual-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 328,585 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: text-to-sparql-t5-base-2021-10-18_16-15
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-base-2021-10-18_16-15
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1294
- Gen Len: 19.0
- Bertscorer-p: 0.5827
- Bertscorer-r: 0.0812
- Bertscorer-f1: 0.3202
- Sacrebleu-score: 5.9410
- Sacrebleu-precisions: [92.24641734333713, 84.24354361048307, 78.78523204758982, 75.43428275229601]
- Bleu-bp: 0.0721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:----------------------------------------------------------------------------:|:-------:|
| nan | 1.0 | 4772 | 0.1294 | 19.0 | 0.5827 | 0.0812 | 0.3202 | 5.9410 | [92.24641734333713, 84.24354361048307, 78.78523204758982, 75.43428275229601] | 0.0721 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bert-base-uncased | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 59,663,489 | 2021-10-19T23:09:08Z | ---
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-base-2021-10-19_23-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-base-2021-10-19_23-02
This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-base-2021-10-19_15-35_lastDS](https://huggingface.co/yazdipour/text-to-sparql-t5-base-2021-10-19_15-35_lastDS) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-----------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 1.8300 | 19.0 | 0.3640 | 0.0346 | 0.1943 | 10.0358 | [72.88988261598658, 50.27455765710799, 35.93015446608462, 28.454070201643017] | 0.2281 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
bert-large-cased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,214 | 2021-10-19T15:38:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-base-2021-10-19_15-35_lastDS
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.3275993764400482
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-base-2021-10-19_15-35_lastDS
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1310
- Gen Len: 19.0
- P: 0.5807
- R: 0.0962
- F1: 0.3276
- Score: 6.4533
- Bleu-precisions: [92.48113990507008, 85.38781447185119, 80.57856404313097, 77.37314727416516]
- Bleu-bp: 0.0770
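A minimal sketch of querying this checkpoint, assuming the plain natural-language question is a valid prompt (the training pipeline may prepend a task prefix):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "yazdipour/text-to-sparql-t5-base-2021-10-19_15-35_lastDS"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

question = "Who is the mayor of Paris?"  # illustrative question
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # generated SPARQL
```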
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| nan | 1.0 | 4807 | 0.1310 | 19.0 | 0.5807 | 0.0962 | 0.3276 | 6.4533 | [92.48113990507008, 85.38781447185119, 80.57856404313097, 77.37314727416516] | 0.0770 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bert-large-cased-whole-word-masking | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,316 | 2021-10-15T01:04:21Z | ---
tags:
- generated_from_trainer
model-index:
- name: text-to-sparql-t5-small-2021-10-15_01-00
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-15_01-00
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:------:|:----------:|:-----------------------------------------------------------------:|:-------:|
| No log | 1.0 | 26 | 4.1488 | 19.0 | 0.2368 | -0.0304 | 0.1003 | 0.8868 | [56.84848484848485, 25.0, 8.88888888888889, 0.041666666666666664] | 0.1851 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.2
- Tokenizers 0.10.3
|
bert-large-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 388,769 | 2021-10-17T18:52:28Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-small-2021-10-17_18-47
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.2345714420080185
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-17_18-47
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5258
- Gen Len: 19.0
- P: 0.4582
- R: 0.0278
- F1: 0.2346
- Score: 3.5848
- Bleu-precisions: [82.57739877107295, 62.13358857503344, 48.43062944877681, 41.90172321318059]
- Bleu-bp: 0.0631
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.7575 | 1.0 | 4807 | 0.5258 | 19.0 | 0.4582 | 0.0278 | 0.2346 | 3.5848 | [82.57739877107295, 62.13358857503344, 48.43062944877681, 41.90172321318059] | 0.0631 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bert-large-uncased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 480,510 | 2021-10-18T09:35:17Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-small-2021-10-18_09-32
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.26458749175071716
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-18_09-32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5119
- Gen Len: 19.0
- P: 0.4884
- R: 0.0583
- F1: 0.2646
- Score: 3.5425
- Bleu-precisions: [82.80295919500207, 62.695879280325016, 50.2215675749897, 44.03052700138759]
- Bleu-bp: 0.0609
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.7088 | 1.0 | 4772 | 0.5119 | 19.0 | 0.4884 | 0.0583 | 0.2646 | 3.5425 | [82.80295919500207, 62.695879280325016, 50.2215675749897, 44.03052700138759] | 0.0609 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bert-large-uncased-whole-word-masking | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 76,685 | 2021-10-18T12:15:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: text-to-sparql-t5-small-2021-10-18_12-12
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-18_12-12
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3284
- Gen Len: 19.0
- Bertscorer-p: 0.5420
- Bertscorer-r: 0.0732
- Bertscorer-f1: 0.2972
- Sacrebleu-score: 4.8763
- Sacrebleu-precisions: [87.2581084764241, 73.48869132519009, 64.19139944127409, 58.342420937840785]
- Bleu-bp: 0.0697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:----------------------------------------------------------------------------:|:-------:|
| 0.4209 | 1.0 | 4772 | 0.3284 | 19.0 | 0.5420 | 0.0732 | 0.2972 | 4.8763 | [87.2581084764241, 73.48869132519009, 64.19139944127409, 58.342420937840785] | 0.0697 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
bert-large-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,058,496 | 2021-10-18T23:06:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model-index:
- name: text-to-sparql-t5-small-2021-10-18_23-00
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-18_23-00
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2284
- Gen Len: 19.0
- Bertscorer-p: 0.5644
- Bertscorer-r: 0.0815
- Bertscorer-f1: 0.3120
- Sacrebleu-score: 5.5690
- Sacrebleu-precisions: [89.6746395837541, 79.06489438259324, 71.93407601726916, 67.21220306665607]
- Bleu-bp: 0.0728
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | Bertscorer-p | Bertscorer-r | Bertscorer-f1 | Sacrebleu-score | Sacrebleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------------:|:------------:|:-------------:|:---------------:|:---------------------------------------------------------------------------:|:-------:|
| 0.2808 | 1.0 | 4772 | 0.2284 | 19.0 | 0.5644 | 0.0815 | 0.3120 | 5.5690 | [89.6746395837541, 79.06489438259324, 71.93407601726916, 67.21220306665607] | 0.0728 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
camembert-base | [
"pytorch",
"tf",
"safetensors",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1911.03894",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"CamembertForMaskedLM"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,440,898 | 2021-10-19T22:35:44Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sparql-qald9-t5-small-2021-10-19_22-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-qald9-t5-small-2021-10-19_22-32
This model is a fine-tuned version of [yazdipour/text-to-sparql-t5-small-2021-10-19_10-17_lastDS](https://huggingface.co/yazdipour/text-to-sparql-t5-small-2021-10-19_10-17_lastDS) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Bleu-score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:----------:|:-------------------------------------------------------------------------------:|:-------:|
| No log | 1.0 | 51 | 2.4477 | 19.0 | 0.3797 | 0.0727 | 0.2219 | 9.3495 | [73.47751849743882, 49.595519601742375, 35.346602608098834, 26.243305279265492] | 0.2180 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
ctrl | [
"pytorch",
"tf",
"ctrl",
"en",
"arxiv:1909.05858",
"arxiv:1910.09700",
"transformers",
"license:bsd-3-clause",
"has_space"
] | null | {
"architectures": null,
"model_type": "ctrl",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17,007 | 2021-10-19T10:22:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
metrics:
- f1
model-index:
- name: text-to-sparql-t5-small-2021-10-19_10-17_lastDS
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
metrics:
- name: F1
type: f1
value: 0.3129461705684662
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-to-sparql-t5-small-2021-10-19_10-17_lastDS
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2335
- Gen Len: 19.0
- P: 0.5580
- R: 0.0884
- F1: 0.3129
- Score: 5.9585
- Bleu-precisions: [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271]
- Bleu-bp: 0.0763
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Gen Len | P | R | F1 | Score | Bleu-precisions | Bleu-bp |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:------:|:------:|:----------------------------------------------------------------------------:|:-------:|
| 0.3166 | 1.0 | 4807 | 0.2335 | 19.0 | 0.5580 | 0.0884 | 0.3129 | 5.9585 | [90.11303396628615, 80.34125695971072, 73.81487011728768, 69.48796722990271] | 0.0763 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
distilbert-base-cased-distilled-squad | [
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"has_space"
] | question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 257,745 | 2021-08-04T04:37:57Z | ---
tags: autonlp
language: ko
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- ybybybybybybyb/autonlp-data-revanalysis
---
# Model Trained Using AutoNLP
- Problem type: Multi-class Classification
- Model ID: 6711455
## Validation Metrics
- Loss: 0.8241586089134216
- Accuracy: 0.7835820895522388
- Macro F1: 0.5297383029341792
- Micro F1: 0.783582089552239
- Weighted F1: 0.7130091019920225
- Macro Precision: 0.48787061994609165
- Micro Precision: 0.7835820895522388
- Weighted Precision: 0.6541416904694856
- Macro Recall: 0.5795454545454546
- Micro Recall: 0.7835820895522388
- Weighted Recall: 0.7835820895522388
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/ybybybybybybyb/autonlp-revanalysis-6711455
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("ybybybybybybyb/autonlp-revanalysis-6711455", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("ybybybybybybyb/autonlp-revanalysis-6711455", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
distilbert-base-german-cased | [
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"de",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 43,667 | 2021-11-15T19:28:54Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.509687043672971
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7512
- Matthews Correlation: 0.5097
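For reference, a minimal sketch of running such a CoLA-style acceptability classifier through the `pipeline` API — the repo id below is a placeholder for wherever this checkpoint is hosted:
```python
from transformers import pipeline

# Placeholder repo id; replace with the actual id of this fine-tuned checkpoint.
classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-cola")
print(classifier("The book was read quickly by him."))  # [{'label': ..., 'score': ...}]
```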
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5237 | 1.0 | 535 | 0.5117 | 0.4469 |
| 0.3496 | 2.0 | 1070 | 0.5538 | 0.4965 |
| 0.2377 | 3.0 | 1605 | 0.6350 | 0.4963 |
| 0.1767 | 4.0 | 2140 | 0.7512 | 0.5097 |
| 0.1383 | 5.0 | 2675 | 0.8647 | 0.5056 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.1+cu102
- Datasets 1.15.1
- Tokenizers 0.10.1
|
distilbert-base-uncased | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"distilbert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1910.01108",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
] | fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10,887,471 | null | # Bert2Bert Summarization with 🤗 EncoderDecoder Framework
[This is a TensorFlow version converted from the original PyTorch [Bert2Bert](https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16)]
This model is a Bert2Bert model fine-tuned for summarization.
Bert2Bert is an `EncoderDecoderModel`, meaning that both the encoder and the decoder are `bert-base-uncased`
BERT models. Leveraging the [EncoderDecoderFramework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the
two pretrained models can simply be loaded into the framework via:
```python
from transformers import TFEncoderDecoderModel

bert2bert = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
```
The decoder of a `TFEncoderDecoder` model needs cross-attention layers and usually makes use of causal
masking for auto-regressive generation.
Thus, ``bert2bert`` was fine-tuned on the `CNN/Daily Mail` dataset and the resulting model
`bert2bert-cnn_dailymail-fp16` is uploaded here.
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable summarization results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 EncoderDecoder Framework.
The model can be used as follows:
```python
from transformers import AutoTokenizer, TFEncoderDecoderModel
loc = "ydshieh/bert2bert-cnn_dailymail-fp16"
model = TFEncoderDecoderModel.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
article = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David Boren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 1856, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confederate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking full membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on the fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more involved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members allegedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a fraternity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity,' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. "As a result, we are paying Lloyd's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing incidents."""
input_ids = tokenizer(article, return_tensors="tf").input_ids
output_ids = model.generate(input_ids)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(summary)
# should produce
# sae was founded in 1856, five years before the civil war. the fraternity has had to work hard to change recently. the university of oklahoma president says the university's affiliation with the fraternity is permanently done. the sae has had a string of members in recent mon ths.
```
## Training script:
For the original PyTorch BERT2BERT model, please follow this tutorial to see how to warm-start a BERT2BERT model:
https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing
The obtained results should be:
| - | Rouge2 - mid - precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure |
|----------|:------------------------:|:---------------------:|:-----------------------:|
| **CNN/Daily Mail** | 16.12 | 17.07 | **16.1** |
|
gpt2-medium | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"gpt2",
"text-generation",
"en",
"arxiv:1910.09700",
"transformers",
"license:mit",
"has_space"
] | text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 759,601 | 2021-10-06T22:22:36Z | ---
tags:
- image-classification
library_name: generic
---
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable image captioning results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 FlaxVisionEncoderDecoder Framework.
The model can be used as follows:
```python
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, FlaxVisionEncoderDecoderModel
loc = "ydshieh/vit-gpt2-coco-en"
feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = FlaxVisionEncoderDecoderModel.from_pretrained(loc)
# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as img:
pixel_values = feature_extractor(images=img, return_tensors="np").pixel_values
def generate_step(pixel_values):
output_ids = model.generate(pixel_values, max_length=16, num_beams=4).sequences
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
return preds
preds = generate_step(pixel_values)
print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
``` |
ARCYVILK/gpt2-bot | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2021-05-23T23:34:01Z | ---
language: en
tags:
- bert
- qqp
- glue
- torchdistill
license: apache-2.0
datasets:
- qqp
metrics:
- f1
- accuracy
---
`bert-large-uncased` fine-tuned on the QQP dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the BERT paper, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/qqp/ce/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
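A minimal inference sketch for duplicate-question detection follows; the checkpoint id and the label mapping are assumptions (the id guesses the ***torchdistill*** naming convention, and labels are assumed to follow the GLUE QQP convention):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed repo id -- substitute the checkpoint this card actually accompanies.
ckpt = "yoshitomo-matsubara/bert-large-uncased-qqp"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

# QQP asks whether two questions are duplicates of each other.
inputs = tokenizer(
    "How do I learn Python?",
    "What is the best way to learn Python?",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
# Assumed GLUE QQP mapping: 1 = duplicate, 0 = not duplicate.
print(logits.argmax(dim=-1).item())
```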
|
Alerosae/SocratesGPT-2 | [
"pytorch",
"gpt2",
"feature-extraction",
"en",
"transformers",
"text-generation"
] | text-generation | {
"architectures": [
"GPT2Model"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- conversational
---
# Rick and Morty DialoGPT
|
Alexander-Learn/bert-finetuned-ner-accelerate | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] | token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
# language: protein
tags:
- protein language model
datasets:
- ProteinKG25
widget:
- text: "D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T"
---
# OntoProtein model
Pretrained model on protein sequences using masked language modeling (MLM) and knowledge embedding (KE) objectives. It was introduced in [this paper](https://openreview.net/pdf?id=yfe1VMYAXa4) and first released in [this repository](https://github.com/zjunlp/OntoProtein). The model is trained on uppercase amino acids: it only works with capital-letter amino acids.
## Model description
OntoProtein is the first general framework that integrates the structure of GO (Gene Ontology) into protein pre-training models. We construct a novel large-scale knowledge graph consisting of GO and its related proteins, in which every node is described by gene annotation texts or protein sequences. We propose novel contrastive learning with knowledge-aware negative sampling to jointly optimize the knowledge graph and protein embeddings during pre-training.
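The masked-amino-acid widget above can be reproduced with a fill-mask pipeline; the repo id below is an assumption taken from the linked GitHub project and may differ:
```python
from transformers import pipeline

# "zjunlp/OntoProtein" is assumed from the GitHub org name; adjust if needed.
unmasker = pipeline("fill-mask", model="zjunlp/OntoProtein")

# Input must be uppercase amino acids separated by spaces, as noted above.
print(unmasker("D L I P T S S K L V V [MASK] D T S L Q V K K A F F A L V T"))
```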
### BibTeX entry and citation info
```bibtex
@inproceedings{
zhang2022ontoprotein,
title={OntoProtein: Protein Pretraining With Gene Ontology Embedding},
author={Ningyu Zhang and Zhen Bi and Xiaozhuan Liang and Siyuan Cheng and Haosen Hong and Shumin Deng and Qiang Zhang and Jiazhang Lian and Huajun Chen},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=yfe1VMYAXa4}
}
```
|
Amalq/distilroberta-base-finetuned-anxiety-depression | [] | null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags: autonlp
language: en
widget:
- text: "I love AutoNLP ๐ค"
datasets:
- zwang199/autonlp-data-traffic_nlp_binary
co2_eq_emissions: 1.171798205242445
---
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 537215209
- CO2 Emissions (in grams): 1.171798205242445
## Validation Metrics
- Loss: 0.3879534602165222
- Accuracy: 0.8597449908925319
- Precision: 0.8318042813455657
- Recall: 0.9251700680272109
- AUC: 0.9230158730158731
- F1: 0.8760064412238325
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/zwang199/autonlp-traffic_nlp_binary-537215209
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("zwang199/autonlp-traffic_nlp_binary-537215209", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("zwang199/autonlp-traffic_nlp_binary-537215209", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
``` |
AnonARR/qqp-bert | [
"pytorch",
"bert",
"text-classification",
"transformers"
] | text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 38 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: kabelomalapane/Helsinki-NLP-opus-finetuned-en-to-zu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kabelomalapane/Helsinki-NLP-opus-finetuned-en-to-zu
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-mul](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.5907
- Validation Loss: 1.6321
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
This model is intended for translating English into Zulu. There are still some problems when running it, so it still needs further work; a usage sketch is shown below.
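A minimal TF inference sketch follows; whether the base model's ">>zul<<" target-language prefix is still required after fine-tuning is an assumption to verify:
```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

ckpt = "kabelomalapane/Helsinki-NLP-opus-finetuned-en-to-zu"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = TFAutoModelForSeq2SeqLM.from_pretrained(ckpt)

# The base opus-mt-en-mul model selects the target language with a prefix
# token; it may or may not still be needed for this fine-tuned checkpoint.
inputs = tokenizer(">>zul<< Good morning, how are you?", return_tensors="tf")
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```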
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 783, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.1622 | 1.7379 | 0 |
| 1.7292 | 1.6529 | 1 |
| 1.5907 | 1.6321 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Anonymous/ReasonBERT-BERT | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: ernie_roberta_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ernie_roberta_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
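Once the blank base-checkpoint field above is filled in, inference would look roughly like this sketch (the hosted repo id is unknown, so a placeholder is used):
```python
from transformers import pipeline

# Placeholder id -- the card does not state where this checkpoint is hosted.
summarizer = pipeline("summarization", model="MODEL_ID")

article = "Long CNN/DailyMail-style article text goes here ..."
print(summarizer(article, max_length=60, min_length=20)[0]["summary_text"])
```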
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2022-03-04T08:48:29Z | ---
datasets:
- ticket-tagger
metrics:
- accuracy
model-index:
- name: distil-bert-uncased-finetuned-github-issues
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: ticket tagger
type: ticket tagger
args: full
metrics:
- name: Accuracy
type: accuracy
value: 0.7862
---
# Model Description
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased), trained on the
[github ticket tagger dataset](https://tickettagger.blob.core.windows.net/datasets/dataset-labels-top3-30k-real.txt). It classifies issues into 3 common categories: Bug, Enhancement, Question.
It achieves the following results on the evaluation set:
- Accuracy: 0.7862
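A hedged usage sketch (the repo id below is a guess assembled from the author's GitHub handle and the model name; the card itself only links the training code):
```python
from transformers import pipeline

# Hypothetical checkpoint id -- replace with wherever this model is hosted.
classifier = pipeline(
    "text-classification",
    model="IvanLauLinTiong/distil-bert-uncased-finetuned-github-issues",
)
issue = "App crashes with a NullPointerException when opening the settings page"
print(classifier(issue))  # expected: a Bug / Enhancement / Question label with a score
```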
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-5
- train_batch_size: 16
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0
- num_epochs: 5
### Codes
https://github.com/IvanLauLinTiong/IntelliLabel |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2022-03-04T09:03:25Z | ---
datasets:
- mc4
license: apache-2.0
---
# ByT5-Korean - large
ByT5-Korean is a Korean-specific extension of Google's [ByT5](https://github.com/google-research/byt5).
A Korean syllable has three components (called Jamo): a beginning consonant, a middle vowel, and an optional final consonant; they act like the individual letters of an alphabet.
While ByT5's utf-8 encoding allows generic encoding for multiple languages, it is unnatural for Korean because it splits the bit representation of each Jamo down the middle.
ByT5-Korean extends ByT5's utf-8 encoding with special care for Korean syllables; each Jamo is represented with an extra token.
ByT5-Korean was pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) with 70% Korean and 30% English.
## Encoding Scheme
```text
id: token
0: <pad>
1: <eos>
2: <unk>
3~258: utf-8 encoding
259~277: beginning consonants (초성), 19 in total (ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ)
278~298: middle vowels (중성), 21 in total (ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ)
299~326: final consonants (종성), no final consonant + 27 (ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ)
327~384: from <extra_id_0> to <extra_id_57>
```
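The mapping from a Hangul syllable to these three ids follows the standard Unicode decomposition arithmetic; the sketch below illustrates it (the actual tokenizer implementation may handle the details differently):
```python
# Decompose one precomposed Hangul syllable into the initial/medial/final
# Jamo indices and map them onto the token-id ranges listed above.
def jamo_token_ids(syllable: str):
    code = ord(syllable) - 0xAC00          # precomposed syllables start at U+AC00
    initial, rest = divmod(code, 21 * 28)  # 21 medials x 28 finals per initial
    medial, final = divmod(rest, 28)       # final 0 means "no final consonant"
    return 259 + initial, 278 + medial, 299 + final

print(jamo_token_ids("한"))  # (277, 278, 303): ㅎ + ㅏ + ㄴ
```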
## Example Inference
```python
import torch
from tokenizer import ByT5KoreanTokenizer # https://huggingface.co/everdoubling/byt5-Korean-large/blob/main/tokenizer.py
from transformers import T5ForConditionalGeneration
tokenizer_jamo = ByT5KoreanTokenizer()
model = T5ForConditionalGeneration.from_pretrained('everdoubling/byt5-Korean-large')
input_sentence = '한국어 위키백과(영어: Korean Wikipedia)는 한국어로 운영되는 위키백과의 다언어판 가운데 하나로서, 2002년 10월 11일에 <extra_id_0>. 또한 현재 한국어 위키백과에는 넘겨주기, 토론, 그림 등 페이지로 불리는 모든 문서를 포함하면 총 2,629,860개가 <extra_id_1>되어 있으며, 넘겨주기를 포함한 일반 문서 수는 1,278,560개,[1] 그중 넘겨주기, 막다른 문서를 제외한 일반 문서 수는 573,149개이다.'
input_ids_jamo = tokenizer_jamo(input_sentence).input_ids
outputs_jamo = model.generate(torch.tensor([input_ids_jamo]))
print(tokenizer_jamo.decode(outputs_jamo[0]))
# <pad><extra_id_0>설립되었다<extra_id_1>등록
```
Additional information coming soon...
|
AnonymousSub/SR_EManuals-RoBERTa | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-03-04T12:37:20Z | ---
language:
- gn
- es
license: mit
datasets:
- wikipedia
- wiktionary
widget:
- text: "Paraguay ha'e peteฤฉ tรกva oฤฉva [MASK] retรฃme "
- text: "Augusto Roa Bastos ha'e peteฤฉ [MASK] arandu"
---
# BETO+gn-base-cased
[BETO-base-cased (pre-trained Spanish BERT model)](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) fine-tuned for **Guarani** language modeling (Spanish + Guarani). Trained on Wikipedia + Wiktionary (~800K tokens).
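A fill-mask sketch using the first widget example above (the card does not state the hosted repo id, so a placeholder is used):
```python
from transformers import pipeline

# Placeholder id -- substitute the repo this card is published under.
unmasker = pipeline("fill-mask", model="MODEL_ID")
print(unmasker("Paraguay ha'e peteĩ táva oĩva [MASK] retãme"))
```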
|
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
] | feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-03-04T17:28:10Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab-9
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
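A transcription sketch for this checkpoint might look as follows (the hosted repo id is unknown, so a placeholder is used, and resampling Common Voice audio to the 16 kHz rate expected by XLS-R is an assumption):
```python
import torch
from datasets import Audio, load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder id -- substitute the repo this checkpoint is published under.
ckpt = "MODEL_ID"
processor = Wav2Vec2Processor.from_pretrained(ckpt)
model = Wav2Vec2ForCTC.from_pretrained(ckpt)

# XLS-R expects 16 kHz audio, so resample the Common Voice sample first.
ds = load_dataset("common_voice", "tr", split="test[:1]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
audio = ds[0]["audio"]["array"]

inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```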
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.03
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|