| Column | Type | Range |
| --- | --- | --- |
| repo_id | string | lengths 4–122 |
| author | string | lengths 2–38 |
| model_type | string | lengths 2–33 |
| files_per_repo | int64 | 2–39k |
| downloads_30d | int64 | 0–33.7M |
| library | string | lengths 2–37 |
| likes | int64 | 0–4.87k |
| pipeline | string | lengths 5–30 |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | lengths 2–33 |
| languages | string | lengths 2–1.63k |
| datasets | string | lengths 2–2.58k |
| co2 | string | lengths 6–258 |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–46 |
| prs_closed | int64 | 0–34 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | lengths 2–513 |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 2 classes |
| has_text | bool | 1 class |
| text_length | int64 | 201–598k |
| readme | string | lengths 0–598k |
Graphcore/bert-large-ipu
Graphcore
null
3
276
null
1
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
2,392
# Graphcore/bert-large-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description BERT (Bidirectional Encoder Representations from Transformers) is a transformers model designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. It was pretrained with two objectives: masked language modelling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees the words one after another, MLM allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pretraining text-pair representations. Pre-trained representations reduce the need for many engineering efforts to build task-specific architectures, and achieve state-of-the-art performance on a large suite of sentence-level and token-level tasks. ## Intended uses & limitations This model contains just the `IPUConfig` files for running the BERT large model (e.g. [bert-large-uncased](https://huggingface.co/bert-large-uncased) or [bert-large-cased](https://huggingface.co/bert-large-cased)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/bert-large-ipu") ```
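The `IPUConfig` above is normally paired with the IPU-enabled trainer from Optimum Graphcore. The following is a minimal sketch, assuming the `IPUTrainer`/`IPUTrainingArguments` API from `optimum-graphcore`; the toy dataset, labels and argument values are illustrative placeholders, not recommended settings.

```python
# Minimal sketch (not from the model card): fine-tuning a BERT-Large checkpoint
# with this IPUConfig via optimum-graphcore's IPUTrainer. The toy dataset and
# argument values below are placeholders for illustration only.
from datasets import Dataset
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments
from transformers import AutoModelForSequenceClassification, AutoTokenizer

ipu_config = IPUConfig.from_pretrained("Graphcore/bert-large-ipu")
tokenizer = AutoTokenizer.from_pretrained("bert-large-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased")

# Toy two-example dataset so the sketch is self-contained.
enc = tokenizer(["IPUs are fast.", "This is a test."], truncation=True, padding="max_length", max_length=32)
train_dataset = Dataset.from_dict({**enc, "labels": [1, 0]})

args = IPUTrainingArguments(output_dir="./outputs", per_device_train_batch_size=1, num_train_epochs=1)
trainer = IPUTrainer(model=model, ipu_config=ipu_config, args=args, train_dataset=train_dataset)
trainer.train()
```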
Graphcore/bert-large-uncased-squad
Graphcore
bert
8
0
transformers
2
question-answering
true
false
false
apache-2.0
null
['squad']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
2,404
# Graphcore/bert-large-uncased-squad Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description BERT (Bidirectional Encoder Representations from Transformers) is a transformers model designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. It was pretrained with two objectives: masked language modelling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees the words one after another, MLM allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pretraining text-pair representations. Pre-trained representations reduce the need for many engineering efforts to build task-specific architectures, and achieve state-of-the-art performance on a large suite of sentence-level and token-level tasks. ## Intended uses & limitations This model is a fine-tuned version of [Graphcore/bert-large-uncased](https://huggingface.co/Graphcore/bert-large-uncased) on the SQuAD dataset. ## Training and evaluation data Trained on the SQuAD dataset: - [HuggingFace/squad](https://huggingface.co/datasets/squad) ## Training procedure The model was trained on 16 Graphcore Mk2 IPUs using the [optimum-graphcore](https://github.com/huggingface/optimum-graphcore) library.
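Because this repository ships PyTorch weights fine-tuned on SQuAD, it can also be queried with the standard `transformers` question-answering pipeline (CPU/GPU execution; IPU execution goes through `optimum-graphcore` instead). A small sketch with a made-up question/context pair:

```python
# Minimal sketch: extractive QA with the fine-tuned checkpoint via the standard
# transformers pipeline. The question/context pair is illustrative only.
from transformers import pipeline

qa = pipeline("question-answering", model="Graphcore/bert-large-uncased-squad")
result = qa(
    question="What hardware was the model trained on?",
    context="The model was trained on 16 Graphcore Mk2 IPUs using the optimum-graphcore library.",
)
print(result["answer"], result["score"])
```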
Graphcore/bert-large-uncased
Graphcore
bert
13
1
transformers
6
null
true
false
false
apache-2.0
null
['Graphcore/wikipedia-bert-128', 'Graphcore/wikipedia-bert-512']
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
5,734
# Graphcore/bert-large-uncased Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description BERT (Bidirectional Encoder Representations from Transformers) is a transformers model designed to pretrain bidirectional representations from unlabelled texts. It enables easy and fast fine-tuning for different downstream tasks such as Sequence Classification, Named Entity Recognition, Question Answering, Multiple Choice and MaskedLM. It was pretrained with two objectives: masked language modelling (MLM) and next sentence prediction (NSP). Unlike a traditional language model, which sees the words one after another, MLM allows the model to learn a bidirectional representation. In addition to MLM, NSP is used for jointly pretraining text-pair representations. Pre-trained representations reduce the need for many engineering efforts to build task-specific architectures, and achieve state-of-the-art performance on a large suite of sentence-level and token-level tasks. ## Intended uses & limitations This model is a pre-trained BERT-Large trained in two phases on the [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128) and [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512) datasets. ## Training and evaluation data Trained on Wikipedia datasets: - [Graphcore/wikipedia-bert-128](https://huggingface.co/datasets/Graphcore/wikipedia-bert-128) - [Graphcore/wikipedia-bert-512](https://huggingface.co/datasets/Graphcore/wikipedia-bert-512) ## Training procedure Trained with the MLM and NSP pre-training scheme from [Large Batch Optimization for Deep Learning: Training BERT in 76 minutes](https://arxiv.org/abs/1904.00962).
Trained on 64 Graphcore Mk2 IPUs using [`optimum-graphcore`](https://github.com/huggingface/optimum-graphcore) Command lines: Phase 1: ``` python examples/language-modeling/run_pretraining.py \ --config_name bert-large-uncased \ --tokenizer_name bert-large-uncased \ --ipu_config_name Graphcore/bert-large-ipu \ --dataset_name Graphcore/wikipedia-bert-128 \ --do_train \ --logging_steps 5 \ --max_seq_length 128 \ --max_steps 10550 \ --is_already_preprocessed \ --dataloader_num_workers 64 \ --dataloader_mode async_rebatched \ --lamb \ --lamb_no_bias_correction \ --per_device_train_batch_size 8 \ --gradient_accumulation_steps 512 \ --pod_type pod64 \ --learning_rate 0.006 \ --lr_scheduler_type linear \ --loss_scaling 32768 \ --weight_decay 0.01 \ --warmup_ratio 0.28 \ --config_overrides "layer_norm_eps=0.001" \ --ipu_config_overrides "matmul_proportion=[0.14 0.19 0.19 0.19]" \ --output_dir output-pretrain-bert-large-phase1 ``` Phase 2: ``` python examples/language-modeling/run_pretraining.py \ --config_name bert-large-uncased \ --tokenizer_name bert-large-uncased \ --model_name_or_path ./output-pretrain-bert-large-phase1 \ --ipu_config_name Graphcore/bert-large-ipu \ --dataset_name Graphcore/wikipedia-bert-512 \ --do_train \ --logging_steps 5 \ --max_seq_length 512 \ --max_steps 2038 \ --is_already_preprocessed \ --dataloader_num_workers 96 \ --dataloader_mode async_rebatched \ --lamb \ --lamb_no_bias_correction \ --per_device_train_batch_size 2 \ --gradient_accumulation_steps 512 \ --pod_type pod64 \ --learning_rate 0.002828 \ --lr_scheduler_type linear \ --loss_scaling 16384 \ --weight_decay 0.01 \ --warmup_ratio 0.128 \ --config_overrides "layer_norm_eps=0.001" \ --ipu_config_overrides "matmul_proportion=[0.14 0.19 0.19 0.19]" \ --output_dir output-pretrain-bert-large-phase2 ``` ### Training hyperparameters The following hyperparameters were used during phase 1 training: - learning_rate: 0.006 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 512 - total_train_batch_size: 65536 - total_eval_batch_size: 512 - optimizer: LAMB - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.28 - training_steps: 10550 - training precision: Mixed Precision The following hyperparameters were used during phase 2 training: - learning_rate: 0.002828 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: IPU - gradient_accumulation_steps: 512 - total_train_batch_size: 16384 - total_eval_batch_size: 512 - optimizer: LAMB - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.128 - training_steps: 2038 - training precision: Mixed Precision ### Training results ``` train/epoch: 2.04 train/global_step: 2038 train/loss: 1.2002 train/train_runtime: 12022.3897 train/train_steps_per_second: 0.17 train/train_samples_per_second: 2777.367 ``` ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cpu - Datasets 2.0.0 - Tokenizers 0.11.6
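The reported global batch sizes follow from the per-device batch size, the gradient accumulation steps and the data-parallel replication factor. The replication factor of 16 is not stated in the card; it is an assumption (e.g. 16 replicas, each pipelined over 4 of the 64 IPUs) and is simply the value that reproduces the reported totals:

```python
# Sanity check of the reported total_train_batch_size values. The replication
# factor of 16 is an assumption (it is the value consistent with the card).
replicas = 16  # assumed: 16 data-parallel replicas on 64 IPUs (4 IPUs per replica)

phase1 = 8 * 512 * replicas  # per_device_train_batch_size * gradient_accumulation_steps * replicas
phase2 = 2 * 512 * replicas

print(phase1)  # 65536, matching the phase 1 total_train_batch_size
print(phase2)  # 16384, matching the phase 2 total_train_batch_size
```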
Graphcore/deberta-base-ipu
Graphcore
null
3
789
null
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,855
# Graphcore/deberta-base-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description DeBERTa ([Decoding-enhanced BERT with Disentangled Attention](https://arxiv.org/abs/2006.03654)) improves on the BERT and RoBERTa models using a disentangled attention mechanism and an enhanced mask decoder, which replaces the output softmax layer to predict the masked tokens during pretraining. These two techniques significantly improve the efficiency of model pre-training and the performance of downstream tasks. ## Intended uses & limitations This model contains just the `IPUConfig` files for running the DeBERTa-base model (e.g. [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/deberta-base-ipu") ```
Graphcore/gpt2-medium-ipu
Graphcore
null
3
10
null
0
null
false
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,834
# Graphcore/gpt2-medium-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description GPT2 is a large transformer-based language model. It is built using transformer decoder blocks, whereas BERT uses transformer encoder blocks. Layer normalisation is moved to the input of each sub-block, similar to a pre-activation residual network, and an additional layer normalisation is added after the final self-attention block. Paper link: [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) ## Intended uses & limitations This model contains just the `IPUConfig` files for running the [HuggingFace/gpt2-medium](https://huggingface.co/gpt2-medium) model on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/gpt2-medium-ipu") ```
Graphcore/gpt2-small-ipu
Graphcore
null
3
644
null
1
null
false
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,812
# Graphcore/gpt2-small-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description GPT2 is a large transformer-based language model. It is built using transformer decoder blocks, whereas BERT uses transformer encoder blocks. Layer normalisation is moved to the input of each sub-block, similar to a pre-activation residual network, and an additional layer normalisation is added after the final self-attention block. Paper link: [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf) ## Intended uses & limitations This model contains just the `IPUConfig` files for running the [GPT2 Small](https://huggingface.co/gpt2) model on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/gpt2-small-ipu") ```
Graphcore/roberta-base-ipu
Graphcore
null
3
1,067
null
1
null
false
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,952
# Graphcore/roberta-base-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description RoBERTa builds on the BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining that were found to leave the model undertrained. It improves performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objective, training on longer sequences and dynamically changing the masking pattern applied to the training data. As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD. Paper link: [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf) ## Intended uses & limitations This model contains just the `IPUConfig` files for running the [roberta-base](https://huggingface.co/roberta-base) model on Graphcore IPUs. ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/roberta-base-ipu") ```
Graphcore/roberta-large-ipu
Graphcore
null
3
9
null
1
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
2,022
# Graphcore/roberta-large-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description RoBERTa builds on the BERT pretraining approach and improves on it by carefully evaluating a number of design decisions of BERT pretraining that were found to leave the model undertrained. It improves performance by training the model longer, with bigger batches over more data, removing the next sentence prediction objective, training on longer sequences and dynamically changing the masking pattern applied to the training data. As a result, it achieved state-of-the-art results on GLUE, RACE and SQuAD. Paper link: [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/pdf/1907.11692.pdf) ## Intended uses & limitations This model contains just the `IPUConfig` files for running the [roberta-large](https://huggingface.co/roberta-large) model on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/roberta-large-ipu") ```
Graphcore/t5-small-ipu
Graphcore
null
3
311
null
1
null
false
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,938
# Graphcore/t5-small-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description The Text-to-Text Transfer Transformer (T5) is a Transformer-based model that uses a text-to-text approach for tasks such as translation, question answering and classification. It introduces a unified framework that converts all text-based language problems into a text-to-text format for transfer learning in NLP. This allows the same model, loss function, hyperparameters, etc. to be used across a diverse set of tasks. Paper link: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) ## Intended uses & limitations This model contains just the `IPUConfig` files for running the T5 Small model (e.g. [HuggingFace/t5-small](https://huggingface.co/t5-small)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/t5-small-ipu") ```
Graphcore/vit-base-ipu
Graphcore
null
3
336
null
1
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
2,109
# Graphcore/vit-base-ipu Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through HuggingFace Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project. ## Model description The Vision Transformer (ViT) is a model for image recognition that applies a Transformer architecture, as widely used for NLP pretraining, to patches of an image. It uses a standard Transformer encoder as used in NLP, and this simple yet scalable strategy works surprisingly well when coupled with pre-training on large amounts of data and transfer to multiple mid-sized or small image recognition benchmarks, while requiring substantially fewer computational resources to train. Paper link: [AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE](https://arxiv.org/pdf/2010.11929.pdf) ## Intended uses & limitations This model contains just the `IPUConfig` files for running the ViT base model (e.g. [vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) or [deit-base-patch16-384](https://huggingface.co/facebook/deit-base-patch16-384)) on Graphcore IPUs. **This model contains no model weights, only an IPUConfig.** ## Usage ``` from optimum.graphcore import IPUConfig ipu_config = IPUConfig.from_pretrained("Graphcore/vit-base-ipu") ```
Gregor/bert-base-multilingual-cased-wmt21-qe
Gregor
bert
6
0
adapter-transformers
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['adapter-transformers', 'adapterhub:quality_estimation/wmt21', 'bert']
false
true
true
1,214
# Adapter `Gregor/bert-base-multilingual-cased-wmt21-qe` for bert-base-multilingual-cased An [adapter](https://adapterhub.ml) for the bert-base-multilingual-cased model that was trained on the [quality_estimation/wmt21](https://adapterhub.ml/explore/quality_estimation/wmt21/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("bert-base-multilingual-cased") adapter_name = model.load_adapter("Gregor/bert-base-multilingual-cased-wmt21-qe") model.active_adapters = adapter_name ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
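Once the adapter is active, the model can score a (source, translation) pair as sketched below. This is a minimal sketch assuming the `adapter-transformers` API shown above; the German/English sentence pair is illustrative and the meaning of the prediction head's output classes is not documented in this card.

```python
# Minimal inference sketch for the loaded quality-estimation adapter.
# The sentence pair is illustrative; output-class semantics are not documented here.
import torch
from transformers import AutoModelWithHeads, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelWithHeads.from_pretrained("bert-base-multilingual-cased")
adapter_name = model.load_adapter("Gregor/bert-base-multilingual-cased-wmt21-qe")
model.active_adapters = adapter_name

inputs = tokenizer("Das ist ein Test.", "This is a test.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # prediction-head output for the sentence pair
print(outputs)
```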
Gregor/xlm-roberta-base-wmt21-qe
Gregor
xlm-roberta
6
0
adapter-transformers
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['adapter-transformers', 'adapterhub:quality_estimation/wmt21', 'xlm-roberta']
false
true
true
1,154
# Adapter `Gregor/xlm-roberta-base-wmt21-qe` for xlm-roberta-base An [adapter](https://adapterhub.ml) for the xlm-roberta-base model that was trained on the [quality_estimation/wmt21](https://adapterhub.ml/explore/quality_estimation/wmt21/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("xlm-roberta-base") adapter_name = model.load_adapter("Gregor/xlm-roberta-base-wmt21-qe") model.active_adapters = adapter_name ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
Gregor/xlm-roberta-large-wmt21-qe
Gregor
xlm-roberta
6
2
adapter-transformers
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['adapter-transformers', 'xlm-roberta', 'adapterhub:quality_estimation/wmt21']
false
true
true
1,159
# Adapter `Gregor/xlm-roberta-large-wmt21-qe` for xlm-roberta-large An [adapter](https://adapterhub.ml) for the xlm-roberta-large model that was trained on the [quality_estimation/wmt21](https://adapterhub.ml/explore/quality_estimation/wmt21/) dataset and includes a prediction head for classification. This adapter was created for usage with the **[adapter-transformers](https://github.com/Adapter-Hub/adapter-transformers)** library. ## Usage First, install `adapter-transformers`: ``` pip install -U adapter-transformers ``` _Note: adapter-transformers is a fork of transformers that acts as a drop-in replacement with adapter support. [More](https://docs.adapterhub.ml/installation.html)_ Now, the adapter can be loaded and activated like this: ```python from transformers import AutoModelWithHeads model = AutoModelWithHeads.from_pretrained("xlm-roberta-large") adapter_name = model.load_adapter("Gregor/xlm-roberta-large-wmt21-qe") model.active_adapters = adapter_name ``` ## Architecture & Training <!-- Add some description here --> ## Evaluation results <!-- Add some description here --> ## Citation <!-- Add some description here -->
GroNLP/bert-base-dutch-cased-frisian
GroNLP
bert
9
2
transformers
1
fill-mask
true
true
true
null
['fy']
null
null
0
0
0
0
0
0
0
['BERTje']
false
true
true
1,515
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling # Adapting Monolingual Models: Data can be Scarce when Language Similarity is High This model is part of this paper + code: - 📝 [Paper](https://arxiv.org/abs/2105.02855) - 💻 [Code](https://github.com/wietsedv/low-resource-adapt) ## Models The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub: ### Lexical layers These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`). - 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language) - 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings) - 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian) ### POS tagging These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above. - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch) - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings) - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
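As a quick check of the retrained lexical layer, this West Frisian model can be queried with the standard fill-mask pipeline; the example sentence below is an illustrative guess, not taken from the paper.

```python
# Minimal sketch: masked-token prediction with the West Frisian lexical-layer model.
# The example sentence is illustrative only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="GroNLP/bert-base-dutch-cased-frisian")
for prediction in fill_mask("Ik wenje yn [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```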
GroNLP/bert-base-dutch-cased-gronings
GroNLP
bert
9
3
transformers
0
fill-mask
true
true
true
null
['gos']
null
null
0
0
0
0
0
0
0
['BERTje']
false
true
true
1,515
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling # Adapting Monolingual Models: Data can be Scarce when Language Similarity is High This model is part of this paper + code: - 📝 [Paper](https://arxiv.org/abs/2105.02855) - 💻 [Code](https://github.com/wietsedv/low-resource-adapt) ## Models The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub: ### Lexical layers These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`). - 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language) - 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings) - 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian) ### POS tagging These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above. - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch) - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings) - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
GroNLP/bert-base-dutch-cased-upos-alpino-frisian
GroNLP
bert
9
9
transformers
0
token-classification
true
true
true
null
['fy']
null
null
0
0
0
0
0
0
0
['BERTje', 'pos']
false
true
true
1,515
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling # Adapting Monolingual Models: Data can be Scarce when Language Similarity is High This model is part of this paper + code: - 📝 [Paper](https://arxiv.org/abs/2105.02855) - 💻 [Code](https://github.com/wietsedv/low-resource-adapt) ## Models The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub: ### Lexical layers These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`). - 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language) - 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings) - 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian) ### POS tagging These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above. - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch) - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings) - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
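The POS-tagging variants can be used through the standard token-classification pipeline; the sketch below tags an illustrative West Frisian sentence with the model from this card.

```python
# Minimal sketch: UPOS tagging with the West Frisian POS model.
# The example sentence is illustrative only.
from transformers import pipeline

tagger = pipeline("token-classification", model="GroNLP/bert-base-dutch-cased-upos-alpino-frisian")
for token in tagger("Dit is in koarte sin."):
    print(token["word"], token["entity"])
```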
GroNLP/bert-base-dutch-cased-upos-alpino-gronings
GroNLP
bert
9
7
transformers
0
token-classification
true
true
true
null
['gos']
null
null
0
0
0
0
0
0
0
['BERTje', 'pos']
false
true
true
1,515
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling # Adapting Monolingual Models: Data can be Scarce when Language Similarity is High This model is part of this paper + code: - 📝 [Paper](https://arxiv.org/abs/2105.02855) - 💻 [Code](https://github.com/wietsedv/low-resource-adapt) ## Models The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub: ### Lexical layers These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`). - 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language) - 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings) - 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian) ### POS tagging These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above. - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch) - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings) - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
GroNLP/bert-base-dutch-cased-upos-alpino
GroNLP
bert
9
6
transformers
0
token-classification
true
true
true
null
['nl']
null
null
0
0
0
0
0
0
0
['BERTje', 'pos']
false
true
true
1,515
Wietse de Vries • Martijn Bartelds • Malvina Nissim • Martijn Wieling # Adapting Monolingual Models: Data can be Scarce when Language Similarity is High This model is part of this paper + code: - 📝 [Paper](https://arxiv.org/abs/2105.02855) - 💻 [Code](https://github.com/wietsedv/low-resource-adapt) ## Models The best fine-tuned models for Gronings and West Frisian are available on the HuggingFace model hub: ### Lexical layers These models are identical to [BERTje](https://github.com/wietsedv/bertje), but with different lexical layers (`bert.embeddings.word_embeddings`). - 🤗 [`GroNLP/bert-base-dutch-cased`](https://huggingface.co/GroNLP/bert-base-dutch-cased) (Dutch; source language) - 🤗 [`GroNLP/bert-base-dutch-cased-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-gronings) (Gronings) - 🤗 [`GroNLP/bert-base-dutch-cased-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-frisian) (West Frisian) ### POS tagging These models share the same fine-tuned Transformer layers + classification head, but with the retrained lexical layers from the models above. - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino) (Dutch) - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-gronings`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-gronings) (Gronings) - 🤗 [`GroNLP/bert-base-dutch-cased-upos-alpino-frisian`](https://huggingface.co/GroNLP/bert-base-dutch-cased-upos-alpino-frisian) (West Frisian)
GroNLP/bert-base-dutch-cased
GroNLP
bert
9
78,874
transformers
5
fill-mask
true
true
true
null
['nl']
null
null
0
0
0
0
0
0
0
['BERTje']
false
true
true
6,472
# BERTje: A Dutch BERT model [Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) • [Andreas van Cranenburgh](https://www.semanticscholar.org/author/Andreas-van-Cranenburgh/2791585) • [Arianna Bisazza](https://www.semanticscholar.org/author/Arianna-Bisazza/3242253) • [Tommaso Caselli](https://www.semanticscholar.org/author/Tommaso-Caselli/1864635) • [Gertjan van Noord](https://www.semanticscholar.org/author/Gertjan-van-Noord/143715131) • [Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475) ## Model description BERTje is a Dutch pre-trained BERT model developed at the University of Groningen. <img src="https://raw.githubusercontent.com/wietsedv/bertje/master/bertje.png" height="250"> For details, check out our paper on [arXiv](https://arxiv.org/abs/1912.09582), the code on [Github](https://github.com/wietsedv/bertje) and related work on [Semantic Scholar](https://www.semanticscholar.org/paper/BERTje%3A-A-Dutch-BERT-Model-Vries-Cranenburgh/a4d5e425cac0bf84c86c0c9f720b6339d6288ffa). The paper and Github page mention fine-tuned models that are available [here](https://huggingface.co/wietsedv). ## How to use ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased") model = AutoModel.from_pretrained("GroNLP/bert-base-dutch-cased") # PyTorch model = TFAutoModel.from_pretrained("GroNLP/bert-base-dutch-cased") # Tensorflow ``` **WARNING:** The vocabulary size of BERTje changed in 2021. If you use an older fine-tuned model and experience problems with the `GroNLP/bert-base-dutch-cased` tokenizer, use the following tokenizer: ```python tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased", revision="v1") # v1 is the old vocabulary ``` ## Benchmarks The arXiv paper lists benchmarks. Here are a couple of comparisons between BERTje, multilingual BERT, BERT-NL and RobBERT that were done after writing the paper. Unlike some other comparisons, the fine-tuning procedures for these benchmarks are identical for each pre-trained model. You may be able to achieve higher scores for individual models by optimizing fine-tuning procedures. More experimental results will be added to this page when they are finished. Technical details about how we fine-tuned these models will be published later, as well as downloadable fine-tuned checkpoints. All of the tested models are *base* sized (12 layers) with cased tokenization. Headers in the tables below link to original data sources. Scores link to the model pages that correspond to that specific fine-tuned model. These tables will be updated when more simple fine-tuned models are made available.
### Named Entity Recognition | Model | [CoNLL-2002](https://www.clips.uantwerpen.be/conll2002/ner/) | [SoNaR-1](https://ivdnt.org/downloads/taalmaterialen/tstc-sonar-corpus) | spaCy UD LassySmall | | --- | --- | --- | --- | | **BERTje** | [**90.24**](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-conll2002-ner) | [**84.93**](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-sonar-ner) | [86.10](https://huggingface.co/wietsedv/bert-base-dutch-cased-finetuned-udlassy-ner) | | [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) | [88.61](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-conll2002-ner) | [84.19](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-sonar-ner) | [**86.77**](https://huggingface.co/wietsedv/bert-base-multilingual-cased-finetuned-udlassy-ner) | | [BERT-NL](http://textdata.nl) | 85.05 | 80.45 | 81.62 | | [RobBERT](https://github.com/iPieter/RobBERT) | 84.72 | 81.98 | 79.84 | ### Part-of-speech tagging | Model | [UDv2.5 LassySmall](https://universaldependencies.org/treebanks/nl_lassysmall/index.html) | | --- | --- | | **BERTje** | **96.48** | | [mBERT](https://github.com/google-research/bert/blob/master/multilingual.md) | 96.20 | | [BERT-NL](http://textdata.nl) | 96.10 | | [RobBERT](https://github.com/iPieter/RobBERT) | 95.91 | ### BibTeX entry and citation info ```bibtex @misc{devries2019bertje, title = {{BERTje}: {A} {Dutch} {BERT} {Model}}, shorttitle = {{BERTje}}, author = {de Vries, Wietse and van Cranenburgh, Andreas and Bisazza, Arianna and Caselli, Tommaso and Noord, Gertjan van and Nissim, Malvina}, year = {2019}, month = dec, howpublished = {arXiv:1912.09582}, url = {http://arxiv.org/abs/1912.09582}, } ```
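Beyond the `AutoModel` loading shown in the card, BERTje can be exercised directly as a masked language model; the Dutch prompt below is an illustrative example, not part of the benchmarks.

```python
# Minimal sketch: masked-token prediction with BERTje via the fill-mask pipeline.
# The Dutch prompt is illustrative only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="GroNLP/bert-base-dutch-cased")
for prediction in fill_mask("Amsterdam is de [MASK] van Nederland."):
    print(prediction["token_str"], round(prediction["score"], 3))
```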
GroNLP/gpt2-medium-dutch-embeddings
GroNLP
gpt2
11
125
transformers
1
text-generation
true
true
true
null
['nl']
null
null
0
0
0
0
0
0
0
['adaption', 'recycled', 'gpt2-medium']
false
true
true
2,505
# GPT-2 recycled for Dutch (medium, adapted lexical embeddings) [Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) • [Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475) ## Model description This model is based on the medium OpenAI GPT-2 ([`gpt2-medium`](https://huggingface.co/gpt2-medium)) model. The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for a Dutch vocabulary. For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle). ## Related models ### Dutch - [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings. ### Italian - [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings. ## How to use ```python from transformers import pipeline pipe = pipeline("text-generation", model="GroNLP/gpt2-medium-dutch-embeddings") ``` ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings") model = AutoModel.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings") # PyTorch model = TFAutoModel.from_pretrained("GroNLP/gpt2-medium-dutch-embeddings") # Tensorflow ``` ## BibTeX entry ```bibtex @misc{devries2020good, title={As good as new. How to successfully recycle English GPT-2 to make models for other languages}, author={Wietse de Vries and Malvina Nissim}, year={2020}, eprint={2012.05628}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
GroNLP/gpt2-medium-italian-embeddings
GroNLP
gpt2
10
239
transformers
0
text-generation
true
true
true
null
['it']
null
null
0
0
0
0
0
0
0
['adaption', 'recycled', 'gpt2-medium']
false
true
true
2,518
# GPT-2 recycled for Italian (medium, adapted lexical embeddings) [Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) • [Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475) ## Model description This model is based on the medium OpenAI GPT-2 ([`gpt2-medium`](https://huggingface.co/gpt2-medium)) model. The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for an Italian vocabulary. For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle). ## Related models ### Dutch - [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings. ### Italian - [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings. ## How to use ```python from transformers import pipeline pipe = pipeline("text-generation", model="GroNLP/gpt2-medium-italian-embeddings") ``` ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-medium-italian-embeddings") model = AutoModel.from_pretrained("GroNLP/gpt2-medium-italian-embeddings") # PyTorch model = TFAutoModel.from_pretrained("GroNLP/gpt2-medium-italian-embeddings") # Tensorflow ``` ## BibTeX entry ```bibtex @misc{devries2020good, title={As good as new. How to successfully recycle English GPT-2 to make models for other languages}, author={Wietse de Vries and Malvina Nissim}, year={2020}, eprint={2012.05628}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
GroNLP/gpt2-small-dutch-embeddings
GroNLP
gpt2
10
6
transformers
0
text-generation
true
true
true
null
['nl']
null
null
0
0
0
0
0
0
0
['adaption', 'recycled', 'gpt2-small']
false
true
true
2,485
# GPT-2 recycled for Dutch (small, adapted lexical embeddings) [Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) • [Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475) ## Model description This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model. The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for a Dutch vocabulary. For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle). ## Related models ### Dutch - [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings. ### Italian - [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings. ## How to use ```python from transformers import pipeline pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch-embeddings") ``` ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-dutch-embeddings") model = AutoModel.from_pretrained("GroNLP/gpt2-small-dutch-embeddings") # PyTorch model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-dutch-embeddings") # Tensorflow ``` ## BibTeX entry ```bibtex @misc{devries2020good, title={As good as new. How to successfully recycle English GPT-2 to make models for other languages}, author={Wietse de Vries and Malvina Nissim}, year={2020}, eprint={2012.05628}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
GroNLP/gpt2-small-dutch
GroNLP
gpt2
10
1,578
transformers
3
text-generation
true
true
true
null
['nl']
null
null
0
0
0
0
0
0
0
['adaption', 'recycled', 'gpt2-small']
false
true
true
2,258
# GPT-2 recycled for Dutch (small) [Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) • [Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475) ## Model description This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model. For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle). ## Related models ### Dutch - [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings. ### Italian - [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings. ## How to use ```python from transformers import pipeline pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch") ``` ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-dutch") model = AutoModel.from_pretrained("GroNLP/gpt2-small-dutch") # PyTorch model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-dutch") # Tensorflow ``` ## BibTeX entry ```bibtex @misc{devries2020good, title={As good as new. How to successfully recycle English GPT-2 to make models for other languages}, author={Wietse de Vries and Malvina Nissim}, year={2020}, eprint={2012.05628}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
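For generation, the pipeline call from the card accepts the usual sampling arguments; the prompt and settings below are illustrative, not tuned values.

```python
# Minimal sketch: sampling Dutch text from the recommended small model.
# Prompt and sampling settings are illustrative only.
from transformers import pipeline, set_seed

set_seed(42)
pipe = pipeline("text-generation", model="GroNLP/gpt2-small-dutch")
out = pipe("Het weer in Groningen is vandaag", max_length=40, do_sample=True, top_k=50)
print(out[0]["generated_text"])
```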
GroNLP/gpt2-small-italian-embeddings
GroNLP
gpt2
10
14
transformers
0
text-generation
true
true
true
null
['it']
null
null
0
0
0
0
0
0
0
['adaption', 'recycled', 'gpt2-small']
false
true
true
2,498
# GPT-2 recycled for Italian (small, adapted lexical embeddings) [Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) • [Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475) ## Model description This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model. The Transformer layer weights in this model are identical to the original English model, but the lexical layer has been retrained for an Italian vocabulary. For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle). ## Related models ### Dutch - [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings. ### Italian - [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings. ## How to use ```python from transformers import pipeline pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian-embeddings") ``` ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian-embeddings") model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # PyTorch model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian-embeddings") # Tensorflow ``` ## BibTeX entry ```bibtex @misc{devries2020good, title={As good as new. How to successfully recycle English GPT-2 to make models for other languages}, author={Wietse de Vries and Malvina Nissim}, year={2020}, eprint={2012.05628}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
GroNLP/gpt2-small-italian
GroNLP
gpt2
10
331
transformers
2
text-generation
true
true
true
null
['it']
null
null
0
0
0
0
0
0
0
['adaption', 'recycled', 'gpt2-small']
false
true
true
2,268
# GPT-2 recycled for Italian (small) [Wietse de Vries](https://www.semanticscholar.org/author/Wietse-de-Vries/144611157) • [Malvina Nissim](https://www.semanticscholar.org/author/M.-Nissim/2742475) ## Model description This model is based on the small OpenAI GPT-2 ([`gpt2`](https://huggingface.co/gpt2)) model. For details, check out our paper on [arXiv](https://arxiv.org/abs/2012.05628) and the code on [Github](https://github.com/wietsedv/gpt2-recycle). ## Related models ### Dutch - [`gpt2-small-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-small-dutch-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-dutch`](https://huggingface.co/GroNLP/gpt2-small-dutch): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-dutch-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-dutch-embeddings): Medium model size with only retrained lexical embeddings. ### Italian - [`gpt2-small-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-small-italian-embeddings): Small model size with only retrained lexical embeddings. - [`gpt2-small-italian`](https://huggingface.co/GroNLP/gpt2-small-italian): Small model size with retrained lexical embeddings and additional fine-tuning of the full model. (**Recommended**) - [`gpt2-medium-italian-embeddings`](https://huggingface.co/GroNLP/gpt2-medium-italian-embeddings): Medium model size with only retrained lexical embeddings. ## How to use ```python from transformers import pipeline pipe = pipeline("text-generation", model="GroNLP/gpt2-small-italian") ``` ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel tokenizer = AutoTokenizer.from_pretrained("GroNLP/gpt2-small-italian") model = AutoModel.from_pretrained("GroNLP/gpt2-small-italian") # PyTorch model = TFAutoModel.from_pretrained("GroNLP/gpt2-small-italian") # Tensorflow ``` ## BibTeX entry ```bibtex @misc{devries2020good, title={As good as new. How to successfully recycle English GPT-2 to make models for other languages}, author={Wietse de Vries and Malvina Nissim}, year={2020}, eprint={2012.05628}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
GroNLP/hateBERT
GroNLP
bert
8
2,891
transformers
11
fill-mask
true
false
false
null
['en']
null
null
0
0
0
0
0
0
0
['HateBERT', 'text classification', 'abusive language', 'hate speech', 'offensive language']
false
true
true
2,355
# HateBERT [Tommaso Caselli](https://www.semanticscholar.org/author/Tommaso-Caselli/1864635) • [Valerio Basile](https://www.semanticscholar.org/author/Valerio-Basile/3101511) • [Jelena Mitrovic](https://www.semanticscholar.org/author/Jelena-Mitrovic/145157863) • [Michael Granitzer](https://www.semanticscholar.org/author/M.-Granitzer/2389675) ## Model description HateBERT is an English pre-trained BERT model obtained by further training the English BERT base uncased model with more than 1 million posts from banned Reddit communities. The model has been developed as a collaboration between the University of Groningen, the University of Turin, and the University of Passau. For details, check out the paper presented at [WOAH 2021](https://aclanthology.org/2021.woah-1.3/). The code and the fine-tuned models are available on [OSF](https://osf.io/tbd58/?view_only=cb79b3228d4248ddb875eb1803525ad8). ### BibTeX entry and citation info ```bibtex @inproceedings{caselli-etal-2021-hatebert, title = "{H}ate{BERT}: Retraining {BERT} for Abusive Language Detection in {E}nglish", author = "Caselli, Tommaso and Basile, Valerio and Mitrovi{\'c}, Jelena and Granitzer, Michael", booktitle = "Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021)", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.woah-1.3", doi = "10.18653/v1/2021.woah-1.3", pages = "17--25", abstract = "We introduce HateBERT, a re-trained BERT model for abusive language detection in English. The model was trained on RAL-E, a large-scale dataset of Reddit comments in English from communities banned for being offensive, abusive, or hateful that we have curated and made available to the public. We present the results of a detailed comparison between a general pre-trained language model and the retrained version on three English datasets for offensive, abusive language and hate speech detection tasks. In all datasets, HateBERT outperforms the corresponding general BERT model. We also discuss a battery of experiments comparing the portability of the fine-tuned models across the datasets, suggesting that portability is affected by compatibility of the annotated phenomena.", } ```
Guan-Ting/StyleSpeech-MelGAN-vocoder-16kHz
Guan-Ting
null
4
0
null
4
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
995
### The MelGAN vocoder for StyleSpeech #### About StyleSpeech * StyleSpeech or Meta-StyleSpeech is a model for Multi-Speaker Adaptive Text-to-Speech Generation. * The StyleSpeech model can be trained with the official implementation (https://github.com/KevinMIN95/StyleSpeech). #### About MelGAN vocoder * This MelGAN vocoder is used to transform the mel-spectrogram back into a waveform. * StyleSpeech uses a 16 kHz sampling rate, and there was no 16 kHz multi-speaker vocoder available. * Thus, I trained this vocoder from scratch using the Libri-TTS train-100 hour dataset. The training pipeline is the same as the official MelGAN (https://github.com/descriptinc/melgan-neurips). * The synthesized audio is close to the official demo in quality. #### Usage * Please follow the official MelGAN repository (https://github.com/descriptinc/melgan-neurips) to load the pre-trained checkpoint and convert your mel-spectrogram back into a waveform (a rough sketch is given below). #### Training Details * GPU: RTX 2080Ti * Training epochs: 3000
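The usage notes above point to the official repo but do not include code. Below is a rough, hedged sketch of what loading this checkpoint and running mel-to-waveform inversion might look like; the `mel2wav.modules.Generator` import, its `(n_mel_channels, ngf, n_residual_layers)` arguments, and the checkpoint file name are assumptions based on the official melgan-neurips code layout and may need adjusting for the files in this repo.

```python
# Hedged sketch -- not an official snippet. Assumes the melgan-neurips repo is on
# PYTHONPATH and that this repo's generator weights have been downloaded locally.
import torch
from mel2wav.modules import Generator  # generator class from descriptinc/melgan-neurips

generator = Generator(80, 32, 3)  # (n_mel_channels, ngf, n_residual_layers) -- default MelGAN config (assumption)
state_dict = torch.load("best_netG.pt", map_location="cpu")  # hypothetical local checkpoint file name
generator.load_state_dict(state_dict)
generator.eval()

mel = torch.randn(1, 80, 200)  # (batch, n_mels, frames) mel-spectrogram, e.g. from StyleSpeech
with torch.no_grad():
    audio = generator(mel).squeeze(1)  # waveform at 16 kHz, shape (batch, samples)
```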
GunnarThor/talromur_f_tacotron2
GunnarThor
null
18
12
espnet
0
text-to-speech
false
false
false
cc-by-4.0
['en']
['talromur']
null
0
0
0
0
0
0
0
['espnet', 'audio', 'text-to-speech']
false
true
true
6,062
## ESPnet2 TTS model ### `GunnarThor/talromur_f_tacotron2` This model was trained by Gunnar Thor using talromur recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout 81522029063e42ce807d9d145b64d3f9aca45987 pip install -e . cd egs2/talromur/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model GunnarThor/talromur_f_tacotron2 ``` ## TTS config <details><summary>expand</summary> ``` config: ./conf/tuning/train_tacotron2.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp_f/tts_train_tacotron2_raw_phn_none ngpu: 1 seed: 0 num_workers: 1 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 2 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 55005 dist_launcher: null multiprocessing_distributed: true unused_parameters: false sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 200 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min - - train - loss - min keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: 1.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: null use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 500 batch_size: 20 valid_batch_size: null batch_bins: 5120000 valid_batch_bins: null train_shape_file: - exp_f/tts_stats_raw_phn_none/train/text_shape.phn - exp_f/tts_stats_raw_phn_none/train/speech_shape valid_shape_file: - exp_f/tts_stats_raw_phn_none/valid/text_shape.phn - exp_f/tts_stats_raw_phn_none/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_f_phn/text - text - text - - dump/raw/train_f_phn/wav.scp - speech - sound valid_data_path_and_name_and_type: - - dump/raw/dev_f_phn/text - text - text - - dump/raw/dev_f_phn/wav.scp - speech - sound allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.001 eps: 1.0e-06 weight_decay: 0.0 scheduler: null scheduler_conf: {} token_list: - <blank> - <unk> - ',' - . 
- r - t - n - a0 - s - I0 - D - l - Y0 - m - v - h - k - E1 - a:1 - E:1 - f - G - j - a1 - T - p - c - au:1 - E0 - i:1 - O:1 - I:1 - I1 - r_0 - t_h - k_h - Y1 - ei1 - i0 - ei:1 - ou:1 - u:1 - O1 - N - l_0 - '91' - ai0 - au1 - ou0 - ai:1 - n_0 - ei0 - O0 - ou1 - i1 - '9:1' - ai1 - '90' - au0 - x - c_h - 9i:1 - C - p_h - u0 - Y:1 - J - 9i1 - u1 - 9i0 - N_0 - m_0 - J_0 - Yi0 - Oi1 - Yi1 - Oi0 - au:0 - '9:0' - E:0 - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 22050 fmin: 80 fmax: 7600 n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp_f/tts_stats_raw_phn_none/train/feats_stats.npz tts: tacotron2 tts_conf: embed_dim: 512 elayers: 1 eunits: 512 econv_layers: 3 econv_chans: 512 econv_filts: 5 atype: location adim: 512 aconv_chans: 32 aconv_filts: 15 cumulate_att_w: true dlayers: 2 dunits: 1024 prenet_layers: 2 prenet_units: 256 postnet_layers: 5 postnet_chans: 512 postnet_filts: 5 output_activation: null use_batch_norm: true use_concate: true use_residual: false dropout_rate: 0.5 zoneout_rate: 0.1 reduction_factor: 1 spk_embed_dim: null use_masking: true bce_pos_weight: 5.0 use_guided_attn_loss: true guided_attn_loss_sigma: 0.4 guided_attn_loss_lambda: 1.0 pitch_extract: null pitch_extract_conf: {} pitch_normalize: null pitch_normalize_conf: {} energy_extract: null energy_extract_conf: {} energy_normalize: null energy_normalize_conf: {} required: - output_dir - token_list version: 0.10.5a1 distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
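For Python-level inference (as an alternative to the recipe script above), the standard ESPnet2 `Text2Speech` interface should apply. This is a hedged sketch rather than an official snippet; note that the config shows `token_type: phn` with `g2p: null`, so plain Icelandic text may need to be phonemized into the listed token set first.

```python
# Hedged sketch (not from the original card). Requires espnet2 and espnet_model_zoo.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("GunnarThor/talromur_f_tacotron2")

# NOTE: the model was trained on phoneme tokens (token_type: phn, g2p: null),
# so the input below is a placeholder and may need to be phonemized first.
out = text2speech("halló heimur")
sf.write("out.wav", out["wav"].numpy(), text2speech.fs, "PCM_16")
```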
GusNicho/distilbert-base-cased-finetuned
GusNicho
distilbert
13
5
transformers
0
fill-mask
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,301
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-cased-finetuned This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9161 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3101 | 1.0 | 974 | 2.0502 | | 2.0831 | 2.0 | 1948 | 1.9627 | | 2.0198 | 3.0 | 2922 | 1.8998 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
GusNicho/roberta-base-finetuned
GusNicho
roberta
13
4
transformers
0
fill-mask
true
false
false
mit
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,116
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - eval_loss: 1.4057 - eval_runtime: 3.7087 - eval_samples_per_second: 167.712 - eval_steps_per_second: 2.696 - epoch: 2.11 - step: 2053 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
Guscode/DKbert-hatespeech-detection
Guscode
bert
9
14
transformers
1
text-classification
true
true
false
mit
['da']
['DKHate - OffensEval2020']
null
0
0
0
0
0
0
0
['Hatespeech', 'Danish', 'BERT']
false
true
true
967
# DKbert-hatespeech-classification Use this model to detect hatespeech in Danish. For details, guide and command line tool see [DK hate github](https://github.com/Guscode/DKbert-hatespeech-detection) ## Training data Training data is from OffensEval2020 which can be found [here]( https://figshare.com/articles/dataset/Danish_Hate_Speech_Abusive_Language_data/12220805) ## Performance The model achieves a macro F1-score of 0.78 Precision hateful: 0.77 Recall hateful: 0.49 See more on [DK hate github](https://github.com/Guscode/DKbert-hatespeech-detection) ## Training procedure - [BOTXO Nordic Bert](https://huggingface.co/DJSammy/bert-base-danish-uncased_BotXO,ai) - Learning rate: 1e-5, - Batch size: 16 - Max sequence length: 128 ## Project information This model was made in collaboration between [Johan Horsmans](https://github.com/JohanHorsmans) and [Gustav Aarup Lauridsen](https://github.com/Guscode) for their Cultural Data Science Exam.
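The card does not include an inference snippet; here is a minimal sketch using the standard `transformers` pipeline (the example sentence is arbitrary and the label names come from the model config):

```python
from transformers import pipeline

# Danish hatespeech detector; returns a label and confidence score per input.
classifier = pipeline("text-classification", model="Guscode/DKbert-hatespeech-detection")
print(classifier("Jeg elsker dig"))  # -> [{'label': ..., 'score': ...}]
```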
HHousen/distil-led-large-cnn-16384
HHousen
led
8
299
transformers
3
text2text-generation
true
false
false
apache-2.0
['en']
['cnn_dailymail']
null
0
0
0
0
0
0
0
[]
false
true
true
581
## DistilLED Large CNN 16384 *distil-led-large-cnn-16384* was initialized from [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6), in a fashion similar to [allenai/led-large-16384](https://huggingface.co/allenai/led-large-16384). To be able to process 16K tokens, *sshleifer/distilbart-cnn-12-6*'s position embedding matrix was simply copied 16 times. This checkpoint should be loaded into `LEDForConditionalGeneration.from_pretrained`. See the [LED documentation](https://huggingface.co/transformers/model_doc/led.html) for more information.
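A hedged usage sketch for long-document summarization with this checkpoint; the global-attention-on-first-token setup and the generation parameters below are conventional LED settings rather than values taken from this card.

```python
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("HHousen/distil-led-large-cnn-16384")
model = LEDForConditionalGeneration.from_pretrained("HHousen/distil-led-large-cnn-16384")

long_text = "..."  # a long article, up to ~16K tokens
inputs = tokenizer(long_text, return_tensors="pt", truncation=True, max_length=16384)

# LED convention: give the first token global attention.
global_attention_mask = torch.zeros_like(inputs.input_ids)
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs.input_ids,
    attention_mask=inputs.attention_mask,
    global_attention_mask=global_attention_mask,
    num_beams=4,
    max_length=256,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```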
HHousen/household-rooms
HHousen
vit
11
107
transformers
1
image-classification
true
false
false
null
null
null
null
0
0
0
0
0
0
0
['image-classification', 'pytorch', 'huggingpics']
false
true
true
605
# household-rooms Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### bathroom ![bathroom](images/bathroom.jpg) #### bedroom ![bedroom](images/bedroom.jpg) #### dining room ![dining room](images/dining_room.jpg) #### kitchen ![kitchen](images/kitchen.jpg) #### living room ![living room](images/living_room.jpg)
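A minimal inference sketch (not part of the autogenerated card); the image path is a placeholder and can also be a URL or a PIL image:

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="HHousen/household-rooms")
print(classifier("images/kitchen.jpg"))  # top room labels with confidence scores
```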
HScomcom/gpt2-MyLittlePony
HScomcom
gpt2
10
7
transformers
1
text-generation
true
false
true
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
841
The model that generates My Little Pony scripts. Fine-tuning data: [Kaggle](https://www.kaggle.com/liury123/my-little-pony-transcript?select=clean_dialog.csv) API page: [Ainize](https://ainize.ai/fpem123/GPT2-MyLittlePony) Demo page: [End point](https://master-gpt2-my-little-pony-fpem123.endpoint.ainize.ai/) ### Model information Base model: gpt-2 large Epoch: 30 Train runtime: 4943.9641 secs Loss: 0.0291 ### ===Teachable NLP=== Training a GPT-2 model requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free. Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp) Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp)
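A minimal generation sketch (not from the original card); the prompt and sampling settings are illustrative only:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="HScomcom/gpt2-MyLittlePony")
out = generator("Twilight Sparkle:", max_length=60, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```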
HScomcom/gpt2-fairytales
HScomcom
gpt2
10
10
transformers
0
text-generation
true
false
true
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
909
### Model information Fine-tuning data: https://www.kaggle.com/cuddlefish/fairy-tales License: CC0: Public Domain Base model: gpt-2 large Epoch: 30 Train runtime: 17861.6048 secs Loss: 0.0412 API page: [Ainize](https://ainize.ai/fpem123/GPT2-FairyTales?branch=master) Demo page: [End-point](https://master-gpt2-fairy-tales-fpem123.endpoint.ainize.ai/) ### ===Teachable NLP=== Training a GPT-2 model requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free. Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp) Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp) See also my other fairytale model: [showcase](https://forum.ainetwork.ai/t/teachable-nlp-gpt-2-fairy-tales/68)
HScomcom/gpt2-lovecraft
HScomcom
gpt2
10
3
transformers
1
text-generation
true
false
true
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
899
### Model information Fine-tuning data: https://www.kaggle.com/bennijesus/lovecraft-fiction License: CC0: Public Domain Base model: gpt-2 large Epoch: 30 Train runtime: 10307.3488 secs Loss: 0.0292 API page: [Ainize](https://ainize.ai/fpem123/GPT2-LoveCraft?branch=master) Demo page: [End-point](https://master-gpt2-love-craft-fpem123.endpoint.ainize.ai/) ### ===Teachable NLP=== Training a GPT-2 model requires writing code and GPU resources, but with Teachable NLP you can easily fine-tune a model and get an API to use it for free. Teachable NLP: [Teachable NLP](https://ainize.ai/teachable-nlp) Tutorial: [Tutorial](https://forum.ainetwork.ai/t/teachable-nlp-how-to-use-teachable-nlp/65?utm_source=community&utm_medium=huggingface&utm_campaign=model&utm_content=teachable%20nlp) See also my other Lovecraft model: [showcase](https://forum.ainetwork.ai/t/teachable-nlp-gpt-2-lovecraft/71)
HamidRezaAttar/gpt2-product-description-generator
HamidRezaAttar
gpt2
8
84
transformers
10
text-generation
true
false
false
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
['text-generation']
false
true
true
2,117
## GPT2-Home This model is fine-tuned using GPT-2 on amazon home products metadata. It can generate descriptions for your **home** products by getting a text prompt. ### Model description [GPT-2](https://openai.com/blog/better-language-models/) is a large [transformer](https://arxiv.org/abs/1706.03762)-based language model with 1.5 billion parameters, trained on a dataset of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data. ### Live Demo For testing model with special configuration, please visit [Demo](https://huggingface.co/spaces/HamidRezaAttar/gpt2-home) ### Blog Post For more detailed information about project development please refer to my [blog post](https://hamidrezaattar.github.io/blog/markdown/2022/02/17/gpt2-home.html). ### How to use For best experience and clean outputs, you can use Live Demo mentioned above, also you can use the notebook mentioned in my [GitHub](https://github.com/HamidRezaAttar/GPT2-Home) You can use this model directly with a pipeline for text generation. ```python >>> from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline >>> tokenizer = AutoTokenizer.from_pretrained("HamidRezaAttar/gpt2-product-description-generator") >>> model = AutoModelForCausalLM.from_pretrained("HamidRezaAttar/gpt2-product-description-generator") >>> generator = pipeline('text-generation', model, tokenizer=tokenizer, config={'max_length':100}) >>> generated_text = generator("This bed is very comfortable.") ``` ### Citation info ```bibtex @misc{GPT2-Home, author = {HamidReza Fatollah Zadeh Attar}, title = {GPT2-Home the English home product description generator}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/HamidRezaAttar/GPT2-Home}}, } ```
Hank/distilbert-base-uncased-finetuned-ner
Hank
distilbert
13
9
transformers
0
token-classification
true
false
false
apache-2.0
null
['conll2003']
null
1
1
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,554
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0612 - Precision: 0.9259 - Recall: 0.9369 - F1: 0.9314 - Accuracy: 0.9839 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.243 | 1.0 | 878 | 0.0703 | 0.9134 | 0.9181 | 0.9158 | 0.9806 | | 0.0515 | 2.0 | 1756 | 0.0609 | 0.9214 | 0.9343 | 0.9278 | 0.9832 | | 0.0305 | 3.0 | 2634 | 0.0612 | 0.9259 | 0.9369 | 0.9314 | 0.9839 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
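A minimal inference sketch (not part of the autogenerated card); `aggregation_strategy="simple"` merges word pieces into whole entity spans:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Hank/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))  # entity spans with labels and scores
```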
Haotian/distilgpt2-finetuned-wikitext2
Haotian
gpt2
9
2
transformers
0
text-generation
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
false
true
true
1,243
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-wikitext2 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6424 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.7608 | 1.0 | 2334 | 3.6655 | | 3.6335 | 2.0 | 4668 | 3.6455 | | 3.6066 | 3.0 | 7002 | 3.6424 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.0 - Tokenizers 0.10.3
HarrisDePerceptron/xls-r-1b-ur
HarrisDePerceptron
wav2vec2
25
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ur']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ur', 'robust-speech-event', 'hf-asr-leaderboard']
true
true
true
2,802
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: 0.9613 - Wer: 0.5376 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 5.3118 | 1.96 | 100 | 2.9093 | 0.9982 | | 2.2071 | 3.92 | 200 | 1.1737 | 0.7779 | | 1.6098 | 5.88 | 300 | 0.9984 | 0.7015 | | 1.4333 | 7.84 | 400 | 0.9800 | 0.6705 | | 1.2859 | 9.8 | 500 | 0.9582 | 0.6487 | | 1.2073 | 11.76 | 600 | 0.8841 | 0.6077 | | 1.1417 | 13.73 | 700 | 0.9118 | 0.6343 | | 1.0988 | 15.69 | 800 | 0.9217 | 0.6196 | | 1.0279 | 17.65 | 900 | 0.9165 | 0.5867 | | 0.9765 | 19.61 | 1000 | 0.9306 | 0.5978 | | 0.9161 | 21.57 | 1100 | 0.9305 | 0.5768 | | 0.8395 | 23.53 | 1200 | 0.9828 | 0.5819 | | 0.8306 | 25.49 | 1300 | 0.9397 | 0.5760 | | 0.7819 | 27.45 | 1400 | 0.9544 | 0.5742 | | 0.7509 | 29.41 | 1500 | 0.9278 | 0.5690 | | 0.7218 | 31.37 | 1600 | 0.9003 | 0.5587 | | 0.6725 | 33.33 | 1700 | 0.9659 | 0.5554 | | 0.6287 | 35.29 | 1800 | 0.9522 | 0.5561 | | 0.6077 | 37.25 | 1900 | 0.9154 | 0.5465 | | 0.5873 | 39.22 | 2000 | 0.9331 | 0.5469 | | 0.5621 | 41.18 | 2100 | 0.9335 | 0.5491 | | 0.5168 | 43.14 | 2200 | 0.9632 | 0.5458 | | 0.5114 | 45.1 | 2300 | 0.9349 | 0.5387 | | 0.4986 | 47.06 | 2400 | 0.9364 | 0.5380 | | 0.4761 | 49.02 | 2500 | 0.9584 | 0.5391 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
HarrisDePerceptron/xls-r-300m-ur-cv7
HarrisDePerceptron
wav2vec2
18
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ur']
['common_voice']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_7_0', 'generated_from_trainer']
true
true
true
4,238
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: 1.2924 - Wer: 0.7201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 11.2783 | 4.17 | 100 | 4.6409 | 1.0 | | 3.5578 | 8.33 | 200 | 3.1649 | 1.0 | | 3.1279 | 12.5 | 300 | 3.0335 | 1.0 | | 2.9944 | 16.67 | 400 | 2.9526 | 0.9983 | | 2.9275 | 20.83 | 500 | 2.9291 | 1.0009 | | 2.8077 | 25.0 | 600 | 2.5633 | 0.9895 | | 2.4438 | 29.17 | 700 | 1.9045 | 0.9564 | | 1.9659 | 33.33 | 800 | 1.4114 | 0.7960 | | 1.7092 | 37.5 | 900 | 1.2584 | 0.7637 | | 1.517 | 41.67 | 1000 | 1.2040 | 0.7507 | | 1.3966 | 45.83 | 1100 | 1.1273 | 0.7463 | | 1.3197 | 50.0 | 1200 | 1.1054 | 0.6957 | | 1.2476 | 54.17 | 1300 | 1.1035 | 0.7001 | | 1.1796 | 58.33 | 1400 | 1.0890 | 0.7097 | | 1.1237 | 62.5 | 1500 | 1.0883 | 0.7167 | | 1.0777 | 66.67 | 1600 | 1.1067 | 0.7219 | | 1.0051 | 70.83 | 1700 | 1.1115 | 0.7236 | | 0.9521 | 75.0 | 1800 | 1.0867 | 0.7132 | | 0.9147 | 79.17 | 1900 | 1.0852 | 0.7210 | | 0.8798 | 83.33 | 2000 | 1.1411 | 0.7097 | | 0.8317 | 87.5 | 2100 | 1.1634 | 0.7018 | | 0.7946 | 91.67 | 2200 | 1.1621 | 0.7201 | | 0.7594 | 95.83 | 2300 | 1.1482 | 0.7036 | | 0.729 | 100.0 | 2400 | 1.1493 | 0.7062 | | 0.7055 | 104.17 | 2500 | 1.1726 | 0.6931 | | 0.6622 | 108.33 | 2600 | 1.1938 | 0.7001 | | 0.6583 | 112.5 | 2700 | 1.1832 | 0.7149 | | 0.6299 | 116.67 | 2800 | 1.1996 | 0.7175 | | 0.5903 | 120.83 | 2900 | 1.1986 | 0.7132 | | 0.5816 | 125.0 | 3000 | 1.1909 | 0.7010 | | 0.5583 | 129.17 | 3100 | 1.2079 | 0.6870 | | 0.5392 | 133.33 | 3200 | 1.2109 | 0.7228 | | 0.5412 | 137.5 | 3300 | 1.2353 | 0.7245 | | 0.5136 | 141.67 | 3400 | 1.2390 | 0.7254 | | 0.5007 | 145.83 | 3500 | 1.2273 | 0.7123 | | 0.4883 | 150.0 | 3600 | 1.2773 | 0.7289 | | 0.4835 | 154.17 | 3700 | 1.2678 | 0.7289 | | 0.4568 | 158.33 | 3800 | 1.2592 | 0.7350 | | 0.4525 | 162.5 | 3900 | 1.2705 | 0.7254 | | 0.4379 | 166.67 | 4000 | 1.2717 | 0.7306 | | 0.4198 | 170.83 | 4100 | 1.2618 | 0.7219 | | 0.4216 | 175.0 | 4200 | 1.2909 | 0.7158 | | 0.4305 | 179.17 | 4300 | 1.2808 | 0.7167 | | 0.399 | 183.33 | 4400 | 1.2750 | 0.7193 | | 0.3937 | 187.5 | 4500 | 1.2719 | 0.7149 | | 0.3905 | 191.67 | 4600 | 1.2816 | 0.7158 | | 0.3892 | 195.83 | 4700 | 1.2951 | 0.7210 | | 0.3932 | 200.0 | 4800 | 1.2924 | 0.7201 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
HarrisDePerceptron/xls-r-300m-ur-cv8-hi
HarrisDePerceptron
wav2vec2
22
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ur']
['common_voice']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer']
true
true
true
4,403
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3](https://huggingface.co/DrishtiSharma/wav2vec2-large-xls-r-300m-hi-d3) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: 1.5443 - Wer: 0.7030 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.000388 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 750 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 10.7052 | 1.96 | 100 | 3.4683 | 1.0 | | 3.2395 | 3.92 | 200 | 3.1489 | 1.0 | | 2.9951 | 5.88 | 300 | 2.9823 | 1.0007 | | 2.3574 | 7.84 | 400 | 1.2614 | 0.7598 | | 1.7287 | 9.8 | 500 | 1.1817 | 0.7421 | | 1.6144 | 11.76 | 600 | 1.1315 | 0.7321 | | 1.5598 | 13.73 | 700 | 1.2322 | 0.7550 | | 1.5418 | 15.69 | 800 | 1.2721 | 0.7819 | | 1.4578 | 17.65 | 900 | 1.1710 | 0.7531 | | 1.4311 | 19.61 | 1000 | 1.2042 | 0.7491 | | 1.3483 | 21.57 | 1100 | 1.1702 | 0.7465 | | 1.3078 | 23.53 | 1200 | 1.1963 | 0.7421 | | 1.2576 | 25.49 | 1300 | 1.1501 | 0.7280 | | 1.2173 | 27.45 | 1400 | 1.2526 | 0.7299 | | 1.2217 | 29.41 | 1500 | 1.2479 | 0.7310 | | 1.1536 | 31.37 | 1600 | 1.2567 | 0.7432 | | 1.0939 | 33.33 | 1700 | 1.2801 | 0.7247 | | 1.0745 | 35.29 | 1800 | 1.2340 | 0.7151 | | 1.0454 | 37.25 | 1900 | 1.2372 | 0.7151 | | 1.0101 | 39.22 | 2000 | 1.2461 | 0.7376 | | 0.9833 | 41.18 | 2100 | 1.2553 | 0.7269 | | 0.9314 | 43.14 | 2200 | 1.2372 | 0.7015 | | 0.9147 | 45.1 | 2300 | 1.3035 | 0.7358 | | 0.8758 | 47.06 | 2400 | 1.2598 | 0.7092 | | 0.8356 | 49.02 | 2500 | 1.2557 | 0.7144 | | 0.8105 | 50.98 | 2600 | 1.2619 | 0.7236 | | 0.7947 | 52.94 | 2700 | 1.3994 | 0.7491 | | 0.7623 | 54.9 | 2800 | 1.2932 | 0.7133 | | 0.7282 | 56.86 | 2900 | 1.2799 | 0.7089 | | 0.7108 | 58.82 | 3000 | 1.3615 | 0.7148 | | 0.6896 | 60.78 | 3100 | 1.3129 | 0.7041 | | 0.6496 | 62.75 | 3200 | 1.4050 | 0.6934 | | 0.6075 | 64.71 | 3300 | 1.3571 | 0.7026 | | 0.6242 | 66.67 | 3400 | 1.3369 | 0.7063 | | 0.5865 | 68.63 | 3500 | 1.4368 | 0.7140 | | 0.5721 | 70.59 | 3600 | 1.4224 | 0.7066 | | 0.5475 | 72.55 | 3700 | 1.4798 | 0.7118 | | 0.5086 | 74.51 | 3800 | 1.5107 | 0.7232 | | 0.4958 | 76.47 | 3900 | 1.4849 | 0.7089 | | 0.5046 | 78.43 | 4000 | 1.4451 | 0.7114 | | 0.4694 | 80.39 | 4100 | 1.4674 | 0.7089 | | 0.4386 | 82.35 | 4200 | 1.5245 | 0.7103 | | 0.4516 | 84.31 | 4300 | 1.5032 | 0.7103 | | 0.4113 | 86.27 | 4400 | 1.5246 | 0.7196 | | 0.3972 | 88.24 | 4500 | 1.5318 | 0.7114 | | 0.4006 | 90.2 | 4600 | 1.5543 | 0.6982 | | 0.4014 | 92.16 | 4700 | 1.5442 | 0.7048 | | 0.3672 | 94.12 | 4800 | 1.5542 | 0.7137 | | 0.3666 | 96.08 | 4900 | 1.5414 | 0.7018 | | 0.3574 | 98.04 | 5000 | 1.5465 | 0.7059 | | 0.3428 | 100.0 | 5100 | 1.5443 | 0.7030 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
HarrisDePerceptron/xls-r-300m-ur
HarrisDePerceptron
wav2vec2
29
0
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ur']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ur', 'robust-speech-event', 'hf-asr-leaderboard']
true
true
true
4,415
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [HarrisDePerceptron/xls-r-300m-ur](https://huggingface.co/HarrisDePerceptron/xls-r-300m-ur) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: 1.0517 - WER: 0.5151291512915129 - CER: 0.23689640940982254 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.2991 | 1.96 | 100 | 0.9769 | 0.6627 | | 1.3415 | 3.92 | 200 | 0.9701 | 0.6594 | | 1.2998 | 5.88 | 300 | 0.9678 | 0.6668 | | 1.2881 | 7.84 | 400 | 0.9650 | 0.6613 | | 1.2369 | 9.8 | 500 | 0.9392 | 0.6502 | | 1.2293 | 11.76 | 600 | 0.9536 | 0.6480 | | 1.1709 | 13.73 | 700 | 0.9265 | 0.6402 | | 1.1492 | 15.69 | 800 | 0.9636 | 0.6506 | | 1.1044 | 17.65 | 900 | 0.9305 | 0.6351 | | 1.0704 | 19.61 | 1000 | 0.9329 | 0.6280 | | 1.0039 | 21.57 | 1100 | 0.9413 | 0.6295 | | 0.9756 | 23.53 | 1200 | 0.9718 | 0.6185 | | 0.9633 | 25.49 | 1300 | 0.9731 | 0.6133 | | 0.932 | 27.45 | 1400 | 0.9659 | 0.6199 | | 0.9252 | 29.41 | 1500 | 0.9766 | 0.6196 | | 0.9172 | 31.37 | 1600 | 1.0052 | 0.6199 | | 0.8733 | 33.33 | 1700 | 0.9955 | 0.6203 | | 0.868 | 35.29 | 1800 | 1.0069 | 0.6240 | | 0.8547 | 37.25 | 1900 | 0.9783 | 0.6258 | | 0.8451 | 39.22 | 2000 | 0.9845 | 0.6052 | | 0.8374 | 41.18 | 2100 | 0.9496 | 0.6137 | | 0.8153 | 43.14 | 2200 | 0.9756 | 0.6122 | | 0.8134 | 45.1 | 2300 | 0.9712 | 0.6096 | | 0.8019 | 47.06 | 2400 | 0.9565 | 0.5970 | | 0.7746 | 49.02 | 2500 | 0.9864 | 0.6096 | | 0.7664 | 50.98 | 2600 | 0.9988 | 0.6092 | | 0.7708 | 52.94 | 2700 | 1.0181 | 0.6255 | | 0.7468 | 54.9 | 2800 | 0.9918 | 0.6148 | | 0.7241 | 56.86 | 2900 | 1.0150 | 0.6018 | | 0.7165 | 58.82 | 3000 | 1.0439 | 0.6063 | | 0.7104 | 60.78 | 3100 | 1.0016 | 0.6037 | | 0.6954 | 62.75 | 3200 | 1.0117 | 0.5970 | | 0.6753 | 64.71 | 3300 | 1.0191 | 0.6037 | | 0.6803 | 66.67 | 3400 | 1.0190 | 0.6033 | | 0.661 | 68.63 | 3500 | 1.0284 | 0.6007 | | 0.6597 | 70.59 | 3600 | 1.0060 | 0.5967 | | 0.6398 | 72.55 | 3700 | 1.0372 | 0.6048 | | 0.6105 | 74.51 | 3800 | 1.0048 | 0.6044 | | 0.6164 | 76.47 | 3900 | 1.0398 | 0.6148 | | 0.6354 | 78.43 | 4000 | 1.0272 | 0.6133 | | 0.5952 | 80.39 | 4100 | 1.0364 | 0.6081 | | 0.5814 | 82.35 | 4200 | 1.0418 | 0.6092 | | 0.6079 | 84.31 | 4300 | 1.0277 | 0.5967 | | 0.5748 | 86.27 | 4400 | 1.0362 | 0.6041 | | 0.5624 | 88.24 | 4500 | 1.0427 | 0.6007 | | 0.5767 | 90.2 | 4600 | 1.0370 | 0.5919 | | 0.5793 | 92.16 | 4700 | 1.0442 | 0.6011 | | 0.547 | 94.12 | 4800 | 1.0516 | 0.5982 | | 0.5513 | 96.08 | 4900 | 1.0461 | 0.5989 | | 0.5429 | 98.04 | 5000 | 1.0504 | 0.5996 | | 0.5404 | 100.0 | 5100 | 1.0517 | 0.5967 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
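The card reports training metrics but no inference code; a minimal ASR sketch with the standard pipeline follows (the audio path is a placeholder, input should be 16 kHz mono, and no external language model is applied):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="HarrisDePerceptron/xls-r-300m-ur")
print(asr("urdu_sample_16khz.wav"))  # -> {'text': '...'}
```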
HarrisDePerceptron/xlsr-large-53-ur
HarrisDePerceptron
wav2vec2
21
5
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
['ur']
['mozilla-foundation/common_voice_8_0']
null
0
0
0
0
0
0
0
['automatic-speech-recognition', 'mozilla-foundation/common_voice_8_0', 'generated_from_trainer', 'ur', 'robust-speech-event', 'hf-asr-leaderboard']
true
true
true
2,812
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - UR dataset. It achieves the following results on the evaluation set: - Loss: 0.8888 - Wer: 0.6642 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 50 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 10.1224 | 1.96 | 100 | 3.5429 | 1.0 | | 3.2411 | 3.92 | 200 | 3.1786 | 1.0 | | 3.1283 | 5.88 | 300 | 3.0571 | 1.0 | | 3.0044 | 7.84 | 400 | 2.9560 | 0.9996 | | 2.9388 | 9.8 | 500 | 2.8977 | 1.0011 | | 2.86 | 11.76 | 600 | 2.6944 | 0.9952 | | 2.5538 | 13.73 | 700 | 2.0967 | 0.9435 | | 2.1214 | 15.69 | 800 | 1.4816 | 0.8428 | | 1.8136 | 17.65 | 900 | 1.2459 | 0.8048 | | 1.6795 | 19.61 | 1000 | 1.1232 | 0.7649 | | 1.5571 | 21.57 | 1100 | 1.0510 | 0.7432 | | 1.4975 | 23.53 | 1200 | 1.0298 | 0.6963 | | 1.4485 | 25.49 | 1300 | 0.9775 | 0.7074 | | 1.3924 | 27.45 | 1400 | 0.9798 | 0.6956 | | 1.3604 | 29.41 | 1500 | 0.9345 | 0.7092 | | 1.3224 | 31.37 | 1600 | 0.9535 | 0.6830 | | 1.2816 | 33.33 | 1700 | 0.9178 | 0.6679 | | 1.2623 | 35.29 | 1800 | 0.9249 | 0.6679 | | 1.2421 | 37.25 | 1900 | 0.9124 | 0.6734 | | 1.2208 | 39.22 | 2000 | 0.8962 | 0.6664 | | 1.2145 | 41.18 | 2100 | 0.8903 | 0.6734 | | 1.1888 | 43.14 | 2200 | 0.8883 | 0.6708 | | 1.1933 | 45.1 | 2300 | 0.8928 | 0.6723 | | 1.1838 | 47.06 | 2400 | 0.8868 | 0.6679 | | 1.1634 | 49.02 | 2500 | 0.8886 | 0.6657 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3 - Tokenizers 0.11.0
Harshveer/autonlp-formality_scoring_2-32597818
Harshveer
roberta
10
3
transformers
0
text-classification
true
false
false
null
['en']
['Harshveer/autonlp-data-formality_scoring_2']
8.655894631203154
0
0
0
0
0
0
0
autonlp
false
true
true
1,040
# Model Trained Using AutoNLP - Problem type: Single Column Regression - Model ID: 32597818 - CO2 Emissions (in grams): 8.655894631203154 ## Validation Metrics - Loss: 0.5410276651382446 - MSE: 0.5410276651382446 - MAE: 0.5694561004638672 - R2: 0.6830431129198475 - RMSE: 0.735545814037323 - Explained Variance: 0.6834385395050049 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Harshveer/autonlp-formality_scoring_2-32597818 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Harshveer/autonlp-formality_scoring_2-32597818", use_auth_token=True) inputs = tokenizer("I love AutoNLP", return_tensors="pt") outputs = model(**inputs) ```
Harveenchadha/model-entailment
Harveenchadha
null
7
1
keras
0
null
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['nlp']
false
true
true
832
## Multimodal entailment Author: Sayak Paul Date created: 2021/08/08 Last modified: 2021/08/15 Description: Training a multimodal model for predicting entailment. ### What is multimodal entailment? On social media platforms, to audit and moderate content we may want to find answers to the following questions in near real-time: Does a given piece of information contradict the other? Does a given piece of information imply the other? In NLP, this task is called analyzing textual entailment. However, that's only when the information comes from text content. In practice, it's often the case the information available comes not just from text content, but from a multimodal combination of text, images, audio, video, etc. Multimodal entailment is simply the extension of textual entailment to a variety of new input modalities.
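A hedged sketch for loading this Keras checkpoint from the Hub with `huggingface_hub`; the exact text/image preprocessing expected by the model follows the upstream Keras multimodal-entailment example and is not specified in this card, so only loading and inspection are shown:

```python
from huggingface_hub import from_pretrained_keras

# Download and rebuild the saved Keras model, then inspect its expected inputs/outputs.
model = from_pretrained_keras("Harveenchadha/model-entailment")
model.summary()
```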
Harveenchadha/vakyansh-wav2vec2-hindi-him-4200
Harveenchadha
wav2vec2
12
1,267
transformers
0
automatic-speech-recognition
true
false
false
mit
['hi']
null
null
0
0
0
0
2
2
0
['audio', 'automatic-speech-recognition', 'speech']
true
true
true
4,119
## Spaces Demo Check the spaces demo [here](https://huggingface.co/spaces/Harveenchadha/wav2vec2-vakyansh-hindi/tree/main) ## Pretrained Model Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz. **Note: The result from this model is without a language model so you may witness a higher WER in some cases.** ## Dataset This model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now. ## Training Script Models were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the [training repository](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation). In case you want to explore training logs on wandb they are [here](https://wandb.ai/harveenchadha/hindi_finetuning_multilingual?workspace=user-harveenchadha). ## [Colab Demo](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_hindi_him_4200_demo.ipynb) ## Usage The model can be used directly (without a language model) as follows: ```python import soundfile as sf import torch from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import argparse def parse_transcription(wav_file): # load pretrained model processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200") model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200") # load audio audio_input, sample_rate = sf.read(wav_file) # pad input values and return pt tensor input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values # INFERENCE # retrieve logits & take argmax logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) # transcribe transcription = processor.decode(predicted_ids[0], skip_special_tokens=True) print(transcription) ``` ## Evaluation The model can be evaluated as follows on the hindi test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "hi", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200") model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-hindi-him-4200") model.to("cuda") resampler = torchaudio.transforms.Resample(48_000, 16_000) chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 33.17 % [**Colab Evaluation**](https://colab.research.google.com/github/harveenchadha/bol/blob/main/demos/hf/hindi/hf_vakyansh_hindi_him_4200_evaluation_common_voice.ipynb) ## Credits Thanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages.
Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10
Harveenchadha
wav2vec2
8
11
transformers
0
automatic-speech-recognition
true
false
false
mit
['pa']
null
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech']
true
true
true
387
Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz. **Note: The result from this model is without a language model so you may witness a higher WER in some cases.**
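This card omits the usage snippet that the sibling Vakyansh cards (Hindi, Tamil) include; the sketch below mirrors that code with only the checkpoint name changed (the audio path is a placeholder and must be sampled at 16 kHz):

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10")
model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10")

audio_input, sample_rate = sf.read("speech_16khz.wav")  # placeholder path
input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values

with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.decode(predicted_ids[0], skip_special_tokens=True))
```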
Harveenchadha/vakyansh-wav2vec2-tamil-tam-250
Harveenchadha
wav2vec2
8
5
transformers
0
automatic-speech-recognition
true
false
false
mit
['ta']
null
null
0
0
0
0
0
0
0
['audio', 'automatic-speech-recognition', 'speech']
true
true
true
3,918
## Pretrained Model Fine-tuned on Multilingual Pretrained Model [CLSRIL-23](https://arxiv.org/abs/2107.07402). The original fairseq checkpoint is present [here](https://github.com/Open-Speech-EkStep/vakyansh-models). When using this model, make sure that your speech input is sampled at 16kHz. **Note: The result from this model is without a language model so you may witness a higher WER in some cases.** ## Dataset This model was trained on 4200 hours of Hindi Labelled Data. The labelled data is not present in public domain as of now. ## Training Script Models were trained using experimental platform setup by Vakyansh team at Ekstep. Here is the [training repository](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation). In case you want to explore training logs on wandb they are [here](https://wandb.ai/harveenchadha/tamil-finetuning-multilingual). ## [Colab Demo](https://github.com/harveenchadha/bol/blob/main/demos/hf/tamil/hf_tamil_tnm_4200_demo.ipynb) ## Usage The model can be used directly (without a language model) as follows: ```python import soundfile as sf import torch from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import argparse def parse_transcription(wav_file): # load pretrained model processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250") model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250") # load audio audio_input, sample_rate = sf.read(wav_file) # pad input values and return pt tensor input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values # INFERENCE # retrieve logits & take argmax logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) # transcribe transcription = processor.decode(predicted_ids[0], skip_special_tokens=True) print(transcription) ``` ## Evaluation The model can be evaluated as follows on the hindi test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ta", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250") model = Wav2Vec2ForCTC.from_pretrained("Harveenchadha/vakyansh-wav2vec2-tamil-tam-250") model.to("cuda") resampler = torchaudio.transforms.Resample(48_000, 16_000) chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]' # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids, skip_special_tokens=True) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 53.64 % [**Colab Evaluation**](https://github.com/harveenchadha/bol/blob/main/demos/hf/tamil/hf_vakyansh_tamil_tnm_4200_evaluation_common_voice.ipynb) ## Credits Thanks to Ekstep Foundation for making this possible. The vakyansh team will be open sourcing speech models in all the Indic Languages.
Harveenchadha/wav2vec2-pretrained-clsril-23-10k
Harveenchadha
wav2vec2
4
12
transformers
3
feature-extraction
true
false
false
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,547
## Overview We present a CLSRIL-23 (Cross Lingual Speech Representations on Indic Languages), a self supervised learning based audio pre-trained model which learns cross lingual speech representations from raw audio across **23 Indic languages**. It is built on top of wav2vec 2.0 which is solved by training a contrastive task over masked latent speech representations and jointly learns the quantization of latents shared across all languages. [Arxiv Link](https://arxiv.org/pdf/2107.07402.pdf) [Original Repo](https://github.com/Open-Speech-EkStep/vakyansh-models) contains models in fairseq format. ## Languages in the pretraining dataset | Language | Data (In Hrs) | |-----------|---------------| | Assamese | 254.9 | | Bengali | 331.3 | | Bodo | 26.9 | | Dogri | 17.1 | | English | 819.7 | | Gujarati | 336.7 | | Hindi | 4563.7 | | Kannada | 451.8 | | Kashmiri | 67.8 | | Konkani | 36.8 | | Maithili | 113.8 | | Malayalam | 297.7 | | Manipuri | 171.9 | | Marathi | 458.2 | | Nepali | 31.6 | | Odia | 131.4 | | Punjabi | 486.05 | | Sanskrit | 58.8 | | Santali | 6.56 | | Sindhi | 16 | | Tamil | 542.6 | | Telugu | 302.8 | | Urdu | 259.68 | ## Repo for training: [Experimentation](https://github.com/Open-Speech-EkStep/vakyansh-wav2vec2-experimentation) platform built on top of fairseq.
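A hedged sketch of extracting frame-level speech representations from this pretrained (CTC-head-free) checkpoint with `transformers`; whether the repo ships a preprocessor config is an assumption, so the sketch falls back to a default `Wav2Vec2FeatureExtractor` if needed:

```python
import soundfile as sf
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

model_id = "Harveenchadha/wav2vec2-pretrained-clsril-23-10k"
model = Wav2Vec2Model.from_pretrained(model_id)
try:
    feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
except Exception:
    feature_extractor = Wav2Vec2FeatureExtractor(sampling_rate=16000)  # assumed default settings

audio, sr = sf.read("speech_16khz.wav")  # placeholder path; 16 kHz mono expected
inputs = feature_extractor(audio, sampling_rate=sr, return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, frames, hidden_size)
print(hidden_states.shape)
```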
Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two
Hate-speech-CNERG
bert
8
1,888
transformers
5
text-classification
true
false
false
apache-2.0
['en']
['hatexplain']
null
2
1
1
0
0
0
0
[]
false
true
true
4,787
## Table of Contents - [Model Details](#model-details) - [How to Get Started With the Model](#how-to-get-started-with-the-model) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Technical Specifications](#technical-specifications) - [Citation Information](#citation-information) ## Model Details **Model Description:** The model is used for classifying a text as Abusive (Hatespeech and Offensive) or Normal. The model is trained using data from Gab and Twitter, and Human Rationales were included as part of the training data to boost the performance. The model also has a rationale predictor head that can predict the rationales given an abusive sentence. - **Developed by:** Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee - **Model Type:** Text Classification - **Language(s):** English - **License:** Apache-2.0 - **Parent Model:** See the [BERT base uncased model](https://huggingface.co/bert-base-uncased) for more information about the BERT base model. - **Resources for more information:** - [Research Paper](https://arxiv.org/abs/2012.10289) Accepted at AAAI 2021. - [GitHub Repo with datasets and models](https://github.com/punyajoy/HateXplain) ## How to Get Started with the Model **Details of usage** Please use the **Model_Rational_Label** class inside [models.py](models.py) to load the models. The default prediction in this hosted inference API may be wrong due to the use of different class initialisations. ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification ### from models.py from models import * tokenizer = AutoTokenizer.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two") model = Model_Rational_Label.from_pretrained("Hate-speech-CNERG/bert-base-uncased-hatexplain-rationale-two") inputs = tokenizer("He is a great guy", return_tensors="pt") prediction_logits, _ = model(input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask']) ``` ## Uses #### Direct Use This model can be used for Text Classification. #### Downstream Use [More information needed] #### Misuse and Out-of-scope Use The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model. ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups. For ![example:](https://github.com/hate-alert/HateXplain/blob/master/Figures/dataset_example.png) The model authors also note in their HateXplain paper that they > *have not considered any external context such as profile bio, user gender, history of posts etc., which might be helpful in the classification task. 
Also, in this work we have focused on the English language. It does not consider multilingual hate speech into account.* #### Training Procedure ##### Preprocessing The authors detail their preprocessing procedure in the [Github repository](https://github.com/hate-alert/HateXplain/tree/master/Preprocess) ## Evaluation The mode authors detail the Hidden layer size and attention for the HateXplain fien tuned models in the [associated paper](https://arxiv.org/pdf/2012.10289.pdf) #### Results The model authors both in their paper and in the git repository provide the illustrative output of the BERT - HateXplain in comparison to BERT and and other HateXplain fine tuned ![models]( https://github.com/hate-alert/HateXplain/blob/master/Figures/bias-subgroup.pdf) ## Citation Information ```bibtex @article{mathew2020hatexplain, title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection}, author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh}, journal={arXiv preprint arXiv:2012.10289}, year={2020} } ```
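To complement the usage snippet above, here is a minimal, hedged sketch of turning the returned `prediction_logits` into a class prediction. It assumes the logits come back as a `(batch_size, num_classes)` tensor; the index-to-label mapping is deliberately not hard-coded, since it should be verified against the repository's `models.py` and label definitions rather than assumed here.

```python
import torch

# Continuing from the usage snippet above: prediction_logits is assumed to be
# a (batch_size, num_classes) tensor of raw class scores.
probs = torch.softmax(prediction_logits, dim=-1)
pred_class = int(torch.argmax(probs, dim=-1)[0])
# The meaning of each index (Normal vs. Abusive) should be checked against the
# HateXplain repository's models.py / label definitions, not assumed from this sketch.
print(f"predicted class index: {pred_class}, probability: {probs[0, pred_class]:.3f}")
```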
Hate-speech-CNERG/bert-base-uncased-hatexplain
Hate-speech-CNERG
bert
8
1,486
transformers
9
text-classification
true
false
true
apache-2.0
['en']
['hatexplain']
null
2
2
0
0
0
0
0
[]
false
true
true
946
The model is used for classifying a text as **Hatespeech**, **Offensive**, or **Normal**. The model is trained using data from Gab and Twitter, and *Human Rationales* were included as part of the training data to boost the performance.

The dataset and models are available here: https://github.com/punyajoy/HateXplain

**For more details about our paper**

Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. "[HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection](https://arxiv.org/abs/2012.10289)". Accepted at AAAI 2021.

***Please cite our paper in any published work that uses any of these resources.***

~~~
@article{mathew2020hatexplain,
  title={HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection},
  author={Mathew, Binny and Saha, Punyajoy and Yimam, Seid Muhie and Biemann, Chris and Goyal, Pawan and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2012.10289},
  year={2020}
}
~~~
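As a minimal, hedged usage sketch (not part of the original card), the three-class checkpoint can be scored with the standard `transformers` sequence-classification API. The label names are read from the checkpoint's own config via `id2label` rather than hard-coded, since the exact id-to-label order is defined by the uploaded config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Hate-speech-CNERG/bert-base-uncased-hatexplain"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("He is a great guy", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = int(logits.argmax(dim=-1))
# id2label comes from the checkpoint's config, so no label order is assumed here.
print(model.config.id2label[pred])
```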
Hate-speech-CNERG/dehatebert-mono-arabic
Hate-speech-CNERG
bert
8
13
transformers
0
text-classification
true
false
true
apache-2.0
['ar']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,014
This model is used for detecting **hatespeech** in the **Arabic language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Arabic language data. It is fine-tuned on the multilingual BERT model. The model was trained with different learning rates, and the best validation score achieved is 0.877609, for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).

### For more details about our paper

Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.

***Please cite our paper in any published work that uses any of these resources.***

~~~
@article{aluru2020deep,
  title={Deep Learning Models for Multilingual Hate Speech Detection},
  author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2004.06465},
  year={2020}
}
~~~
Hate-speech-CNERG/dehatebert-mono-english
Hate-speech-CNERG
bert
8
1,652
transformers
3
text-classification
true
false
true
apache-2.0
['en']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,006
This model is used for detecting **hatespeech** in the **English language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only English language data. It is fine-tuned on the multilingual BERT model. The model was trained with different learning rates, and the best validation score achieved is 0.726030, for a learning rate of 2e-5. Training code can be found here: https://github.com/punyajoy/DE-LIMIT

### For more details about our paper

Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.

***Please cite our paper in any published work that uses any of these resources.***

~~~
@article{aluru2020deep,
  title={Deep Learning Models for Multilingual Hate Speech Detection},
  author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2004.06465},
  year={2020}
}
~~~
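A minimal usage sketch for this checkpoint (the same pattern should apply to the other dehatebert-mono-* models, with only the model id changed); the returned label names come from the checkpoint's config, and the example sentence is arbitrary:

```python
from transformers import pipeline

# Sketch only: score one sentence with the English monolingual checkpoint.
classifier = pipeline(
    "text-classification",
    model="Hate-speech-CNERG/dehatebert-mono-english",
)
print(classifier("I really enjoyed talking to them today."))
# -> [{'label': ..., 'score': ...}], where the label set is defined by the model config.
```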
Hate-speech-CNERG/dehatebert-mono-french
Hate-speech-CNERG
bert
8
87
transformers
2
text-classification
true
false
true
apache-2.0
['fr']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,017
This model is used for detecting **hatespeech** in the **French language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only French language data. It is fine-tuned on the multilingual BERT model. The model was trained with different learning rates, and the best validation score achieved is 0.692094, for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).

### For more details about our paper

Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.

***Please cite our paper in any published work that uses any of these resources.***

~~~
@article{aluru2020deep,
  title={Deep Learning Models for Multilingual Hate Speech Detection},
  author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2004.06465},
  year={2020}
}
~~~
Hate-speech-CNERG/dehatebert-mono-german
Hate-speech-CNERG
bert
8
134
transformers
0
text-classification
true
false
true
apache-2.0
['de']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,017
This model is used for detecting **hatespeech** in the **German language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only German language data. It is fine-tuned on the multilingual BERT model. The model was trained with different learning rates, and the best validation score achieved is 0.649794, for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).

### For more details about our paper

Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.

***Please cite our paper in any published work that uses any of these resources.***

~~~
@article{aluru2020deep,
  title={Deep Learning Models for Multilingual Hate Speech Detection},
  author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2004.06465},
  year={2020}
}
~~~
Hate-speech-CNERG/dehatebert-mono-indonesian
Hate-speech-CNERG
bert
8
921
transformers
0
text-classification
true
false
true
null
null
null
null
0
0
0
0
0
0
0
[]
false
false
true
1,019
This model is used for detecting **hatespeech** in the **Indonesian language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Indonesian language data. It is fine-tuned on the multilingual BERT model. The model was trained with different learning rates, and the best validation score achieved is 0.844494, for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).

### For more details about our paper

Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.

***Please cite our paper in any published work that uses any of these resources.***

~~~
@article{aluru2020deep,
  title={Deep Learning Models for Multilingual Hate Speech Detection},
  author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2004.06465},
  year={2020}
}
~~~
Hate-speech-CNERG/dehatebert-mono-italian
Hate-speech-CNERG
bert
8
88
transformers
0
text-classification
true
false
true
apache-2.0
['it']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,017
This model is used for detecting **hatespeech** in the **Italian language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Italian language data. It is fine-tuned on the multilingual BERT model. The model was trained with different learning rates, and the best validation score achieved is 0.837288, for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).

### For more details about our paper

Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.

***Please cite our paper in any published work that uses any of these resources.***

~~~
@article{aluru2020deep,
  title={Deep Learning Models for Multilingual Hate Speech Detection},
  author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2004.06465},
  year={2020}
}
~~~
Hate-speech-CNERG/dehatebert-mono-polish
Hate-speech-CNERG
bert
8
6
transformers
0
text-classification
true
false
true
apache-2.0
['pl']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,016
This model is used for detecting **hatespeech** in the **Polish language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Polish language data. It is fine-tuned on the multilingual BERT model. The model was trained with different learning rates, and the best validation score achieved is 0.723254, for a learning rate of 2e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).

### For more details about our paper

Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.

***Please cite our paper in any published work that uses any of these resources.***

~~~
@article{aluru2020deep,
  title={Deep Learning Models for Multilingual Hate Speech Detection},
  author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2004.06465},
  year={2020}
}
~~~
Hate-speech-CNERG/dehatebert-mono-portugese
Hate-speech-CNERG
bert
8
9
transformers
2
text-classification
true
false
true
apache-2.0
['pt']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,020
This model is used for detecting **hatespeech** in the **Portuguese language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Portuguese language data. It is fine-tuned on the multilingual BERT model. The model was trained with different learning rates, and the best validation score achieved is 0.716119, for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).

### For more details about our paper

Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.

***Please cite our paper in any published work that uses any of these resources.***

~~~
@article{aluru2020deep,
  title={Deep Learning Models for Multilingual Hate Speech Detection},
  author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2004.06465},
  year={2020}
}
~~~
Hate-speech-CNERG/dehatebert-mono-spanish
Hate-speech-CNERG
bert
8
231
transformers
2
text-classification
true
false
true
apache-2.0
['es']
null
null
0
0
0
0
0
0
0
[]
false
true
true
1,017
This model is used for detecting **hatespeech** in the **Spanish language**. The "mono" in the name refers to the monolingual setting, where the model is trained using only Spanish language data. It is fine-tuned on the multilingual BERT model. The model was trained with different learning rates, and the best validation score achieved is 0.740287, for a learning rate of 3e-5. Training code can be found at this [url](https://github.com/punyajoy/DE-LIMIT).

### For more details about our paper

Sai Saketh Aluru, Binny Mathew, Punyajoy Saha and Animesh Mukherjee. "[Deep Learning Models for Multilingual Hate Speech Detection](https://arxiv.org/abs/2004.06465)". Accepted at ECML-PKDD 2020.

***Please cite our paper in any published work that uses any of these resources.***

~~~
@article{aluru2020deep,
  title={Deep Learning Models for Multilingual Hate Speech Detection},
  author={Aluru, Sai Saket and Mathew, Binny and Saha, Punyajoy and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2004.06465},
  year={2020}
}
~~~
Hate-speech-CNERG/deoffxlmr-mono-kannada
Hate-speech-CNERG
xlm-roberta
7
1
transformers
0
text-classification
true
false
false
apache-2.0
['kn']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,473
This model is used to detect **Offensive Content** in **Kannada Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Kannada(pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss. This model is the best of multiple trained for **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm based ensembled test predictions got the second-highest weighted F1 score at the leaderboard (Weighted F1 score on hold out test set: This model - 0.73, Ensemble - 0.74) ### For more details about our paper Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)". ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @inproceedings{saha-etal-2021-hate, title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection", author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh", booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", month = apr, year = "2021", address = "Kyiv", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38", pages = "270--276", abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.", } ~~~
Hate-speech-CNERG/deoffxlmr-mono-malyalam
Hate-speech-CNERG
xlm-roberta
7
1
transformers
0
text-classification
true
false
false
apache-2.0
['ml']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,470
This model is used to detect **Offensive Content** in **Malayalam Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Malayalam(pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss. This model is the best of multiple trained for **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm based ensembled test predictions got the highest weighted F1 score at the leaderboard (Weighted F1 score on hold out test set: This model - 0.97, Ensemble - 0.97) ### For more details about our paper Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)". ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @inproceedings{saha-etal-2021-hate, title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection", author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh", booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", month = apr, year = "2021", address = "Kyiv", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38", pages = "270--276", abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.", } ~~~
Hate-speech-CNERG/deoffxlmr-mono-tamil
Hate-speech-CNERG
xlm-roberta
7
1
transformers
0
text-classification
true
false
false
apache-2.0
['ta']
null
null
0
0
0
0
0
0
0
[]
false
true
true
2,462
This model is used to detect **Offensive Content** in **Tamil Code-Mixed language**. The mono in the name refers to the monolingual setting, where the model is trained using only Tamil(pure and code-mixed) data. The weights are initialized from pretrained XLM-Roberta-Base and pretrained using Masked Language Modelling on the target dataset before fine-tuning using Cross-Entropy Loss. This model is the best of multiple trained for **EACL 2021 Shared Task on Offensive Language Identification in Dravidian Languages**. Genetic-Algorithm based ensembled test predictions got the highest weighted F1 score at the leaderboard (Weighted F1 score on hold out test set: This model - 0.76, Ensemble - 0.78) ### For more details about our paper Debjoy Saha, Naman Paharia, Debajit Chakraborty, Punyajoy Saha, Animesh Mukherjee. "[Hate-Alert@DravidianLangTech-EACL2021: Ensembling strategies for Transformer-based Offensive language Detection](https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38/)". ***Please cite our paper in any published work that uses any of these resources.*** ~~~ @inproceedings{saha-etal-2021-hate, title = "Hate-Alert@{D}ravidian{L}ang{T}ech-{EACL}2021: Ensembling strategies for Transformer-based Offensive language Detection", author = "Saha, Debjoy and Paharia, Naman and Chakraborty, Debajit and Saha, Punyajoy and Mukherjee, Animesh", booktitle = "Proceedings of the First Workshop on Speech and Language Technologies for Dravidian Languages", month = apr, year = "2021", address = "Kyiv", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.dravidianlangtech-1.38", pages = "270--276", abstract = "Social media often acts as breeding grounds for different forms of offensive content. For low resource languages like Tamil, the situation is more complex due to the poor performance of multilingual or language-specific models and lack of proper benchmark datasets. Based on this shared task {``}Offensive Language Identification in Dravidian Languages{''} at EACL 2021; we present an exhaustive exploration of different transformer models, We also provide a genetic algorithm technique for ensembling different models. Our ensembled models trained separately for each language secured the first position in Tamil, the second position in Kannada, and the first position in Malayalam sub-tasks. The models and codes are provided.", } ~~~
Heldhy/wav2vec2-base-timit-demo-colab
Heldhy
wav2vec2
12
6
transformers
0
automatic-speech-recognition
true
false
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['generated_from_trainer']
true
true
true
1,641
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-timit-demo-colab

This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4568
- Wer: 0.3422

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3896        | 4.0   | 500  | 1.1573          | 0.8886 |
| 0.5667        | 8.0   | 1000 | 0.4841          | 0.4470 |
| 0.2126        | 12.0  | 1500 | 0.4201          | 0.3852 |
| 0.1235        | 16.0  | 2000 | 0.4381          | 0.3623 |
| 0.0909        | 20.0  | 2500 | 0.4784          | 0.3748 |
| 0.0611        | 24.0  | 3000 | 0.4390          | 0.3577 |
| 0.0454        | 28.0  | 3500 | 0.4568          | 0.3422 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
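A minimal inference sketch for this checkpoint, assuming a 16 kHz mono recording (the placeholder path `sample.wav` is not part of the original card) and that the repository ships a processor/tokenizer alongside the weights:

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "Heldhy/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# "sample.wav" is a placeholder; wav2vec2-base expects 16 kHz mono audio.
speech, sampling_rate = sf.read("sample.wav")
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```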
Hellisotherpeople/debate2vec
Hellisotherpeople
null
6
0
fasttext
5
text-classification
false
false
false
null
null
null
null
0
0
0
0
0
0
0
['text-classification']
false
true
true
2,028
# debate2vec

Word-vectors created from a large corpus of competitive debate evidence, plus data extraction / processing scripts.

# Usage

```
import fasttext.util
ft = fasttext.load_model('debate2vec.bin')
ft.get_word_vector('dialectics')
```

# Download Link

GitHub won't let me store large files in their repos.

* [FastText Vectors Here](https://drive.google.com/file/d/1m-CwPcaIUun4qvg69Hx2gom9dMScuQwS/view?usp=sharing) (~260mb)

# About

Created from all publicly available Cross Examination Competitive debate evidence posted by the community on [Open Evidence](https://openev.debatecoaches.org/) (from 2013-2020). Search through the original evidence by going to [debate.cards](http://debate.cards/).

Stats about this corpus:
* 222485 unique documents larger than 200 words (DebateSum plus some additional debate docs that weren't well-formed enough for inclusion into DebateSum)
* 107555 unique words (showing up more than 10 times in the corpus)
* 101 million total words

Stats about debate2vec vectors:
* 300 dimensions, minimum number of appearances of a word was 10, trained for 100 epochs with lr set to 0.10 using FastText
* lowercased (will release cased)
* No subword information

The corpus includes the following topics:
* 2013-2014 Cuba/Mexico/Venezuela Economic Engagement
* 2014-2015 Oceans
* 2015-2016 Domestic Surveillance
* 2016-2017 China
* 2017-2018 Education
* 2018-2019 Immigration
* 2019-2020 Reducing Arms Sales

Other topics that this word vector model will handle extremely well:
* Philosophy (Especially Left-Wing / Post-modernist)
* Law
* Government
* Politics

Initial release is of fasttext vectors without subword information. Future releases will include fine-tuned GPT-2 and other high end models as my GPU compute allows.

# Screenshots

![](https://github.com/Hellisotherpeople/debate2vec/blob/master/debate2vec.jpg)
![](https://github.com/Hellisotherpeople/debate2vec/blob/master/debate2vec2.jpg)
![](https://github.com/Hellisotherpeople/debate2vec/blob/master/debate2vec3.jpg)
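Beyond the single-word lookup shown above, a couple of other standard fastText queries may be useful. This is a sketch that assumes `debate2vec.bin` has already been downloaded from the Google Drive link into the working directory; the query words are arbitrary examples, not ones tested by the author.

```python
import fasttext

ft = fasttext.load_model("debate2vec.bin")

# Nearest neighbours by cosine similarity; returns (score, word) pairs.
print(ft.get_nearest_neighbors("hegemony", k=5))

# Word analogy query: "sovereignty" is to "state" as "autonomy" is to ...?
print(ft.get_analogies("sovereignty", "state", "autonomy"))
```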
Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU
Helsinki-NLP
marian
10
7
transformers
2
translation
true
true
false
apache-2.0
null
null
null
1
1
0
0
0
0
0
['translation']
false
true
true
1,246
### opus-mt-NORTH_EU-NORTH_EU * source languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv * target languages: de,nl,fy,af,da,fo,is,no,nb,nn,sv * OPUS readme: [de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/de+nl+fy+af+da+fo+is+no+nb+nn+sv-de+nl+fy+af+da+fo+is+no+nb+nn+sv/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.de.sv | 48.1 | 0.663 |
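A minimal translation sketch for this multilingual checkpoint, assuming the standard MarianMT classes in `transformers`. Because the target side covers several languages, each source sentence must start with a `>>id<<` token; `>>sv<<` (Swedish) is used here as an example, and the tokenizer's `supported_language_codes` attribute should list the prefixes the checkpoint actually accepts.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-NORTH_EU-NORTH_EU"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# The >>sv<< prefix selects Swedish as the target language for this sentence.
src_texts = [">>sv<< Das ist ein kleiner Test."]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```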
Helsinki-NLP/opus-mt-ROMANCE-en
Helsinki-NLP
marian
11
161,105
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
1
1
0
['translation']
false
true
true
2,107
### opus-mt-ROMANCE-en * source languages: fr,fr_BE,fr_CA,fr_FR,wa,frp,oc,ca,rm,lld,fur,lij,lmo,es,es_AR,es_CL,es_CO,es_CR,es_DO,es_EC,es_ES,es_GT,es_HN,es_MX,es_NI,es_PA,es_PE,es_PR,es_SV,es_UY,es_VE,pt,pt_br,pt_BR,pt_PT,gl,lad,an,mwl,it,it_IT,co,nap,scn,vec,sc,ro,la * target languages: en * OPUS readme: [fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/README.md) * dataset: opus * model: transformer * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-04-01.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.zip) * test set translations: [opus-2020-04-01.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.test.txt) * test set scores: [opus-2020-04-01.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr+fr_BE+fr_CA+fr_FR+wa+frp+oc+ca+rm+lld+fur+lij+lmo+es+es_AR+es_CL+es_CO+es_CR+es_DO+es_EC+es_ES+es_GT+es_HN+es_MX+es_NI+es_PA+es_PE+es_PR+es_SV+es_UY+es_VE+pt+pt_br+pt_BR+pt_PT+gl+lad+an+mwl+it+it_IT+co+nap+scn+vec+sc+ro+la-en/opus-2020-04-01.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.fr.en | 62.2 | 0.750 |
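For contrast with the multilingual-target model above, a minimal sketch of using this checkpoint: English is the only target language, so no `>>id<<` prefix is needed, and the example sentences are arbitrary.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ROMANCE-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# French and Spanish source sentences, both translated into English.
src_texts = ["C'est un petit test.", "Esto es una prueba."]
batch = tokenizer(src_texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```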
Helsinki-NLP/opus-mt-SCANDINAVIA-SCANDINAVIA
Helsinki-NLP
marian
10
7
transformers
1
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
1,108
### opus-mt-SCANDINAVIA-SCANDINAVIA * source languages: da,fo,is,no,nb,nn,sv * target languages: da,fo,is,no,nb,nn,sv * OPUS readme: [da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/da+fo+is+no+nb+nn+sv-da+fo+is+no+nb+nn+sv/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.da.sv | 69.2 | 0.811 |
Helsinki-NLP/opus-mt-aav-en
Helsinki-NLP
marian
11
19
transformers
0
translation
true
true
false
apache-2.0
['vi', 'km', 'aav', 'en']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,393
### aav-eng * source group: Austro-Asiatic languages * target group: English * OPUS readme: [aav-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aav-eng/README.md) * model: transformer * source language(s): hoc hoc_Latn kha khm khm_Latn mnw vie vie_Hani * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.hoc-eng.hoc.eng | 0.3 | 0.095 | | Tatoeba-test.kha-eng.kha.eng | 1.0 | 0.115 | | Tatoeba-test.khm-eng.khm.eng | 8.9 | 0.271 | | Tatoeba-test.mnw-eng.mnw.eng | 0.8 | 0.118 | | Tatoeba-test.multi.eng | 24.8 | 0.391 | | Tatoeba-test.vie-eng.vie.eng | 38.7 | 0.567 | ### System Info: - hf_name: aav-eng - source_languages: aav - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/aav-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['vi', 'km', 'aav', 'en'] - src_constituents: {'mnw', 'vie', 'kha', 'khm', 'vie_Hani', 'khm_Latn', 'hoc_Latn', 'hoc'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/aav-eng/opus2m-2020-07-31.test.txt - src_alpha3: aav - tgt_alpha3: eng - short_pair: aav-en - chrF2_score: 0.391 - bleu: 24.8 - brevity_penalty: 0.968 - ref_len: 36693.0 - src_name: Austro-Asiatic languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: aav - tgt_alpha2: en - prefer_old: False - long_pair: aav-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-aed-es
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-aed-es * source languages: aed * target languages: es * OPUS readme: [aed-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/aed-es/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-15.zip](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.zip) * test set translations: [opus-2020-01-15.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.test.txt) * test set scores: [opus-2020-01-15.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/aed-es/opus-2020-01-15.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.aed.es | 89.1 | 0.915 |
Helsinki-NLP/opus-mt-af-de
Helsinki-NLP
marian
10
15
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-af-de * source languages: af * target languages: de * OPUS readme: [af-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-de/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-19.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.zip) * test set translations: [opus-2020-01-19.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.test.txt) * test set scores: [opus-2020-01-19.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-de/opus-2020-01-19.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.af.de | 48.6 | 0.681 |
Helsinki-NLP/opus-mt-af-en
Helsinki-NLP
marian
10
1,360
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-af-en * source languages: af * target languages: en * OPUS readme: [af-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-en/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.zip) * test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.test.txt) * test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-en/opus-2019-12-18.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba.af.en | 60.8 | 0.736 |
Helsinki-NLP/opus-mt-af-eo
Helsinki-NLP
marian
11
33
transformers
0
translation
true
true
false
apache-2.0
['af', 'eo']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,003
### afr-epo * source group: Afrikaans * target group: Esperanto * OPUS readme: [afr-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-epo/README.md) * model: transformer-align * source language(s): afr * target language(s): epo * model: transformer-align * pre-processing: normalization + SentencePiece (spm4k,spm4k) * download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.zip) * test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.test.txt) * test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.afr.epo | 18.3 | 0.411 | ### System Info: - hf_name: afr-epo - source_languages: afr - target_languages: epo - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-epo/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['af', 'eo'] - src_constituents: {'afr'} - tgt_constituents: {'epo'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm4k,spm4k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-epo/opus-2020-06-16.test.txt - src_alpha3: afr - tgt_alpha3: epo - short_pair: af-eo - chrF2_score: 0.41100000000000003 - bleu: 18.3 - brevity_penalty: 0.995 - ref_len: 7517.0 - src_name: Afrikaans - tgt_name: Esperanto - train_date: 2020-06-16 - src_alpha2: af - tgt_alpha2: eo - prefer_old: False - long_pair: afr-epo - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-af-es
Helsinki-NLP
marian
11
183
transformers
0
translation
true
true
false
apache-2.0
['af', 'es']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
1,986
### afr-spa * source group: Afrikaans * target group: Spanish * OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md) * model: transformer-align * source language(s): afr * target language(s): spa * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.afr.spa | 49.9 | 0.680 | ### System Info: - hf_name: afr-spa - source_languages: afr - target_languages: spa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['af', 'es'] - src_constituents: {'afr'} - tgt_constituents: {'spa'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-spa/opus-2020-06-17.test.txt - src_alpha3: afr - tgt_alpha3: spa - short_pair: af-es - chrF2_score: 0.68 - bleu: 49.9 - brevity_penalty: 1.0 - ref_len: 2783.0 - src_name: Afrikaans - tgt_name: Spanish - train_date: 2020-06-17 - src_alpha2: af - tgt_alpha2: es - prefer_old: False - long_pair: afr-spa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-af-fi
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-af-fi * source languages: af * target languages: fi * OPUS readme: [af-fi](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fi/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fi/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.af.fi | 32.3 | 0.576 |
Helsinki-NLP/opus-mt-af-fr
Helsinki-NLP
marian
10
55
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-af-fr * source languages: af * target languages: fr * OPUS readme: [af-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-fr/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.af.fr | 35.3 | 0.543 |
Helsinki-NLP/opus-mt-af-nl
Helsinki-NLP
marian
11
12
transformers
0
translation
true
true
false
apache-2.0
['af', 'nl']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
1,985
### afr-nld * source group: Afrikaans * target group: Dutch * OPUS readme: [afr-nld](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md) * model: transformer-align * source language(s): afr * target language(s): nld * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.afr.nld | 55.2 | 0.715 | ### System Info: - hf_name: afr-nld - source_languages: afr - target_languages: nld - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-nld/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['af', 'nl'] - src_constituents: {'afr'} - tgt_constituents: {'nld'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-nld/opus-2020-06-17.test.txt - src_alpha3: afr - tgt_alpha3: nld - short_pair: af-nl - chrF2_score: 0.715 - bleu: 55.2 - brevity_penalty: 0.995 - ref_len: 6710.0 - src_name: Afrikaans - tgt_name: Dutch - train_date: 2020-06-17 - src_alpha2: af - tgt_alpha2: nl - prefer_old: False - long_pair: afr-nld - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-af-ru
Helsinki-NLP
marian
11
30
transformers
0
translation
true
true
false
apache-2.0
['af', 'ru']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
1,988
### afr-rus * source group: Afrikaans * target group: Russian * OPUS readme: [afr-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md) * model: transformer-align * source language(s): afr * target language(s): rus * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.afr.rus | 38.2 | 0.580 | ### System Info: - hf_name: afr-rus - source_languages: afr - target_languages: rus - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-rus/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['af', 'ru'] - src_constituents: {'afr'} - tgt_constituents: {'rus'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afr-rus/opus-2020-06-17.test.txt - src_alpha3: afr - tgt_alpha3: rus - short_pair: af-ru - chrF2_score: 0.58 - bleu: 38.2 - brevity_penalty: 0.992 - ref_len: 1213.0 - src_name: Afrikaans - tgt_name: Russian - train_date: 2020-06-17 - src_alpha2: af - tgt_alpha2: ru - prefer_old: False - long_pair: afr-rus - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-af-sv
Helsinki-NLP
marian
10
30
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-af-sv * source languages: af * target languages: sv * OPUS readme: [af-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/af-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/af-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.af.sv | 40.4 | 0.599 |
Helsinki-NLP/opus-mt-afa-afa
Helsinki-NLP
marian
11
7
transformers
0
translation
true
true
false
apache-2.0
['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
3,241
### afa-afa * source group: Afro-Asiatic languages * target group: Afro-Asiatic languages * OPUS readme: [afa-afa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-afa/README.md) * model: transformer * source language(s): apc ara arq arz heb kab mlt shy_Latn thv * target language(s): apc ara arq arz heb kab mlt shy_Latn thv * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-07-26.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.zip) * test set translations: [opus-2020-07-26.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.test.txt) * test set scores: [opus-2020-07-26.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara-ara.ara.ara | 4.3 | 0.148 | | Tatoeba-test.ara-heb.ara.heb | 31.9 | 0.525 | | Tatoeba-test.ara-kab.ara.kab | 0.3 | 0.120 | | Tatoeba-test.ara-mlt.ara.mlt | 14.0 | 0.428 | | Tatoeba-test.ara-shy.ara.shy | 1.3 | 0.050 | | Tatoeba-test.heb-ara.heb.ara | 17.0 | 0.464 | | Tatoeba-test.heb-kab.heb.kab | 1.9 | 0.104 | | Tatoeba-test.kab-ara.kab.ara | 0.3 | 0.044 | | Tatoeba-test.kab-heb.kab.heb | 5.1 | 0.099 | | Tatoeba-test.kab-shy.kab.shy | 2.2 | 0.009 | | Tatoeba-test.kab-tmh.kab.tmh | 10.7 | 0.007 | | Tatoeba-test.mlt-ara.mlt.ara | 29.1 | 0.498 | | Tatoeba-test.multi.multi | 20.8 | 0.434 | | Tatoeba-test.shy-ara.shy.ara | 1.2 | 0.053 | | Tatoeba-test.shy-kab.shy.kab | 2.0 | 0.134 | | Tatoeba-test.tmh-kab.tmh.kab | 0.0 | 0.047 | ### System Info: - hf_name: afa-afa - source_languages: afa - target_languages: afa - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-afa/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa'] - src_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'} - tgt_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'} - src_multilingual: True - tgt_multilingual: True - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-afa/opus-2020-07-26.test.txt - src_alpha3: afa - tgt_alpha3: afa - short_pair: afa-afa - chrF2_score: 0.434 - bleu: 20.8 - brevity_penalty: 1.0 - ref_len: 15215.0 - src_name: Afro-Asiatic languages - tgt_name: Afro-Asiatic languages - train_date: 2020-07-26 - src_alpha2: afa - tgt_alpha2: afa - prefer_old: False - long_pair: afa-afa - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-afa-en
Helsinki-NLP
marian
11
13
transformers
0
translation
true
true
false
apache-2.0
['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa', 'en']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,771
### afa-eng * source group: Afro-Asiatic languages * target group: English * OPUS readme: [afa-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md) * model: transformer * source language(s): acm afb amh apc ara arq ary arz hau_Latn heb kab mlt rif_Latn shy_Latn som tir * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.amh-eng.amh.eng | 35.9 | 0.550 | | Tatoeba-test.ara-eng.ara.eng | 36.6 | 0.543 | | Tatoeba-test.hau-eng.hau.eng | 11.9 | 0.327 | | Tatoeba-test.heb-eng.heb.eng | 42.7 | 0.591 | | Tatoeba-test.kab-eng.kab.eng | 4.3 | 0.213 | | Tatoeba-test.mlt-eng.mlt.eng | 44.3 | 0.618 | | Tatoeba-test.multi.eng | 27.1 | 0.464 | | Tatoeba-test.rif-eng.rif.eng | 3.5 | 0.141 | | Tatoeba-test.shy-eng.shy.eng | 0.6 | 0.125 | | Tatoeba-test.som-eng.som.eng | 23.6 | 0.472 | | Tatoeba-test.tir-eng.tir.eng | 13.1 | 0.328 | ### System Info: - hf_name: afa-eng - source_languages: afa - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afa-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['so', 'ti', 'am', 'he', 'mt', 'ar', 'afa', 'en'] - src_constituents: {'som', 'rif_Latn', 'tir', 'kab', 'arq', 'afb', 'amh', 'arz', 'heb', 'shy_Latn', 'apc', 'mlt', 'thv', 'ara', 'hau_Latn', 'acm', 'ary'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/afa-eng/opus2m-2020-07-31.test.txt - src_alpha3: afa - tgt_alpha3: eng - short_pair: afa-en - chrF2_score: 0.46399999999999997 - bleu: 27.1 - brevity_penalty: 1.0 - ref_len: 69373.0 - src_name: Afro-Asiatic languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: afa - tgt_alpha2: en - prefer_old: False - long_pair: afa-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-alv-en
Helsinki-NLP
marian
11
13
transformers
0
translation
true
true
false
apache-2.0
['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv', 'en']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
3,208
### alv-eng * source group: Atlantic-Congo languages * target group: English * OPUS readme: [alv-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md) * model: transformer * source language(s): ewe fuc fuv ibo kin lin lug nya run sag sna swh toi_Latn tso umb wol xho yor zul * target language(s): eng * model: transformer * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip) * test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt) * test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ewe-eng.ewe.eng | 6.3 | 0.328 | | Tatoeba-test.ful-eng.ful.eng | 0.4 | 0.108 | | Tatoeba-test.ibo-eng.ibo.eng | 4.5 | 0.196 | | Tatoeba-test.kin-eng.kin.eng | 30.7 | 0.511 | | Tatoeba-test.lin-eng.lin.eng | 2.8 | 0.213 | | Tatoeba-test.lug-eng.lug.eng | 3.4 | 0.140 | | Tatoeba-test.multi.eng | 20.9 | 0.376 | | Tatoeba-test.nya-eng.nya.eng | 38.7 | 0.492 | | Tatoeba-test.run-eng.run.eng | 24.5 | 0.417 | | Tatoeba-test.sag-eng.sag.eng | 5.5 | 0.177 | | Tatoeba-test.sna-eng.sna.eng | 26.9 | 0.412 | | Tatoeba-test.swa-eng.swa.eng | 4.9 | 0.196 | | Tatoeba-test.toi-eng.toi.eng | 3.9 | 0.147 | | Tatoeba-test.tso-eng.tso.eng | 76.7 | 0.957 | | Tatoeba-test.umb-eng.umb.eng | 4.0 | 0.195 | | Tatoeba-test.wol-eng.wol.eng | 3.7 | 0.170 | | Tatoeba-test.xho-eng.xho.eng | 38.9 | 0.556 | | Tatoeba-test.yor-eng.yor.eng | 25.1 | 0.412 | | Tatoeba-test.zul-eng.zul.eng | 46.1 | 0.623 | ### System Info: - hf_name: alv-eng - source_languages: alv - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/alv-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['sn', 'rw', 'wo', 'ig', 'sg', 'ee', 'zu', 'lg', 'ts', 'ln', 'ny', 'yo', 'rn', 'xh', 'alv', 'en'] - src_constituents: {'sna', 'kin', 'wol', 'ibo', 'swh', 'sag', 'ewe', 'zul', 'fuc', 'lug', 'tso', 'lin', 'nya', 'yor', 'run', 'xho', 'fuv', 'toi_Latn', 'umb'} - tgt_constituents: {'eng'} - src_multilingual: True - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/alv-eng/opus2m-2020-07-31.test.txt - src_alpha3: alv - tgt_alpha3: eng - short_pair: alv-en - chrF2_score: 0.376 - bleu: 20.9 - brevity_penalty: 1.0 - ref_len: 15208.0 - src_name: Atlantic-Congo languages - tgt_name: English - train_date: 2020-07-31 - src_alpha2: alv - tgt_alpha2: en - prefer_old: False - long_pair: alv-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-am-sv
Helsinki-NLP
marian
10
53
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
768
### opus-mt-am-sv * source languages: am * target languages: sv * OPUS readme: [am-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/am-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.zip) * test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.test.txt) * test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/am-sv/opus-2020-01-08.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | JW300.am.sv | 21.0 | 0.377 |
Helsinki-NLP/opus-mt-ar-de
Helsinki-NLP
marian
11
181
transformers
0
translation
true
true
false
apache-2.0
['ar', 'de']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,070
### ara-deu * source group: Arabic * target group: German * OPUS readme: [ara-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md) * model: transformer-align * source language(s): afb apc ara ara_Latn arq arz * target language(s): deu * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip) * test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt) * test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.ara.deu | 44.7 | 0.629 | ### System Info: - hf_name: ara-deu - source_languages: ara - target_languages: deu - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-deu/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ar', 'de'] - src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'} - tgt_constituents: {'deu'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-deu/opus-2020-07-03.test.txt - src_alpha3: ara - tgt_alpha3: deu - short_pair: ar-de - chrF2_score: 0.629 - bleu: 44.7 - brevity_penalty: 0.986 - ref_len: 8371.0 - src_name: Arabic - tgt_name: German - train_date: 2020-07-03 - src_alpha2: ar - tgt_alpha2: de - prefer_old: False - long_pair: ara-deu - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ar-el
Helsinki-NLP
marian
11
10
transformers
0
translation
true
true
false
apache-2.0
['ar', 'el']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,077
### ara-ell

* source group: Arabic
* target group: Modern Greek (1453-)
* OPUS readme: [ara-ell](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ell/README.md)
* model: transformer-align
* source language(s): ara arz
* target language(s): ell
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.ell | 43.9 | 0.636 |

### System Info:
- hf_name: ara-ell
- source_languages: ara
- target_languages: ell
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ell/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'el']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'ell'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ell/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: ell
- short_pair: ar-el
- chrF2_score: 0.636
- bleu: 43.9
- brevity_penalty: 0.993
- ref_len: 2009.0
- src_name: Arabic
- tgt_name: Modern Greek (1453-)
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: el
- prefer_old: False
- long_pair: ara-ell
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ar-en
Helsinki-NLP
marian
11
75,079
transformers
8
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-ar-en

* source languages: ar
* target languages: en
* OPUS readme: [ar-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2019-12-18.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.zip)
* test set translations: [opus-2019-12-18.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.test.txt)
* test set scores: [opus-2019-12-18.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-en/opus-2019-12-18.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ar.en | 49.4 | 0.661 |
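Not part of the original card: a minimal usage sketch for this checkpoint, assuming the standard `transformers` Marian classes; the Arabic input sentence is purely illustrative.

```python
# Sketch only: translate a sentence with the ar-en checkpoint listed above.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-ar-en"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["مرحبا بالعالم"]  # illustrative Arabic input ("Hello, world")
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```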
Helsinki-NLP/opus-mt-ar-eo
Helsinki-NLP
marian
11
7
transformers
0
translation
true
true
false
apache-2.0
['ar', 'eo']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,077
### ara-epo

* source group: Arabic
* target group: Esperanto
* OPUS readme: [ara-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md)
* model: transformer-align
* source language(s): apc apc_Latn ara arq arq_Latn arz
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.epo | 18.9 | 0.376 |

### System Info:
- hf_name: ara-epo
- source_languages: ara
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'eo']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt
- src_alpha3: ara
- tgt_alpha3: epo
- short_pair: ar-eo
- chrF2_score: 0.376
- bleu: 18.9
- brevity_penalty: 0.948
- ref_len: 4506.0
- src_name: Arabic
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: ar
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ara-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ar-es
Helsinki-NLP
marian
11
487
transformers
0
translation
true
true
false
apache-2.0
['ar', 'es']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,078
### ara-spa

* source group: Arabic
* target group: Spanish
* OPUS readme: [ara-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md)
* model: transformer
* source language(s): apc apc_Latn ara arq
* target language(s): spa
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.spa | 46.0 | 0.641 |

### System Info:
- hf_name: ara-spa
- source_languages: ara
- target_languages: spa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-spa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'es']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'spa'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-spa/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: spa
- short_pair: ar-es
- chrF2_score: 0.6409999999999999
- bleu: 46.0
- brevity_penalty: 0.9620000000000001
- ref_len: 9708.0
- src_name: Arabic
- tgt_name: Spanish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: es
- prefer_old: False
- long_pair: ara-spa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
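Not part of the original card: a sketch of how BLEU and chr-F scores like those above can be computed with `sacrebleu`. The hypothesis/reference pair below is a toy example; reproducing the card's 46.0 / 0.641 would require the linked test-set translations and the original decoding setup.

```python
# Sketch only: corpus-level BLEU and chrF with sacrebleu on toy data.
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["Hola, ¿cómo estás?"]      # system outputs (toy example)
references = [["Hola, ¿cómo estás?"]]    # one reference stream, aligned with hypotheses

print(BLEU().corpus_score(hypotheses, references))
print(CHRF().corpus_score(hypotheses, references))
```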
Helsinki-NLP/opus-mt-ar-fr
Helsinki-NLP
marian
10
101
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
770
### opus-mt-ar-fr

* source languages: ar
* target languages: fr
* OPUS readme: [ar-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ar-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ar-fr/opus-2020-01-24.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.ar.fr | 43.5 | 0.602 |
Helsinki-NLP/opus-mt-ar-he
Helsinki-NLP
marian
11
10
transformers
0
translation
true
true
false
apache-2.0
['ar', 'he']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,052
### ara-heb

* source group: Arabic
* target group: Hebrew
* OPUS readme: [ara-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-heb/README.md)
* model: transformer
* source language(s): apc apc_Latn ara arq arz
* target language(s): heb
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.heb | 40.4 | 0.605 |

### System Info:
- hf_name: ara-heb
- source_languages: ara
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'he']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'heb'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-heb/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: heb
- short_pair: ar-he
- chrF2_score: 0.605
- bleu: 40.4
- brevity_penalty: 1.0
- ref_len: 6801.0
- src_name: Arabic
- tgt_name: Hebrew
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: he
- prefer_old: False
- long_pair: ara-heb
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ar-it
Helsinki-NLP
marian
11
21
transformers
0
translation
true
true
false
apache-2.0
['ar', 'it']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,061
### ara-ita

* source group: Arabic
* target group: Italian
* OPUS readme: [ara-ita](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md)
* model: transformer
* source language(s): ara
* target language(s): ita
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.ita | 44.2 | 0.658 |

### System Info:
- hf_name: ara-ita
- source_languages: ara
- target_languages: ita
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-ita/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'it']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'ita'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-ita/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: ita
- short_pair: ar-it
- chrF2_score: 0.6579999999999999
- bleu: 44.2
- brevity_penalty: 0.9890000000000001
- ref_len: 1495.0
- src_name: Arabic
- tgt_name: Italian
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: it
- prefer_old: False
- long_pair: ara-ita
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ar-pl
Helsinki-NLP
marian
11
13
transformers
0
translation
true
true
false
apache-2.0
['ar', 'pl']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,037
### ara-pol

* source group: Arabic
* target group: Polish
* OPUS readme: [ara-pol](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md)
* model: transformer
* source language(s): ara arz
* target language(s): pol
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.pol | 38.0 | 0.623 |

### System Info:
- hf_name: ara-pol
- source_languages: ara
- target_languages: pol
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-pol/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'pl']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'pol'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-pol/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: pol
- short_pair: ar-pl
- chrF2_score: 0.623
- bleu: 38.0
- brevity_penalty: 0.948
- ref_len: 1171.0
- src_name: Arabic
- tgt_name: Polish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: pl
- prefer_old: False
- long_pair: ara-pol
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ar-ru
Helsinki-NLP
marian
11
149
transformers
0
translation
true
true
false
apache-2.0
['ar', 'ru']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,043
### ara-rus

* source group: Arabic
* target group: Russian
* OPUS readme: [ara-rus](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-rus/README.md)
* model: transformer
* source language(s): apc ara arz
* target language(s): rus
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.rus | 42.5 | 0.605 |

### System Info:
- hf_name: ara-rus
- source_languages: ara
- target_languages: rus
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-rus/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'ru']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'rus'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-rus/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: rus
- short_pair: ar-ru
- chrF2_score: 0.605
- bleu: 42.5
- brevity_penalty: 0.97
- ref_len: 21830.0
- src_name: Arabic
- tgt_name: Russian
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: ru
- prefer_old: False
- long_pair: ara-rus
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
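Not part of the original card: a sketch of pre-fetching the converted checkpoint files with `huggingface_hub` before loading them offline; the returned path is simply wherever the local cache resolves.

```python
# Sketch only: download the converted ar-ru checkpoint into the local HF cache.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Helsinki-NLP/opus-mt-ar-ru")
print("Files cached at:", local_dir)
```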
Helsinki-NLP/opus-mt-ar-tr
Helsinki-NLP
marian
11
20
transformers
0
translation
true
true
false
apache-2.0
['ar', 'tr']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
2,075
### ara-tur

* source group: Arabic
* target group: Turkish
* OPUS readme: [ara-tur](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md)
* model: transformer
* source language(s): apc_Latn ara ara_Latn arq_Latn
* target language(s): tur
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus-2020-07-03.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip)
* test set translations: [opus-2020-07-03.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt)
* test set scores: [opus-2020-07-03.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.ara.tur | 33.1 | 0.619 |

### System Info:
- hf_name: ara-tur
- source_languages: ara
- target_languages: tur
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-tur/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'tr']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'tur'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-tur/opus-2020-07-03.test.txt
- src_alpha3: ara
- tgt_alpha3: tur
- short_pair: ar-tr
- chrF2_score: 0.619
- bleu: 33.1
- brevity_penalty: 0.9570000000000001
- ref_len: 6949.0
- src_name: Arabic
- tgt_name: Turkish
- train_date: 2020-07-03
- src_alpha2: ar
- tgt_alpha2: tr
- prefer_old: False
- long_pair: ara-tur
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-art-en
Helsinki-NLP
marian
11
10
transformers
0
translation
true
true
false
apache-2.0
['eo', 'io', 'art', 'en']
null
null
2
2
0
0
0
0
0
['translation']
false
true
true
3,217
### art-eng

* source group: Artificial languages
* target group: English
* OPUS readme: [art-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md)
* model: transformer
* source language(s): afh_Latn avk_Latn dws_Latn epo ido ido_Latn ile_Latn ina_Latn jbo jbo_Cyrl jbo_Latn ldn_Latn lfn_Cyrl lfn_Latn nov_Latn qya qya_Latn sjn_Latn tlh_Latn tzl tzl_Latn vol_Latn
* target language(s): eng
* model: transformer
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.afh-eng.afh.eng | 1.2 | 0.099 |
| Tatoeba-test.avk-eng.avk.eng | 0.4 | 0.105 |
| Tatoeba-test.dws-eng.dws.eng | 1.6 | 0.076 |
| Tatoeba-test.epo-eng.epo.eng | 34.6 | 0.530 |
| Tatoeba-test.ido-eng.ido.eng | 12.7 | 0.310 |
| Tatoeba-test.ile-eng.ile.eng | 4.6 | 0.218 |
| Tatoeba-test.ina-eng.ina.eng | 5.8 | 0.254 |
| Tatoeba-test.jbo-eng.jbo.eng | 0.2 | 0.115 |
| Tatoeba-test.ldn-eng.ldn.eng | 0.7 | 0.083 |
| Tatoeba-test.lfn-eng.lfn.eng | 1.8 | 0.172 |
| Tatoeba-test.multi.eng | 11.6 | 0.287 |
| Tatoeba-test.nov-eng.nov.eng | 5.1 | 0.215 |
| Tatoeba-test.qya-eng.qya.eng | 0.7 | 0.113 |
| Tatoeba-test.sjn-eng.sjn.eng | 0.9 | 0.090 |
| Tatoeba-test.tlh-eng.tlh.eng | 0.2 | 0.124 |
| Tatoeba-test.tzl-eng.tzl.eng | 1.4 | 0.109 |
| Tatoeba-test.vol-eng.vol.eng | 0.5 | 0.115 |

### System Info:
- hf_name: art-eng
- source_languages: art
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/art-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['eo', 'io', 'art', 'en']
- src_constituents: {'sjn_Latn', 'tzl', 'vol_Latn', 'qya', 'tlh_Latn', 'ile_Latn', 'ido_Latn', 'tzl_Latn', 'jbo_Cyrl', 'jbo', 'lfn_Latn', 'nov_Latn', 'dws_Latn', 'ldn_Latn', 'avk_Latn', 'lfn_Cyrl', 'ina_Latn', 'jbo_Latn', 'epo', 'afh_Latn', 'qya_Latn', 'ido'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/art-eng/opus2m-2020-07-31.test.txt
- src_alpha3: art
- tgt_alpha3: eng
- short_pair: art-en
- chrF2_score: 0.287
- bleu: 11.6
- brevity_penalty: 1.0
- ref_len: 73037.0
- src_name: Artificial languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: art
- tgt_alpha2: en
- prefer_old: False
- long_pair: art-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
Helsinki-NLP/opus-mt-ase-de
Helsinki-NLP
marian
10
8
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-ase-de

* source languages: ase
* target languages: de
* OPUS readme: [ase-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-de/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.de | 27.2 | 0.478 |
Helsinki-NLP/opus-mt-ase-en
Helsinki-NLP
marian
10
32
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-ase-en

* source languages: ase
* target languages: en
* OPUS readme: [ase-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.en | 99.5 | 0.997 |
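Not part of the original card: a sketch of fetching the original Marian weights archive linked above and listing its members. The archive layout is not described in the card, so the code only prints member names rather than assuming any particular file structure.

```python
# Sketch only: download the original OPUS-MT weights archive and list its contents.
import io
import zipfile

import requests

url = "https://object.pouta.csc.fi/OPUS-MT-models/ase-en/opus-2020-01-20.zip"
resp = requests.get(url, timeout=60)
resp.raise_for_status()

with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
    for name in zf.namelist():
        print(name)
```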
Helsinki-NLP/opus-mt-ase-es
Helsinki-NLP
marian
10
7
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-ase-es

* source languages: ase
* target languages: es
* OPUS readme: [ase-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-es/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.es | 31.7 | 0.498 |
Helsinki-NLP/opus-mt-ase-fr
Helsinki-NLP
marian
10
11
transformers
0
translation
true
true
false
apache-2.0
null
null
null
0
0
0
0
0
0
0
['translation']
false
true
true
776
### opus-mt-ase-fr

* source languages: ase
* target languages: fr
* OPUS readme: [ase-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ase-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/ase-fr/opus-2020-01-20.eval.txt)

## Benchmarks

| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.ase.fr | 37.8 | 0.553 |