modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-29 06:27:56) | downloads (int64, 0-223M) | likes (int64, 0-11.7k) | library_name (string, 534 distinct values) | tags (list, 1-4.05k items) | pipeline_tag (string, 55 distinct values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-29 06:27:11) | card (string, 11-1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
KyanChen/BuildingExtraction | KyanChen | 2022-06-29T02:13:33Z | 0 | 1 | null | ["region:us"] | null | 2022-06-29T01:34:01Z |
# STTNet
Paper: Building Extraction from Remote Sensing Images with Sparse Token Transformers
1. Prepare Data
Prepare data for the training, validation, and test phases. All images have a resolution of $512 \times 512$. Please refer to the **Data** directory.
For larger images, you can cut them into patches, together with their labels, using **Tools/CutImgSegWithLabel.py**.
2. Get Data List
Please refer to **Tools/GetTrainValTestCSV.py** to get the train, val, and test csv files.
3. Get Image Info
Please refer to **Tools/GetImgMeanStd.py** to get the mean and standard deviation of all image pixels in the training set (see the sketch after this list).
4. Modify Model Info
Please modify the model information if you want, or keep the default configuration.
5. Run to Train
Train the model in **Main.py**.
6. [Optional] Run to Test
Test the model with checkpoint in **Test.py**.
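As a rough illustration of step 3, the per-channel mean and standard deviation can be computed with a short script like the following. This is only a hedged sketch: the actual logic lives in **Tools/GetImgMeanStd.py**, and the directory layout used here is an assumption.
```python
# Sketch only: compute per-channel mean/std over the training images.
# The real implementation is Tools/GetImgMeanStd.py; the paths here are hypothetical.
import glob
import numpy as np
from PIL import Image

pixel_sum = np.zeros(3)
pixel_sq_sum = np.zeros(3)
count = 0
for path in glob.glob("Data/train/img/*.png"):  # assumed layout
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    pixel_sum += img.reshape(-1, 3).sum(axis=0)
    pixel_sq_sum += (img.reshape(-1, 3) ** 2).sum(axis=0)
    count += img.shape[0] * img.shape[1]

mean = pixel_sum / count
std = np.sqrt(pixel_sq_sum / count - mean ** 2)
print("mean:", mean, "std:", std)
```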
We have provided pretrained models on INRIA and WHU Datasets. The pt models are in folder **Pretrain**.
If you have any questions, please refer to [our paper](https://www.mdpi.com/2072-4292/13/21/4441) or contact us by email.
```
@Article{rs13214441,
AUTHOR = {Chen, Keyan and Zou, Zhengxia and Shi, Zhenwei},
TITLE = {Building Extraction from Remote Sensing Images with Sparse Token Transformers},
JOURNAL = {Remote Sensing},
VOLUME = {13},
YEAR = {2021},
NUMBER = {21},
ARTICLE-NUMBER = {4441},
URL = {https://www.mdpi.com/2072-4292/13/21/4441},
ISSN = {2072-4292},
DOI = {10.3390/rs13214441}
}
```
|
gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1 | gary109 | 2022-06-29T01:00:45Z | 3 | 1 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "gary109/AI_Light_Dance", "generated_from_trainer", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-06-28T05:51:25Z |
---
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1
This model is a fine-tuned version of [gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4](https://huggingface.co/gary109/ai-light-dance_singing_ft_wav2vec2-large-xlsr-53-5gram-v4) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2143
- Wer: 0.1211
## Model description
More information needed
## Intended uses & limitations
More information needed
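A minimal usage sketch, assuming the standard `transformers` ASR pipeline can load this checkpoint (the audio file path is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint; "singing.wav" is a hypothetical audio file.
asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53-5gram-v4-1",
)
print(asr("singing.wav")["text"])
```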
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2609 | 1.0 | 280 | 0.2313 | 0.1376 |
| 0.2297 | 2.0 | 560 | 0.2240 | 0.1397 |
| 0.1951 | 3.0 | 840 | 0.2280 | 0.1361 |
| 0.1816 | 4.0 | 1120 | 0.2215 | 0.1282 |
| 0.1634 | 5.0 | 1400 | 0.2180 | 0.1240 |
| 0.1338 | 6.0 | 1680 | 0.2226 | 0.1241 |
| 0.1411 | 7.0 | 1960 | 0.2143 | 0.1211 |
| 0.1143 | 8.0 | 2240 | 0.2181 | 0.1174 |
| 0.1127 | 9.0 | 2520 | 0.2215 | 0.1167 |
| 0.105 | 10.0 | 2800 | 0.2196 | 0.1160 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
jdang/dummy-model | jdang | 2022-06-29T00:30:36Z | 3 | 0 | transformers | ["transformers", "pytorch", "camembert", "fill-mask", "exbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1910.01108", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | fill-mask | 2022-06-29T00:15:47Z |
---
language: en
tags:
- exbert
license: apache-2.0
datasets:
- bookcorpus
- wikipedia
---
# DistilBERT base model (dummy test)
This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was
introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found
[here](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation). This model is uncased: it does
not make a difference between english and English.
## Model description
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a
self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only,
with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic
process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained
with three objectives:
- Distillation loss: the model was trained to return the same probabilities as the BERT base model.
- Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a
sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the
model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that
usually see the words one after the other, or from autoregressive models like GPT which internally mask the future
tokens. It allows the model to learn a bidirectional representation of the sentence.
- Cosine embedding loss: the model was also trained to generate hidden states as close as possible to those of the BERT base
model.
This way, the model learns the same inner representation of the English language as its teacher model, while being
faster for inference or downstream tasks.
## Intended uses & limitations
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to
be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=distilbert) to look for
fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked)
to make decisions, such as sequence classification, token classification or question answering. For tasks such as text
generation, you should look at a model like GPT-2.
### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("Hello I'm a [MASK] model.")
[{'sequence': "[CLS] hello i'm a role model. [SEP]",
'score': 0.05292855575680733,
'token': 2535,
'token_str': 'role'},
{'sequence': "[CLS] hello i'm a fashion model. [SEP]",
'score': 0.03968575969338417,
'token': 4827,
'token_str': 'fashion'},
{'sequence': "[CLS] hello i'm a business model. [SEP]",
'score': 0.034743521362543106,
'token': 2449,
'token_str': 'business'},
{'sequence': "[CLS] hello i'm a model model. [SEP]",
'score': 0.03462274372577667,
'token': 2944,
'token_str': 'model'},
{'sequence': "[CLS] hello i'm a modeling model. [SEP]",
'score': 0.018145186826586723,
'token': 11643,
'token_str': 'modeling'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import DistilBertTokenizer, DistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import DistilBertTokenizer, TFDistilBertModel
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
Even if the training data used for this model could be characterized as fairly neutral, this model can have biased
predictions. It also inherits some of
[the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias).
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased')
>>> unmasker("The White man worked as a [MASK].")
[{'sequence': '[CLS] the white man worked as a blacksmith. [SEP]',
'score': 0.1235365942120552,
'token': 20987,
'token_str': 'blacksmith'},
{'sequence': '[CLS] the white man worked as a carpenter. [SEP]',
'score': 0.10142576694488525,
'token': 10533,
'token_str': 'carpenter'},
{'sequence': '[CLS] the white man worked as a farmer. [SEP]',
'score': 0.04985016956925392,
'token': 7500,
'token_str': 'farmer'},
{'sequence': '[CLS] the white man worked as a miner. [SEP]',
'score': 0.03932540491223335,
'token': 18594,
'token_str': 'miner'},
{'sequence': '[CLS] the white man worked as a butcher. [SEP]',
'score': 0.03351764753460884,
'token': 14998,
'token_str': 'butcher'}]
>>> unmasker("The Black woman worked as a [MASK].")
[{'sequence': '[CLS] the black woman worked as a waitress. [SEP]',
'score': 0.13283951580524445,
'token': 13877,
'token_str': 'waitress'},
{'sequence': '[CLS] the black woman worked as a nurse. [SEP]',
'score': 0.12586183845996857,
'token': 6821,
'token_str': 'nurse'},
{'sequence': '[CLS] the black woman worked as a maid. [SEP]',
'score': 0.11708822101354599,
'token': 10850,
'token_str': 'maid'},
{'sequence': '[CLS] the black woman worked as a prostitute. [SEP]',
'score': 0.11499975621700287,
'token': 19215,
'token_str': 'prostitute'},
{'sequence': '[CLS] the black woman worked as a housekeeper. [SEP]',
'score': 0.04722772538661957,
'token': 22583,
'token_str': 'housekeeper'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset
consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia)
(excluding lists, tables and headers).
## Training procedure
### Preprocessing
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are
then of the form:
```
[CLS] Sentence A [SEP] Sentence B [SEP]
```
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in
the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a
consecutive span of text usually longer than a single sentence. The only constraint is that the two
"sentences" have a combined length of less than 512 tokens.
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `[MASK]`.
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
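A minimal sketch of that 15% / 80-10-10 masking rule over a sequence of token IDs (illustrative only; the actual pretraining code also handles special tokens and other details):
```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Return (inputs, labels) following the 15% / 80-10-10 masking rule."""
    inputs, labels = list(token_ids), [-100] * len(token_ids)  # -100 = ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:
            labels[i] = tok                               # predict the original token here
            r = random.random()
            if r < 0.8:
                inputs[i] = mask_id                       # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = random.randrange(vocab_size)  # 10%: random token (the real code ensures it differs)
            # remaining 10%: keep the token unchanged
    return inputs, labels
```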
### Pretraining
The model was trained on 8 16 GB V100 GPUs for 90 hours. See the
[training code](https://github.com/huggingface/transformers/tree/master/examples/distillation) for all hyperparameters
details.
## Evaluation results
When fine-tuned on downstream tasks, this model achieves the following results:
GLUE test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
| | 82.2 | 88.5 | 89.2 | 91.3 | 51.3 | 85.8 | 87.5 | 59.9 |
### BibTeX entry and citation info
```bibtex
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}
```
<a href="https://huggingface.co/exbert/?model=distilbert-base-uncased">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
huggingtweets/elonmusk-mrbeast | huggingtweets | 2022-06-29T00:11:17Z | 3 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-29T00:09:36Z |
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-mrbeast/1656461472374/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529956155937759233/Nyn1HZWF_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/994592419705274369/RLplF55e_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & MrBeast</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-mrbeast</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & MrBeast.
| Data | Elon Musk | MrBeast |
| --- | --- | --- |
| Tweets downloaded | 3200 | 3246 |
| Retweets | 143 | 155 |
| Short tweets | 972 | 716 |
| Tweets kept | 2085 | 2375 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/w0y9m5al/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-mrbeast's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zbek5pwp) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zbek5pwp/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-mrbeast')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
workRL/q-Taxi-v3 | workRL | 2022-06-28T23:49:57Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2022-06-28T23:49:51Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are the helper functions defined in the
# Hugging Face Deep RL course notebook; this card assumes they are available.
model = load_from_hub(repo_id="workRL/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
lucataco/DialogGPT-med-Rick | lucataco | 2022-06-28T23:17:11Z | 5 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-06T04:54:18Z |
---
tags:
- conversational
---
# Rick Dialog GPT Model Medium 12
Trained on the Kaggle Rick and Morty TV transcript.
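The card gives no usage example; a hedged sketch of the usual DialoGPT-style chat loop, assuming the checkpoint works with the standard GPT-2 tokenizer and causal-LM classes:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lucataco/DialogGPT-med-Rick")
model = AutoModelForCausalLM.from_pretrained("lucataco/DialogGPT-med-Rick")

# Encode a user utterance and let the model reply in Rick's style.
input_ids = tokenizer.encode("Hi Rick, what are you building?" + tokenizer.eos_token,
                             return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100,
                           pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```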
|
Aalaa/distilgpt2-finetuned-wikitext2 | Aalaa | 2022-06-28T21:26:23Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-06-28T01:45:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
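A minimal text-generation sketch for this checkpoint (assuming the standard `transformers` pipeline; the prompt is arbitrary):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="Aalaa/distilgpt2-finetuned-wikitext2")
print(generator("The history of natural language processing", max_length=50,
                num_return_sequences=1)[0]["generated_text"])
```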
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
13hannes11/master_thesis_models | 13hannes11 | 2022-06-28T21:14:01Z | 0 | 0 | null | ["tensorboard", "focus-prediction", "microscopy", "pytorch", "license:mit", "region:us"] | null | 2022-03-08T16:31:24Z |
---
name: "K-POP"
license: "mit"
metrics:
- MAE
- PLCC
- SRCC
- R2
tags:
- focus-prediction
- microscopy
- pytorch
---
# K-POP: Predicting Distance to Focal Plane for Kato-Katz Prepared Microscopy Slides Using Deep Learning
<a href="https://pytorch.org/get-started/locally/"><img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-ee4c2c?logo=pytorch&logoColor=white"></a><a href="https://pytorchlightning.ai/">
<img alt="Lightning" src="https://img.shields.io/badge/-Lightning-792ee5?logo=pytorchlightning&logoColor=white"></a>
<a href="https://hydra.cc/"><img alt="Config: Hydra" src="https://img.shields.io/badge/Config-Hydra-89b8cd"></a>
## Description
This repository contains the models and training pipeline for my master's thesis. The main repository is hosted on [GitHub](https://github.com/13hannes11/master_thesis_code).
The project structure is based on the template by [ashleve](https://github.com/ashleve/lightning-hydra-template).
The metadata is stored in `data/focus150/`. The relevant files are `test_metadata.csv`, `train_metadata.csv`, and `validation_metadata.csv`. The image data (150 x 150 px images) is not published with this repository, so training runs are not possible without it. The layout of the metadata files is as follows:
```csv
,image_path,scan_uuid,study_id,focus_height,original_filename,stack_id,obj_name
0,31/b0d4005e-57d0-4516-a239-abe02a8d0a67/I02413_X009_Y014_Z5107_750_300.jpg,b0d4005e-57d0-4516-a239-abe02a8d0a67,31,-0.013672000000000017,I02413_X009_Y014_Z5107.jpg,1811661,schistosoma
1,31/274d8969-aa7c-4ac0-be60-e753579393ad/I01981_X019_Y014_Z4931_450_0.jpg,274d8969-aa7c-4ac0-be60-e753579393ad,31,-0.029296999999999962,I01981_X019_Y014_Z4931.jpg,1661371,schistosoma
...
```
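A short sketch of reading that metadata with pandas; this is an assumption about how one might inspect the files, not part of the training pipeline itself:
```python
import pandas as pd

# The first unnamed column in the CSV is the written row index.
train = pd.read_csv("data/focus150/train_metadata.csv", index_col=0)
print(train[["image_path", "focus_height", "obj_name"]].head())
print("focus_height range:", train["focus_height"].min(), "to", train["focus_height"].max())
```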
## How to run
Train model with chosen experiment configuration from `configs/experiment/`
```bash
python train.py experiment=focusResNet_150
```
Train with hyperparameter search from `configs/hparams_search/`
```bash
python train.py -m hparams_search=focusResNetMSE_150
```
You can override any parameter from the command line like this:
```bash
python train.py trainer.max_epochs=20 datamodule.batch_size=64
```
## Jupyter notebooks
Figures and other evaluation code were run in Jupyter notebooks, which are available in `notebooks/`.
|
rishiyoung/xlm-roberta-base-finetuned-panx-de | rishiyoung | 2022-06-28T20:49:34Z | 3 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "xlm-roberta", "token-classification", "generated_from_trainer", "dataset:xtreme", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-06-28T20:26:08Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
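A minimal usage sketch for the fine-tuned NER model (assuming the standard token-classification pipeline; the German example sentence is arbitrary):
```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="rishiyoung/xlm-roberta-base-finetuned-panx-de",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```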
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
PrimeQA/tydiqa-boolean-question-classifier | PrimeQA | 2022-06-28T20:19:31Z | 5,988 | 1 | transformers | ["transformers", "pytorch", "bert", "text-classification", "arxiv:1810.04805", "arxiv:2206.08441", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-06-09T14:54:52Z |
---
license: apache-2.0
---
## Model description
A question type classification model based on multilingual BERT.
The question type classifier takes as input the question, and returns a label that distinguishes between boolean and short answer extractive questions.
The model was initialized with [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) and fine-tuned on the answerable subset of [TyDiQA](https://huggingface.co/datasets/tydiqa) train questions.
## Intended uses & limitations
You can use the raw model for question classification. Biases associated with the pre-existing language model, bert-base-multilingual-cased, may be present in our fine-tuned model, tydiqa-boolean-question-classifier.
## Usage
You can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework for supporting boolean questions in reading comprehension, as in this [example](https://github.com/primeqa/primeqa/tree/main/examples/boolqa).
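Outside of PrimeQA, the checkpoint can also be tried as a plain sequence-classification model. A hedged sketch: the mapping of output labels to boolean vs. short-answer questions is not documented here and should be checked in the model's `config.json`.
```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="PrimeQA/tydiqa-boolean-question-classifier")
print(classifier("Is the sky blue?"))
print(classifier("What year did the French Revolution begin?"))
```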
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1810-04805,
author = {Jacob Devlin and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
Understanding},
journal = {CoRR},
volume = {abs/1810.04805},
year = {2018},
url = {http://arxiv.org/abs/1810.04805},
archivePrefix = {arXiv},
eprint = {1810.04805},
timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.08441,
author = {McCarley, Scott and
Bornea, Mihaela and
Rosenthal, Sara and
Ferritto, Anthony and
Sultan, Md Arafat and
Sil, Avirup and
Florian, Radu},
title = {GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions},
journal = {CoRR},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2206.08441},
}
```
|
syndi-models/article-title-generator | syndi-models | 2022-06-28T20:08:16Z | 3 | 0 | transformers | ["transformers", "pytorch", "t5", "text2text-generation", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text2text-generation | 2023-05-09T18:49:29Z |
---
license: mit
---
## Article Title Generator
The model is based on the T5 language model and trained using a large collection of Medium articles.
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("czearing/article-title-generator")
model = AutoModel.from_pretrained("czearing/article-title-generator")
```
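The snippet above only loads the weights; to actually produce a title you need the seq2seq head and a `generate` call. A hedged sketch (the model class and the input format are assumptions, since the card does not show the generation step):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("czearing/article-title-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("czearing/article-title-generator")

# A hypothetical article body; the model should return a short candidate title.
article = "Transfer learning lets you reuse a pretrained model on a new task with little labeled data."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
title_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(title_ids[0], skip_special_tokens=True))
```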
## License
MIT
|
PrimeQA/tydiqa-boolean-answer-classifier | PrimeQA | 2022-06-28T19:52:14Z | 36 | 2 | transformers | ["transformers", "pytorch", "xlm-roberta", "text-classification", "arxiv:2112.07772", "arxiv:2206.08441", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-06-09T17:11:51Z |
---
license: apache-2.0
---
## Model description
An answer classification model for boolean questions based on XLM-RoBERTa.
The answer classifier takes as input a boolean question and a passage, and returns a label (yes, no-answer, no).
The model was initialized with [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tuned on the boolean questions from [TyDiQA](https://huggingface.co/datasets/tydiqa), as well as [BoolQ-X](https://arxiv.org/abs/2112.07772#).
## Intended uses & limitations
You can use the raw model for question classification. Biases associated with the pre-existing language model, xlm-roberta-large, may be present in our fine-tuned model, tydiqa-boolean-answer-classifier.
## Usage
You can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework for supporting boolean questions in reading comprehension: [examples](https://github.com/primeqa/primeqa/tree/main/examples/boolqa).
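As with the question classifier, the checkpoint can also be tried directly as a sequence-classification model. A hedged sketch: the exact formatting of question and passage, and the mapping of label indices to yes / no-answer / no, are assumptions that should be verified against the PrimeQA code.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("PrimeQA/tydiqa-boolean-answer-classifier")
model = AutoModelForSequenceClassification.from_pretrained("PrimeQA/tydiqa-boolean-answer-classifier")

question = "Is Mount Everest the tallest mountain on Earth?"
passage = "Mount Everest is Earth's highest mountain above sea level."
inputs = tok(question, passage, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(-1)
print(probs, model.config.id2label)
```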
### BibTeX entry and citation info
```bibtex
@article{Rosenthal2021DoAT,
title={Do Answers to Boolean Questions Need Explanations? Yes},
author={Sara Rosenthal and Mihaela A. Bornea and Avirup Sil and Radu Florian and Scott McCarley},
journal={ArXiv},
year={2021},
volume={abs/2112.07772}
}
```
```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.08441,
author = {McCarley, Scott and
Bornea, Mihaela and
Rosenthal, Sara and
Ferritto, Anthony and
Sultan, Md Arafat and
Sil, Avirup and
Florian, Radu},
title = {GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions},
journal = {CoRR},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2206.08441},
}
```
|
YushiUeda/callhome_adapt_real | YushiUeda | 2022-06-28T19:34:58Z | 5 | 0 | espnet | ["espnet", "audio", "diarization", "dataset:callhome", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | null | 2022-06-28T19:34:35Z |
---
tags:
- espnet
- audio
- diarization
language: noinfo
datasets:
- callhome
license: cc-by-4.0
---
## ESPnet2 DIAR model
### `YushiUeda/callhome_adapt_real`
This model was trained by YushiUeda using the callhome recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 0cabe65afd362122e77b04e2e967986a91de0fd8
pip install -e .
cd egs2/callhome/diar1
./run.sh --skip_data_prep false --skip_train true --download_model YushiUeda/callhome_adapt_real
```
<!-- Generated by scripts/utils/show_diar_result.sh -->
# RESULTS
## Environments
- date: `Mon Jun 20 10:30:23 EDT 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 202205`
- pytorch version: `pytorch 1.9.1+cu102`
- Git hash: `fc62b1ce3e50c5ef8a2ac8cedb0d92ac41df54ca`
- Commit date: `Thu Jun 9 16:29:52 2022 +0900`
## diar_train_diar_eda_adapt_real_lr0001
### DER
diarized_callhome2_spkall
|threshold_median_collar|DER|
|---|---|
|result_th0.3_med11_collar0.25|22.29|
|result_th0.3_med1_collar0.25|23.27|
|result_th0.4_med11_collar0.25|19.85|
|result_th0.4_med1_collar0.25|20.80|
|result_th0.5_med11_collar0.25|19.26|
|result_th0.5_med1_collar0.25|20.18|
|result_th0.6_med11_collar0.25|20.24|
|result_th0.6_med1_collar0.25|21.08|
|result_th0.7_med11_collar0.25|22.38|
|result_th0.7_med1_collar0.25|23.17|
## DIAR config
<details><summary>expand</summary>
```
config: conf/tuning/train_diar_eda_adapt.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/diar_train_diar_eda_adapt_real_lr0001
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
- - train
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 16
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- exp/diar_train_diar_eda_adapt_simu/latest.pth
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 1
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: sorted
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/callhome1_spkall/wav.scp
- speech
- sound
- - dump/raw/callhome1_spkall/espnet_rttm
- spk_labels
- rttm
valid_data_path_and_name_and_type:
- - dump/raw/callhome2_spkall/wav.scp
- speech
- sound
- - dump/raw/callhome2_spkall/espnet_rttm
- spk_labels
- rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: null
scheduler_conf: {}
num_spk: 7
init: null
input_size: null
model_conf:
attractor_weight: 1.0
use_preprocessor: true
frontend: default
frontend_conf:
fs: 8k
hop_length: 128
specaug: specaug
specaug_conf:
apply_time_warp: false
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 4
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.1
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf:
win_length: 1024
hop_length: 512
attractor: rnn
attractor_conf:
unit: 256
layer: 1
dropout: 0.0
attractor_grad: false
required:
- output_dir
version: '202204'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
YushiUeda/callhome_adapt_simu | YushiUeda | 2022-06-28T19:33:39Z | 3 | 0 | espnet | ["espnet", "audio", "diarization", "dataset:callhome", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | null | 2022-06-28T19:32:41Z |
---
tags:
- espnet
- audio
- diarization
language: noinfo
datasets:
- callhome
license: cc-by-4.0
---
## ESPnet2 DIAR model
### `YushiUeda/callhome_adapt_simu`
This model was trained by YushiUeda using the callhome recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 0cabe65afd362122e77b04e2e967986a91de0fd8
pip install -e .
cd egs2/callhome/diar1
./run.sh --skip_data_prep false --skip_train true --download_model YushiUeda/callhome_adapt_simu
```
## DIAR config
<details><summary>expand</summary>
```
config: conf/tuning/train_diar_eda_adapt.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/diar_train_diar_eda_adapt_simu
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 43777
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
- - train
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- exp/diar_train_diar_eda_5_raw/latest.pth
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/simu/data/swb_sre_tr_ns1n2n3n4_beta2n2n5n9_100000/wav.scp
- speech
- sound
- - dump/raw/simu/data/swb_sre_tr_ns1n2n3n4_beta2n2n5n9_100000/espnet_rttm
- spk_labels
- rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/swb_sre_cv_ns1n2n3n4_beta2n2n5n9_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/swb_sre_cv_ns1n2n3n4_beta2n2n5n9_500/espnet_rttm
- spk_labels
- rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0001
scheduler: null
scheduler_conf: {}
num_spk: 4
init: null
input_size: null
model_conf:
attractor_weight: 1.0
use_preprocessor: true
frontend: default
frontend_conf:
fs: 8k
hop_length: 128
specaug: specaug
specaug_conf:
apply_time_warp: false
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 4
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.1
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf:
win_length: 1024
hop_length: 512
attractor: rnn
attractor_conf:
unit: 256
layer: 1
dropout: 0.0
attractor_grad: true
required:
- output_dir
version: '202204'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
b3ck1/gpt-neo-125M-finetuned-beer-recipes | b3ck1 | 2022-06-28T19:03:17Z | 14 | 3 | transformers | ["transformers", "pytorch", "gpt_neo", "text-generation", "text generation", "causal-lm", "en", "dataset:custom", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
language:
- en
tags:
- text generation
- pytorch
- causal-lm
license: apache-2.0
datasets:
- custom
widget:
- text: "style: Pilsner\nbatch_size: 20\nefficiency: 75\nboil_size:"
example_title: "Pilsener"
- text: "style: IPA\nbatch_size: 20\nefficiency: 75\nboil_size:"
example_title: "IPA"
- text: "style: Scottish Ale\nbatch_size: 20\nefficiency: 75\nboil_size:"
example_title: "Scottish Ale"
inference:
parameters:
do_sample: true
top_k: 10
top_p: 0.99
max_length: 500
---
# GPT-Neo 125M finetuned with beer recipes
## Model Description
GPT-Neo 125M is a transformer model based on EleutherAI's replication of the GPT-3 architecture https://huggingface.co/EleutherAI/gpt-neo-125M.
It generates recipes for brewing beer in a YAML-like format which can be easily used for different purposes.
## Training data
This model was trained on a custom dataset of ~ 76,800 beer recipes from the internet. It includes recipes for the following
styles of beer:
* Strong American Ale
* Pale American Ale
* India Pale Ale (IPA)
* Standard American Beer
* Stout
* English Pale Ale
* IPA
* American Porter and Stout
* Sour Ale
* Irish Beer
* Strong British Ale
* Belgian and French Ale
* German Wheat and Rye Beer
* Czech Lager
* Spice/Herb/Vegetable Beer
* Specialty Beer
* American Ale
* Pilsner
* Belgian Ale
* Strong Belgian Ale
* Bock
* Brown British Beer
* German Wheat Beer
* Fruit Beer
* Amber Malty European Lager
* Pale Malty European Lager
* British Bitter
* Amber and Brown American Beer
* Light Hybrid Beer
* Pale Commonwealth Beer
* American Wild Ale
* European Amber Lager
* Belgian Strong Ale
* International Lager
* Amber Bitter European Lager
* Light Lager
* Scottish and Irish Ale
* European Sour Ale
* Trappist Ale
* Strong European Beer
* Porter
* Historical Beer
* Pale Bitter European Beer
* Amber Hybrid Beer
* Smoke Flavored/Wood-Aged Beer
* Spiced Beer
* Dark European Lager
* Alternative Fermentables Beer
* Mead
* Strong Ale
* Dark British Beer
* Scottish Ale
* Smoked Beer
* English Brown Ale
* Dark Lager
* Cider or Perry
* Wood Beer
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different recipe each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='b3ck1/gpt-neo-125M-finetuned-beer-recipes')
>>> generator("style: Pilsner\nbatch_size: 20\nefficiency: 75\nboil_size:", do_sample=True, min_length=50, max_length=500)
>>> print(output[0]['generated_text'])
style: Pilsner
batch_size: 20
efficiency: 70
boil_size: 24
boil_time: 60
fermentables:
- name: Pale Ale
type: Grain
amount: 6.5
hops:
- name: Saaz
alpha: 3.5
use: Boil
time: 60
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 30
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 10
amount: 0.06
- name: Saaz
alpha: 3.5
use: Boil
time: 0
amount: 0.06
yeasts:
- name: Safale - American Ale Yeast US-05
amount: 0.11
min_temperature: 12
max_temperature: 25
primary_temp: null
mash_steps:
- step_temp: 65
step_time: 60
miscs: []
```
### See this model in action
This model was used to build https://beerai.net.
|
zunicd/finetuning-sentiment-model-3000-samples | zunicd | 2022-06-28T18:12:43Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-06-28T17:48:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8733333333333333
- name: F1
type: f1
value: 0.8741721854304636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3349
- Accuracy: 0.8733
- F1: 0.8742
## Model description
More information needed
## Intended uses & limitations
More information needed
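A minimal usage sketch (assuming the standard text-classification pipeline; the label names come from the model config and are not documented in this card):
```python
from transformers import pipeline

sentiment = pipeline("text-classification",
                     model="zunicd/finetuning-sentiment-model-3000-samples")
print(sentiment("This movie was surprisingly good, I would watch it again."))
```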
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DeepPavlov/distilrubert-small-cased-conversational | DeepPavlov | 2022-06-28T17:19:09Z | 28,705 | 3 | transformers | ["transformers", "pytorch", "distilbert", "ru", "arxiv:2205.02340", "arxiv:1910.01108", "endpoints_compatible", "region:us"] | null | 2022-06-28T17:15:00Z |
---
language:
- ru
---
# distilrubert-small-cased-conversational
Conversational DistilRuBERT-small \(Russian, cased, 2‑layer, 768‑hidden, 12‑heads, 107M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered as a small copy of [Conversational DistilRuBERT-base](https://huggingface.co/DeepPavlov/distilrubert-base-cased-conversational).
Our DistilRuBERT-small was highly inspired by \[3\], \[4\]. Namely, we used
* KL loss (between teacher and student output logits)
* MLM loss (between tokens labels and student output logits)
* Cosine embedding loss (between averaged six consecutive hidden states from teacher's encoder and one hidden state of the student)
* MSE loss (between averaged six consecutive attention maps from teacher's encoder and one attention map of the student)
The model was trained for about 80 hrs. on 8 nVIDIA Tesla P100-SXM2.0 16Gb.
To evaluate improvements in the inference speed, we ran teacher and student models on random sequences with seq_len=512, batch_size = 16 (for throughput) and batch_size=1 (for latency).
All tests were performed on Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and nVIDIA Tesla P100-SXM2.0 16Gb.
| Model | Size, Mb. | CPU latency, sec.| GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. |
|-------------------------------------------------|------------|------------------|-------------------|------------------------------|------------------------------|
| Teacher (RuBERT-base-cased-conversational) | 679 | 0.655 | 0.031 | 0.3754 | 36.4902 |
| Student (DistilRuBERT-small-cased-conversational)| 409 | 0.1656 | 0.015 | 0.9692 | 71.3553 |
To evaluate model quality, we fine-tuned DistilRuBERT-small on classification, NER and question answering tasks. Scores and archives with fine-tuned models can be found in [DeepPavlov docs](http://docs.deeppavlov.ai/en/master/features/overview.html#models). Also, results could be found in the [paper](https://arxiv.org/abs/2205.02340) Tables 1&2 as well as performance benchmarks and training details.
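A minimal sketch of loading the model for feature extraction with `transformers`; downstream heads for classification, NER, or QA would be added on top, as in the DeepPavlov configurations:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/distilrubert-small-cased-conversational")
model = AutoModel.from_pretrained("DeepPavlov/distilrubert-small-cased-conversational")

inputs = tokenizer("привет, как дела?", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # (batch, seq_len, hidden)
print(hidden_states.shape)
```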
# Citation
If you find the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
|
DeepPavlov/distilrubert-tiny-cased-conversational | DeepPavlov | 2022-06-28T17:10:33Z | 1,401 | 3 | transformers | ["transformers", "pytorch", "distilbert", "ru", "arxiv:2205.02340", "arxiv:1910.01108", "endpoints_compatible", "region:us"] | null | 2022-03-02T23:29:04Z |
---
language:
- ru
---
WARNING: This is `distilrubert-small-cased-conversational` model uploaded with wrong name. This one is the same as [distilrubert-small-cased-conversational](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational). `distilrubert-tiny-cased-conversational` could be found in [distilrubert-tiny-cased-conversational-v1](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational-v1).
# distilrubert-small-cased-conversational
Conversational DistilRuBERT-small \(Russian, cased, 2‑layer, 768‑hidden, 12‑heads, 107M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered as a small copy of [Conversational DistilRuBERT-base](https://huggingface.co/DeepPavlov/distilrubert-base-cased-conversational).
Our DistilRuBERT-small was highly inspired by \[3\], \[4\]. Namely, we used
* KL loss (between teacher and student output logits)
* MLM loss (between tokens labels and student output logits)
* Cosine embedding loss (between averaged six consecutive hidden states from teacher's encoder and one hidden state of the student)
* MSE loss (between averaged six consecutive attention maps from teacher's encoder and one attention map of the student)
The model was trained for about 80 hrs. on 8 nVIDIA Tesla P100-SXM2.0 16Gb.
To evaluate improvements in the inference speed, we ran teacher and student models on random sequences with seq_len=512, batch_size = 16 (for throughput) and batch_size=1 (for latency).
All tests were performed on Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and nVIDIA Tesla P100-SXM2.0 16Gb.
| Model | Size, Mb. | CPU latency, sec.| GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. |
|-------------------------------------------------|------------|------------------|-------------------|------------------------------|------------------------------|
| Teacher (RuBERT-base-cased-conversational) | 679 | 0.655 | 0.031 | 0.3754 | 36.4902 |
| Student (DistilRuBERT-small-cased-conversational)| 409 | 0.1656 | 0.015 | 0.9692 | 71.3553 |
To evaluate model quality, we fine-tuned DistilRuBERT-small on classification, NER and question answering tasks. Scores and archives with fine-tuned models can be found in [DeepPavlov docs](http://docs.deeppavlov.ai/en/master/features/overview.html#models).
# Citation
If you find the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference , Saint-Petersbourg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
|
DeepPavlov/distilrubert-tiny-cased-conversational-5k | DeepPavlov | 2022-06-28T17:05:02Z | 7 | 1 | transformers | ["transformers", "pytorch", "distilbert", "ru", "arxiv:2205.02340", "arxiv:1910.01108", "endpoints_compatible", "region:us"] | null | 2022-06-28T16:24:27Z |
---
language:
- ru
---
# distilrubert-tiny-cased-conversational-5k
Conversational DistilRuBERT-tiny-5k \(Russian, cased, 3‑layers, 264‑hidden, 12‑heads, 3.6M parameters, 5k vocab\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)).
Our DistilRuBERT-tiny-5k is highly inspired by \[3\], \[4\], and its architecture is very close to \[5\]. Namely, we use
* MLM loss (between token labels and student output distribution)
* KL loss (between averaged student and teacher hidden states)
The key feature is:
* reduced vocabulary size (5K vs 30K in *tiny* vs. 100K in *base* and *small*)
Here is comparison between teacher model (`Conversational RuBERT`) and other distilled models.
| Model name | \# params, M | \# vocab, K | Mem., MB |
|---|---|---|---|
| `rubert-base-cased-conversational` | 177.9 | 120 | 679 |
| `distilrubert-base-cased-conversational` | 135.5 | 120 | 517 |
| `distilrubert-small-cased-conversational` | 107.1 | 120 | 409 |
| `cointegrated/rubert-tiny` | 11.8 | 30 | 46 |
| `cointegrated/rubert-tiny2` | 29.3 | 84 | 112 |
| `distilrubert-tiny-cased-conversational-v1` | 10.4 | 31 | 41 |
| `distilrubert-tiny-cased-conversational-5k` | **3.6** | 5 | **14** |
DistilRuBERT-tiny was trained for about 100 hrs. on 7 nVIDIA Tesla P100-SXM2.0 16Gb.
We used `PyTorchBenchmark` from `transformers` to evaluate model's performance and compare it with other pre-trained language models for Russian. All tests were performed on NVIDIA GeForce GTX 1080 Ti and Intel(R) Core(TM) i7-7700K CPU @ 4.20GHz
| Model name | Batch size | Seq len | CPU time, s | GPU time, s | CPU mem, MB | GPU mem, MB |
|---|---|---|---|---|---|---|
| `rubert-base-cased-conversational` | 16 | 512 | 5.283 | 0.1866 | 1550 | 1938 |
| `distilrubert-base-cased-conversational` | 16 | 512 | 2.335 | 0.0553 | 2177 | 2794 |
| `distilrubert-small-cased-conversational` | 16 | 512 | 0.802 | **0.0015** | 1541 | 1810 |
| `cointegrated/rubert-tiny` | 16 | 512 | 0.942 | 0.0022 | 1308 | 2088 |
| `cointegrated/rubert-tiny2` | 16 | 512 | 1.786 | 0.0023 | 3054 | 3848 |
| `distilrubert-tiny-cased-conversational-v1` | 16 | 512 | **0.374** | **0.002** | **714** | **1158** |
| `distilrubert-tiny-cased-conversational-5k` | 16 | 512 | **0.354** | **0.0018** | **664** | **1126** |
To evaluate model quality, we fine-tuned DistilRuBERT-tiny-5k on classification (RuSentiment, ParaPhraser), NER and question answering data sets for Russian. The results could be found in the [paper](https://arxiv.org/abs/2205.02340) Table 4 as well as performance benchmarks and training details.
# Citation
If you find the model useful for your research, we kindly ask you to cite [this](https://arxiv.org/abs/2205.02340) paper:
```
@misc{https://doi.org/10.48550/arxiv.2205.02340,
doi = {10.48550/ARXIV.2205.02340},
url = {https://arxiv.org/abs/2205.02340},
author = {Kolesnikova, Alina and Kuratov, Yuri and Konovalov, Vasily and Burtsev, Mikhail},
keywords = {Computation and Language (cs.CL), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Knowledge Distillation of Russian Language Models with Reduction of Vocabulary},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
\[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
\[2\]: Shavrina T., Shapovalova O. \(2017\) TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: «TAIGA» SYNTAX TREE CORPUS AND PARSER. in proc. of “CORPORA2017”, international conference, Saint-Petersbourg, 2017.
\[3\]: Sanh, V., Debut, L., Chaumond, J., & Wolf, T. \(2019\). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
\[4\]: <https://github.com/huggingface/transformers/tree/master/examples/research_projects/distillation>
\[5\]: <https://habr.com/ru/post/562064/>, <https://huggingface.co/cointegrated/rubert-tiny>
|
fxtentacle/wav2vec2-xls-r-1b-tevr
|
fxtentacle
| 2022-06-28T16:22:18Z | 27 | 14 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"hf-asr-leaderboard",
"de",
"dataset:common_voice",
"arxiv:2206.12693",
"license:apache-2.0",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2022-06-02T09:09:53Z |
---
language: de
datasets:
- common_voice
inference: false
metrics:
- wer
- cer
tags:
- audio
- automatic-speech-recognition
- speech
- hf-asr-leaderboard
license: apache-2.0
model-index:
- name: wav2vec 2.0 XLS-R 1B + TEVR tokens + 5-gram LM by Hajo Nils Krabbenhöft
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice de
type: common_voice
args: de
metrics:
- name: Test WER
type: wer
value: 3.6433399042523233
- name: Test CER
type: cer
value: 1.5398893560981173
---
## Overview
This folder contains a fully trained German speech recognition pipeline
consisting of an acoustic model using the new wav2vec 2.0 XLS-R 1B **TEVR** architecture
and a 5-gram KenLM language model.
For an explanation of the TEVR enhancements and their motivation, please see our paper:
[TEVR: Improving Speech Recognition by Token Entropy Variance Reduction](https://arxiv.org/abs/2206.12693).
[](https://paperswithcode.com/sota/speech-recognition-on-common-voice-german?p=tevr-improving-speech-recognition-by-token)
This pipeline scores a very competitive (as of June 2022) **word error rate of 3.64%** on CommonVoice German.
The character error rate was 1.54%.
## Citation
If you use this ASR pipeline for research, please cite:
```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.12693,
doi = {10.48550/ARXIV.2206.12693},
url = {https://arxiv.org/abs/2206.12693},
author = {Krabbenhöft, Hajo Nils and Barth, Erhardt},
keywords = {Computation and Language (cs.CL), Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering, F.2.1; I.2.6; I.2.7},
title = {TEVR: Improving Speech Recognition by Token Entropy Variance Reduction},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
## TEVR Tokenizer Creation / Testing
See https://huggingface.co/fxtentacle/tevr-token-entropy-predictor-de for:
- our trained ByT5 model used to calculate the entropies in the paper
- a Jupyter Notebook to generate a TEVR Tokenizer from a text corpus
- a Jupyter Notebook to generate the illustration image in the paper
## Evaluation
To evaluate this pipeline yourself and/or on your own data, see the `HF Eval Script.ipynb` Jupyter Notebook
or use the following python script:
```python
!pip install --quiet --root-user-action=ignore --upgrade pip
!pip install --quiet --root-user-action=ignore "datasets>=1.18.3" "transformers==4.11.3" librosa jiwer huggingface_hub
!pip install --quiet --root-user-action=ignore https://github.com/kpu/kenlm/archive/master.zip pyctcdecode
!pip install --quiet --root-user-action=ignore --upgrade transformers
!pip install --quiet --root-user-action=ignore torch_audiomentations audiomentations
```
```python
from datasets import load_dataset, Audio, load_metric
from transformers import AutoModelForCTC, Wav2Vec2ProcessorWithLM
import torchaudio.transforms as T
import torch
import unicodedata
import numpy as np
import re
# load testing dataset
testing_dataset = load_dataset("common_voice", "de", split="test")
# replace invisible characters with space
allchars = list(set([c for t in testing_dataset['sentence'] for c in list(t)]))
map_to_space = [c for c in allchars if unicodedata.category(c)[0] in 'PSZ' and c not in 'ʻ-']
replacements = ''.maketrans(''.join(map_to_space), ''.join(' ' for i in range(len(map_to_space))), '\'ʻ')
def text_fix(text):
# change ß to ss
text = text.replace('ß','ss')
# convert dash to space and remove double-space
text = text.replace('-',' ').replace(' ',' ').replace(' ',' ')
# make lowercase
text = text.lower()
# remap all invisible characters to space
text = text.translate(replacements).strip()
# for easier comparison to Zimmermeister, replace unrepresentable characters with ?
text = re.sub("[âşěýňעảנźțãòàǔł̇æồאắîשðșęūāñë生בøúıśžçćńřğ]+","?",text)
# remove multiple spaces (again)
text = ' '.join([w for w in text.split(' ') if w != ''])
return text
# load model
model = AutoModelForCTC.from_pretrained("fxtentacle/wav2vec2-xls-r-1b-tevr")
model.to('cuda')
# load processor
class HajoProcessor(Wav2Vec2ProcessorWithLM):
@staticmethod
def get_missing_alphabet_tokens(decoder, tokenizer):
return []
processor = HajoProcessor.from_pretrained("fxtentacle/wav2vec2-xls-r-1b-tevr")
# this function will be called for each WAV file
def predict_single_audio(batch, image=False):
audio = batch['audio']['array']
# resample, if needed
if batch['audio']['sampling_rate'] != 16000:
audio = T.Resample(orig_freq=batch['audio']['sampling_rate'], new_freq=16000)(torch.from_numpy(audio)).numpy()
# normalize
audio = (audio - audio.mean()) / np.sqrt(audio.var() + 1e-7)
# ask HF processor to prepare audio for GPU eval
input_values = processor(audio, return_tensors="pt", sampling_rate=16_000).input_values
# call model on GPU
with torch.no_grad():
logits = model(input_values.to('cuda')).logits.cpu().numpy()[0]
# ask HF processor to decode logits
decoded = processor.decode(logits, beam_width=500)
# return as dictionary
return { 'groundtruth': text_fix(batch['sentence']), 'prediction': decoded.text }
# process all audio files
all_predictions = testing_dataset.map(predict_single_audio, remove_columns=testing_dataset.column_names)
# print results
print('WER', load_metric("wer").compute(predictions=all_predictions['prediction'], references=all_predictions['groundtruth'])*100.0, '%')
print('CER', load_metric("cer").compute(predictions=all_predictions['prediction'], references=all_predictions['groundtruth'])*100.0, '%')
```
WER 3.6433399042523233 %
CER 1.5398893560981173 %
|
Parkerboys211/IDK
|
Parkerboys211
| 2022-06-28T15:45:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-28T15:44:55Z |
---
license: isc
---
can someone teach me how to do this pls help me
|
Salvatore/bert-finetuned-ner
|
Salvatore
| 2022-06-28T15:24:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-16T09:09:26Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0997
- Proteinmutation F1: 0.1309
- Snp F1: 0.1953
- Dnamutation F1: 0.3778
- Precision: 0.2380
- Recall: 0.2416
- F1: 0.2398
- Accuracy: 0.9703
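As a rough usage sketch (not part of the original card), the fine-tuned checkpoint can be queried through the token-classification pipeline; the example sentence is illustrative only.
```python
from transformers import pipeline

# Hypothetical inference sketch for the mutation-mention tagger.
ner = pipeline(
    "token-classification",
    model="Salvatore/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("The BRCA1 c.68_69delAG deletion and the rs1042522 SNP were detected."))
```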
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Proteinmutation F1 | Snp F1 | Dnamutation F1 | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:------:|:--------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 324 | 0.0533 | 0.0396 | 0.2830 | 0.4667 | 0.2334 | 0.3221 | 0.2707 | 0.9788 |
| 0.1072 | 2.0 | 648 | 0.0437 | 0.6065 | 0.4906 | 0.5009 | 0.4802 | 0.6348 | 0.5468 | 0.9868 |
| 0.1072 | 3.0 | 972 | 0.0592 | 0.1379 | 0.2485 | 0.2005 | 0.1639 | 0.2228 | 0.1889 | 0.9731 |
| 0.0573 | 4.0 | 1296 | 0.0722 | 0.0749 | 0.2530 | 0.4692 | 0.2705 | 0.2959 | 0.2826 | 0.9749 |
| 0.0431 | 5.0 | 1620 | 0.0766 | 0.1574 | 0.1847 | 0.2540 | 0.1766 | 0.2285 | 0.1992 | 0.9723 |
| 0.0431 | 6.0 | 1944 | 0.0805 | 0.1099 | 0.2202 | 0.2383 | 0.1657 | 0.2097 | 0.1851 | 0.9715 |
| 0.0396 | 7.0 | 2268 | 0.0886 | 0.1337 | 0.2138 | 0.4318 | 0.2683 | 0.2678 | 0.2680 | 0.9724 |
| 0.0354 | 8.0 | 2592 | 0.0927 | 0.1535 | 0.2113 | 0.3769 | 0.2505 | 0.2528 | 0.2516 | 0.9714 |
| 0.0354 | 9.0 | 2916 | 0.0978 | 0.1011 | 0.2540 | 0.3812 | 0.2495 | 0.2528 | 0.2512 | 0.9705 |
| 0.0312 | 10.0 | 3240 | 0.0997 | 0.1309 | 0.1953 | 0.3778 | 0.2380 | 0.2416 | 0.2398 | 0.9703 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2
- Datasets 2.0.0
- Tokenizers 0.12.1
|
mmdjiji/bert-chinese-idioms
|
mmdjiji
| 2022-06-28T14:12:31Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-28T02:02:33Z |
---
license: gpl-3.0
---
For details, see [github:mmdjiji/bert-chinese-idioms](https://github.com/mmdjiji/bert-chinese-idioms).
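A minimal fill-mask sketch (not part of the original card); the `[MASK]` token and the example idiom are illustrative assumptions:
```python
from transformers import pipeline

# Hypothetical usage sketch: predict the masked character of a Chinese idiom.
fill = pipeline("fill-mask", model="mmdjiji/bert-chinese-idioms")
for pred in fill("一[MASK]千金"):  # illustrative idiom with one masked character
    print(pred["token_str"], round(pred["score"], 3))
```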
|
huggingtweets/g__j
|
huggingtweets
| 2022-06-28T13:36:16Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-28T13:36:09Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/959389610978742273/jfOMGQ1B_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Greg Jackson</div>
<div style="text-align: center; font-size: 14px;">@g__j</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Greg Jackson.
| Data | Greg Jackson |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 187 |
| Short tweets | 179 |
| Tweets kept | 2884 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2sl53oes/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @g__j's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/stwh74do) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/stwh74do/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/g__j')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
alleniver/my_test_cat
|
alleniver
| 2022-06-28T12:12:49Z | 0 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-06-28T12:12:21Z |
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5
|
gary109
| 2022-06-28T11:49:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-27T14:51:07Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v5
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0163
- Wer: 0.6622
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8867 | 1.0 | 376 | 1.0382 | 0.6821 |
| 0.8861 | 2.0 | 752 | 1.0260 | 0.6686 |
| 0.8682 | 3.0 | 1128 | 1.0358 | 0.6604 |
| 0.8662 | 4.0 | 1504 | 1.0234 | 0.6665 |
| 0.8463 | 5.0 | 1880 | 1.0333 | 0.6666 |
| 0.8573 | 6.0 | 2256 | 1.0163 | 0.6622 |
| 0.8628 | 7.0 | 2632 | 1.0209 | 0.6551 |
| 0.8493 | 8.0 | 3008 | 1.0525 | 0.6582 |
| 0.8371 | 9.0 | 3384 | 1.0409 | 0.6515 |
| 0.8229 | 10.0 | 3760 | 1.0597 | 0.6523 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
twieland/MIX3_ja-en_helsinki
|
twieland
| 2022-06-28T11:46:58Z | 113 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-22T00:54:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: MIX3_ja-en_helsinki
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MIX3_ja-en_helsinki
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ja-en](https://huggingface.co/Helsinki-NLP/opus-mt-ja-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4832
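A minimal inference sketch (not part of the original card); the Japanese sentence is illustrative only:
```python
from transformers import pipeline

# Hypothetical usage sketch: Japanese-to-English translation with the fine-tuned Marian model.
translator = pipeline("translation", model="twieland/MIX3_ja-en_helsinki")
print(translator("猫が好きです。", max_length=64)[0]["translation_text"])
```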
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 2.8699 | 0.01 | 5000 | 2.3465 |
| 2.6168 | 0.02 | 10000 | 2.2205 |
| 2.5083 | 0.03 | 15000 | 2.2382 |
| 2.4359 | 0.04 | 20000 | 2.1670 |
| 2.3821 | 0.06 | 25000 | 2.1122 |
| 2.3358 | 0.07 | 30000 | 2.0902 |
| 2.3045 | 0.08 | 35000 | 2.0461 |
| 2.2782 | 0.09 | 40000 | 2.0290 |
| 2.2481 | 0.1 | 45000 | 1.9910 |
| 2.2267 | 0.11 | 50000 | 2.0059 |
| 2.2056 | 0.12 | 55000 | 1.9858 |
| 2.1903 | 0.13 | 60000 | 1.9725 |
| 2.173 | 0.15 | 65000 | 1.9797 |
| 2.154 | 0.16 | 70000 | 1.9654 |
| 2.1429 | 0.17 | 75000 | 1.9567 |
| 2.1304 | 0.18 | 80000 | 1.9348 |
| 2.1232 | 0.19 | 85000 | 1.9361 |
| 2.116 | 0.2 | 90000 | 1.9277 |
| 2.1016 | 0.21 | 95000 | 1.9193 |
| 2.0984 | 0.22 | 100000 | 1.9064 |
| 2.0797 | 0.24 | 105000 | 1.9177 |
| 2.0767 | 0.25 | 110000 | 1.8975 |
| 2.0642 | 0.26 | 115000 | 1.8782 |
| 2.0595 | 0.27 | 120000 | 1.9012 |
| 2.0533 | 0.28 | 125000 | 1.8977 |
| 2.044 | 0.29 | 130000 | 1.8984 |
| 2.0374 | 0.3 | 135000 | 1.9221 |
| 2.0305 | 0.31 | 140000 | 1.9243 |
| 2.02 | 0.32 | 145000 | 1.8773 |
| 2.0195 | 0.34 | 150000 | 1.8676 |
| 2.0151 | 0.35 | 155000 | 1.8637 |
| 2.0065 | 0.36 | 160000 | 1.8556 |
| 2.0037 | 0.37 | 165000 | 1.8399 |
| 1.9963 | 0.38 | 170000 | 1.8452 |
| 1.9878 | 0.39 | 175000 | 1.8644 |
| 1.9871 | 0.4 | 180000 | 1.8576 |
| 1.9779 | 0.41 | 185000 | 1.8509 |
| 1.9721 | 0.43 | 190000 | 1.8405 |
| 1.9724 | 0.44 | 195000 | 1.8594 |
| 1.9685 | 0.45 | 200000 | 1.8540 |
| 1.9634 | 0.46 | 205000 | 1.8694 |
| 1.9583 | 0.47 | 210000 | 1.8591 |
| 1.9557 | 0.48 | 215000 | 1.8539 |
| 1.9494 | 0.49 | 220000 | 1.8673 |
| 1.9484 | 0.5 | 225000 | 1.8021 |
| 1.9395 | 0.52 | 230000 | 1.8309 |
| 1.9384 | 0.53 | 235000 | 1.7933 |
| 1.937 | 0.54 | 240000 | 1.8199 |
| 1.9315 | 0.55 | 245000 | 1.8065 |
| 1.9276 | 0.56 | 250000 | 1.7857 |
| 1.9248 | 0.57 | 255000 | 1.8207 |
| 1.9195 | 0.58 | 260000 | 1.7898 |
| 1.9187 | 0.59 | 265000 | 1.8097 |
| 1.9138 | 0.6 | 270000 | 1.7909 |
| 1.9094 | 0.62 | 275000 | 1.7995 |
| 1.9098 | 0.63 | 280000 | 1.8165 |
| 1.9038 | 0.64 | 285000 | 1.8132 |
| 1.9034 | 0.65 | 290000 | 1.7951 |
| 1.899 | 0.66 | 295000 | 1.7880 |
| 1.8965 | 0.67 | 300000 | 1.7953 |
| 1.8941 | 0.68 | 305000 | 1.7986 |
| 1.8919 | 0.69 | 310000 | 1.7964 |
| 1.8875 | 0.71 | 315000 | 1.8041 |
| 1.884 | 0.72 | 320000 | 1.7764 |
| 1.8798 | 0.73 | 325000 | 1.8019 |
| 1.8801 | 0.74 | 330000 | 1.7790 |
| 1.8809 | 0.75 | 335000 | 1.7849 |
| 1.8736 | 0.76 | 340000 | 1.7800 |
| 1.8727 | 0.77 | 345000 | 1.7900 |
| 1.8722 | 0.78 | 350000 | 1.7727 |
| 1.8699 | 0.8 | 355000 | 1.7597 |
| 1.8672 | 0.81 | 360000 | 1.7824 |
| 1.8638 | 0.82 | 365000 | 1.7674 |
| 1.8609 | 0.83 | 370000 | 1.7715 |
| 1.8584 | 0.84 | 375000 | 1.7694 |
| 1.8568 | 0.85 | 380000 | 1.7776 |
| 1.8523 | 0.86 | 385000 | 1.7697 |
| 1.8584 | 0.87 | 390000 | 1.7436 |
| 1.8474 | 0.88 | 395000 | 1.7644 |
| 1.8492 | 0.9 | 400000 | 1.7732 |
| 1.8465 | 0.91 | 405000 | 1.7611 |
| 1.846 | 0.92 | 410000 | 1.7717 |
| 1.8431 | 0.93 | 415000 | 1.7514 |
| 1.8402 | 0.94 | 420000 | 1.7353 |
| 1.8398 | 0.95 | 425000 | 1.7720 |
| 1.8314 | 0.96 | 430000 | 1.7728 |
| 1.8322 | 0.97 | 435000 | 1.7491 |
| 1.8284 | 0.99 | 440000 | 1.7561 |
| 1.8301 | 1.0 | 445000 | 1.7499 |
| 1.8182 | 1.01 | 450000 | 1.7514 |
| 1.8111 | 1.02 | 455000 | 1.7596 |
| 1.8116 | 1.03 | 460000 | 1.7455 |
| 1.8098 | 1.04 | 465000 | 1.7495 |
| 1.809 | 1.05 | 470000 | 1.7446 |
| 1.8088 | 1.06 | 475000 | 1.7290 |
| 1.8127 | 1.08 | 480000 | 1.7453 |
| 1.8051 | 1.09 | 485000 | 1.7495 |
| 1.8026 | 1.1 | 490000 | 1.7453 |
| 1.8028 | 1.11 | 495000 | 1.7615 |
| 1.8046 | 1.12 | 500000 | 1.7491 |
| 1.8052 | 1.13 | 505000 | 1.7280 |
| 1.7997 | 1.14 | 510000 | 1.7482 |
| 1.7976 | 1.15 | 515000 | 1.7368 |
| 1.7981 | 1.16 | 520000 | 1.7354 |
| 1.7949 | 1.18 | 525000 | 1.7076 |
| 1.7943 | 1.19 | 530000 | 1.7020 |
| 1.7911 | 1.2 | 535000 | 1.7121 |
| 1.7909 | 1.21 | 540000 | 1.7170 |
| 1.7926 | 1.22 | 545000 | 1.7310 |
| 1.7856 | 1.23 | 550000 | 1.7218 |
| 1.7875 | 1.24 | 555000 | 1.7362 |
| 1.7801 | 1.25 | 560000 | 1.7484 |
| 1.7854 | 1.27 | 565000 | 1.7466 |
| 1.7799 | 1.28 | 570000 | 1.7248 |
| 1.7823 | 1.29 | 575000 | 1.7355 |
| 1.7765 | 1.3 | 580000 | 1.7188 |
| 1.7779 | 1.31 | 585000 | 1.6993 |
| 1.7751 | 1.32 | 590000 | 1.7154 |
| 1.7762 | 1.33 | 595000 | 1.7348 |
| 1.7725 | 1.34 | 600000 | 1.7272 |
| 1.7701 | 1.36 | 605000 | 1.7157 |
| 1.7644 | 1.37 | 610000 | 1.7161 |
| 1.7707 | 1.38 | 615000 | 1.6961 |
| 1.764 | 1.39 | 620000 | 1.6930 |
| 1.7639 | 1.4 | 625000 | 1.6927 |
| 1.7654 | 1.41 | 630000 | 1.6989 |
| 1.7623 | 1.42 | 635000 | 1.6892 |
| 1.7598 | 1.43 | 640000 | 1.6911 |
| 1.7575 | 1.44 | 645000 | 1.7199 |
| 1.7574 | 1.46 | 650000 | 1.6992 |
| 1.7526 | 1.47 | 655000 | 1.6981 |
| 1.7556 | 1.48 | 660000 | 1.6860 |
| 1.7558 | 1.49 | 665000 | 1.7099 |
| 1.7539 | 1.5 | 670000 | 1.6950 |
| 1.7454 | 1.51 | 675000 | 1.6999 |
| 1.748 | 1.52 | 680000 | 1.6871 |
| 1.7476 | 1.53 | 685000 | 1.6884 |
| 1.7493 | 1.55 | 690000 | 1.6984 |
| 1.745 | 1.56 | 695000 | 1.6999 |
| 1.7397 | 1.57 | 700000 | 1.7036 |
| 1.7429 | 1.58 | 705000 | 1.7223 |
| 1.7367 | 1.59 | 710000 | 1.7111 |
| 1.7403 | 1.6 | 715000 | 1.6691 |
| 1.7361 | 1.61 | 720000 | 1.6693 |
| 1.737 | 1.62 | 725000 | 1.6884 |
| 1.7347 | 1.63 | 730000 | 1.6641 |
| 1.7323 | 1.65 | 735000 | 1.6628 |
| 1.7329 | 1.66 | 740000 | 1.6759 |
| 1.7292 | 1.67 | 745000 | 1.6654 |
| 1.7275 | 1.68 | 750000 | 1.6738 |
| 1.7266 | 1.69 | 755000 | 1.6792 |
| 1.7259 | 1.7 | 760000 | 1.6752 |
| 1.7231 | 1.71 | 765000 | 1.6641 |
| 1.7238 | 1.72 | 770000 | 1.6676 |
| 1.7223 | 1.74 | 775000 | 1.6563 |
| 1.722 | 1.75 | 780000 | 1.6541 |
| 1.7195 | 1.76 | 785000 | 1.6560 |
| 1.7171 | 1.77 | 790000 | 1.6786 |
| 1.7187 | 1.78 | 795000 | 1.6434 |
| 1.7186 | 1.79 | 800000 | 1.6538 |
| 1.7115 | 1.8 | 805000 | 1.6535 |
| 1.7119 | 1.81 | 810000 | 1.6738 |
| 1.7106 | 1.83 | 815000 | 1.6597 |
| 1.7088 | 1.84 | 820000 | 1.6486 |
| 1.7079 | 1.85 | 825000 | 1.6576 |
| 1.7062 | 1.86 | 830000 | 1.6676 |
| 1.7084 | 1.87 | 835000 | 1.6449 |
| 1.7059 | 1.88 | 840000 | 1.6515 |
| 1.7057 | 1.89 | 845000 | 1.6609 |
| 1.7021 | 1.9 | 850000 | 1.6482 |
| 1.7005 | 1.91 | 855000 | 1.6653 |
| 1.6988 | 1.93 | 860000 | 1.6801 |
| 1.6964 | 1.94 | 865000 | 1.6830 |
| 1.6954 | 1.95 | 870000 | 1.6589 |
| 1.693 | 1.96 | 875000 | 1.6553 |
| 1.689 | 1.97 | 880000 | 1.6554 |
| 1.69 | 1.98 | 885000 | 1.6424 |
| 1.6893 | 1.99 | 890000 | 1.6628 |
| 1.6772 | 2.0 | 895000 | 1.6709 |
| 1.6703 | 2.02 | 900000 | 1.6627 |
| 1.6726 | 2.03 | 905000 | 1.6612 |
| 1.669 | 2.04 | 910000 | 1.6595 |
| 1.6696 | 2.05 | 915000 | 1.6427 |
| 1.6672 | 2.06 | 920000 | 1.6497 |
| 1.669 | 2.07 | 925000 | 1.6288 |
| 1.6675 | 2.08 | 930000 | 1.6443 |
| 1.6685 | 2.09 | 935000 | 1.6316 |
| 1.6671 | 2.11 | 940000 | 1.6451 |
| 1.6673 | 2.12 | 945000 | 1.6313 |
| 1.6649 | 2.13 | 950000 | 1.6363 |
| 1.6655 | 2.14 | 955000 | 1.6440 |
| 1.6637 | 2.15 | 960000 | 1.6238 |
| 1.6632 | 2.16 | 965000 | 1.6226 |
| 1.6599 | 2.17 | 970000 | 1.6171 |
| 1.6602 | 2.18 | 975000 | 1.6466 |
| 1.658 | 2.19 | 980000 | 1.6341 |
| 1.6571 | 2.21 | 985000 | 1.6500 |
| 1.6572 | 2.22 | 990000 | 1.6225 |
| 1.6572 | 2.23 | 995000 | 1.6296 |
| 1.6552 | 2.24 | 1000000 | 1.6437 |
| 1.6548 | 2.25 | 1005000 | 1.6162 |
| 1.6552 | 2.26 | 1010000 | 1.6223 |
| 1.6544 | 2.27 | 1015000 | 1.6355 |
| 1.6464 | 2.28 | 1020000 | 1.6250 |
| 1.652 | 2.3 | 1025000 | 1.6217 |
| 1.6481 | 2.31 | 1030000 | 1.6079 |
| 1.6466 | 2.32 | 1035000 | 1.6110 |
| 1.6462 | 2.33 | 1040000 | 1.6210 |
| 1.6448 | 2.34 | 1045000 | 1.5993 |
| 1.6461 | 2.35 | 1050000 | 1.6096 |
| 1.6396 | 2.36 | 1055000 | 1.6137 |
| 1.644 | 2.37 | 1060000 | 1.6189 |
| 1.6396 | 2.39 | 1065000 | 1.6211 |
| 1.639 | 2.4 | 1070000 | 1.6149 |
| 1.6358 | 2.41 | 1075000 | 1.6144 |
| 1.6356 | 2.42 | 1080000 | 1.6018 |
| 1.6364 | 2.43 | 1085000 | 1.5999 |
| 1.6352 | 2.44 | 1090000 | 1.6095 |
| 1.634 | 2.45 | 1095000 | 1.6114 |
| 1.6279 | 2.46 | 1100000 | 1.6156 |
| 1.6272 | 2.47 | 1105000 | 1.6124 |
| 1.6319 | 2.49 | 1110000 | 1.6046 |
| 1.6276 | 2.5 | 1115000 | 1.6152 |
| 1.6285 | 2.51 | 1120000 | 1.6129 |
| 1.6242 | 2.52 | 1125000 | 1.5984 |
| 1.6261 | 2.53 | 1130000 | 1.6116 |
| 1.623 | 2.54 | 1135000 | 1.6061 |
| 1.6203 | 2.55 | 1140000 | 1.6182 |
| 1.62 | 2.56 | 1145000 | 1.5887 |
| 1.6177 | 2.58 | 1150000 | 1.5731 |
| 1.6172 | 2.59 | 1155000 | 1.5990 |
| 1.6179 | 2.6 | 1160000 | 1.5965 |
| 1.6206 | 2.61 | 1165000 | 1.6000 |
| 1.6156 | 2.62 | 1170000 | 1.5873 |
| 1.6124 | 2.63 | 1175000 | 1.5899 |
| 1.613 | 2.64 | 1180000 | 1.5910 |
| 1.6134 | 2.65 | 1185000 | 1.6017 |
| 1.609 | 2.67 | 1190000 | 1.5822 |
| 1.6084 | 2.68 | 1195000 | 1.5906 |
| 1.6101 | 2.69 | 1200000 | 1.6218 |
| 1.6077 | 2.7 | 1205000 | 1.6149 |
| 1.6057 | 2.71 | 1210000 | 1.5994 |
| 1.6018 | 2.72 | 1215000 | 1.5839 |
| 1.6049 | 2.73 | 1220000 | 1.5864 |
| 1.6012 | 2.74 | 1225000 | 1.5994 |
| 1.6013 | 2.75 | 1230000 | 1.5821 |
| 1.5957 | 2.77 | 1235000 | 1.5964 |
| 1.5971 | 2.78 | 1240000 | 1.5897 |
| 1.5967 | 2.79 | 1245000 | 1.5774 |
| 1.5927 | 2.8 | 1250000 | 1.5861 |
| 1.5954 | 2.81 | 1255000 | 1.5789 |
| 1.5937 | 2.82 | 1260000 | 1.5739 |
| 1.5895 | 2.83 | 1265000 | 1.5701 |
| 1.5912 | 2.84 | 1270000 | 1.5622 |
| 1.5922 | 2.86 | 1275000 | 1.5730 |
| 1.5883 | 2.87 | 1280000 | 1.5775 |
| 1.5864 | 2.88 | 1285000 | 1.5726 |
| 1.5837 | 2.89 | 1290000 | 1.5679 |
| 1.5824 | 2.9 | 1295000 | 1.5683 |
| 1.5817 | 2.91 | 1300000 | 1.5508 |
| 1.5778 | 2.92 | 1305000 | 1.5620 |
| 1.5822 | 2.93 | 1310000 | 1.5556 |
| 1.5783 | 2.95 | 1315000 | 1.5693 |
| 1.5751 | 2.96 | 1320000 | 1.5781 |
| 1.5716 | 2.97 | 1325000 | 1.5655 |
| 1.5765 | 2.98 | 1330000 | 1.5528 |
| 1.5728 | 2.99 | 1335000 | 1.5748 |
| 1.5672 | 3.0 | 1340000 | 1.5597 |
| 1.5467 | 3.01 | 1345000 | 1.5461 |
| 1.547 | 3.02 | 1350000 | 1.5516 |
| 1.5462 | 3.03 | 1355000 | 1.5519 |
| 1.5464 | 3.05 | 1360000 | 1.5593 |
| 1.5457 | 3.06 | 1365000 | 1.5576 |
| 1.5441 | 3.07 | 1370000 | 1.5653 |
| 1.544 | 3.08 | 1375000 | 1.5662 |
| 1.5467 | 3.09 | 1380000 | 1.5611 |
| 1.5439 | 3.1 | 1385000 | 1.5635 |
| 1.5449 | 3.11 | 1390000 | 1.5467 |
| 1.5417 | 3.12 | 1395000 | 1.5495 |
| 1.5428 | 3.14 | 1400000 | 1.5552 |
| 1.5432 | 3.15 | 1405000 | 1.5347 |
| 1.5401 | 3.16 | 1410000 | 1.5394 |
| 1.5391 | 3.17 | 1415000 | 1.5497 |
| 1.539 | 3.18 | 1420000 | 1.5431 |
| 1.5368 | 3.19 | 1425000 | 1.5479 |
| 1.5365 | 3.2 | 1430000 | 1.5513 |
| 1.5327 | 3.21 | 1435000 | 1.5467 |
| 1.5337 | 3.23 | 1440000 | 1.5477 |
| 1.5317 | 3.24 | 1445000 | 1.5398 |
| 1.5315 | 3.25 | 1450000 | 1.5481 |
| 1.532 | 3.26 | 1455000 | 1.5385 |
| 1.5312 | 3.27 | 1460000 | 1.5520 |
| 1.5328 | 3.28 | 1465000 | 1.5423 |
| 1.5288 | 3.29 | 1470000 | 1.5489 |
| 1.5271 | 3.3 | 1475000 | 1.5395 |
| 1.5273 | 3.31 | 1480000 | 1.5335 |
| 1.5235 | 3.33 | 1485000 | 1.5381 |
| 1.5224 | 3.34 | 1490000 | 1.5289 |
| 1.5206 | 3.35 | 1495000 | 1.5331 |
| 1.5189 | 3.36 | 1500000 | 1.5343 |
| 1.5152 | 3.37 | 1505000 | 1.5246 |
| 1.5225 | 3.38 | 1510000 | 1.5280 |
| 1.5168 | 3.39 | 1515000 | 1.5315 |
| 1.5161 | 3.4 | 1520000 | 1.5284 |
| 1.5111 | 3.42 | 1525000 | 1.5278 |
| 1.5154 | 3.43 | 1530000 | 1.5148 |
| 1.515 | 3.44 | 1535000 | 1.5286 |
| 1.5117 | 3.45 | 1540000 | 1.5291 |
| 1.5099 | 3.46 | 1545000 | 1.5320 |
| 1.5097 | 3.47 | 1550000 | 1.5323 |
| 1.5075 | 3.48 | 1555000 | 1.5157 |
| 1.5059 | 3.49 | 1560000 | 1.5214 |
| 1.5011 | 3.51 | 1565000 | 1.5199 |
| 1.5074 | 3.52 | 1570000 | 1.5114 |
| 1.5033 | 3.53 | 1575000 | 1.5145 |
| 1.5009 | 3.54 | 1580000 | 1.5184 |
| 1.4994 | 3.55 | 1585000 | 1.5125 |
| 1.5041 | 3.56 | 1590000 | 1.5048 |
| 1.5002 | 3.57 | 1595000 | 1.5156 |
| 1.4967 | 3.58 | 1600000 | 1.5176 |
| 1.4923 | 3.59 | 1605000 | 1.5128 |
| 1.495 | 3.61 | 1610000 | 1.5188 |
| 1.4929 | 3.62 | 1615000 | 1.5149 |
| 1.4921 | 3.63 | 1620000 | 1.5097 |
| 1.4916 | 3.64 | 1625000 | 1.5161 |
| 1.4852 | 3.65 | 1630000 | 1.5134 |
| 1.4881 | 3.66 | 1635000 | 1.5101 |
| 1.4873 | 3.67 | 1640000 | 1.5027 |
| 1.4911 | 3.68 | 1645000 | 1.4968 |
| 1.488 | 3.7 | 1650000 | 1.4962 |
| 1.4842 | 3.71 | 1655000 | 1.5030 |
| 1.4829 | 3.72 | 1660000 | 1.5041 |
| 1.4816 | 3.73 | 1665000 | 1.5076 |
| 1.479 | 3.74 | 1670000 | 1.5029 |
| 1.4768 | 3.75 | 1675000 | 1.5053 |
| 1.4769 | 3.76 | 1680000 | 1.5026 |
| 1.4781 | 3.77 | 1685000 | 1.5016 |
| 1.4781 | 3.79 | 1690000 | 1.5034 |
| 1.4777 | 3.8 | 1695000 | 1.4976 |
| 1.4736 | 3.81 | 1700000 | 1.5002 |
| 1.4715 | 3.82 | 1705000 | 1.4995 |
| 1.4716 | 3.83 | 1710000 | 1.4996 |
| 1.4648 | 3.84 | 1715000 | 1.4952 |
| 1.4711 | 3.85 | 1720000 | 1.4934 |
| 1.4682 | 3.86 | 1725000 | 1.4965 |
| 1.4659 | 3.87 | 1730000 | 1.4932 |
| 1.4689 | 3.89 | 1735000 | 1.4920 |
| 1.4656 | 3.9 | 1740000 | 1.4910 |
| 1.4666 | 3.91 | 1745000 | 1.4893 |
| 1.4611 | 3.92 | 1750000 | 1.4888 |
| 1.4623 | 3.93 | 1755000 | 1.4898 |
| 1.4637 | 3.94 | 1760000 | 1.4909 |
| 1.4585 | 3.95 | 1765000 | 1.4858 |
| 1.4586 | 3.96 | 1770000 | 1.4847 |
| 1.4579 | 3.98 | 1775000 | 1.4841 |
| 1.458 | 3.99 | 1780000 | 1.4840 |
| 1.4572 | 4.0 | 1785000 | 1.4832 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
facebook/regnet-y-032
|
facebook
| 2022-06-28T11:39:30Z | 68 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"regnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-18T15:35:16Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-032")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-032")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
|
facebook/regnet-y-160
|
facebook
| 2022-06-28T11:39:06Z | 86 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"regnet",
"image-classification",
"vision",
"dataset:imagenet-1k",
"arxiv:2003.13678",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-18T15:40:45Z |
---
license: apache-2.0
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# RegNet
RegNet model trained on imagenet-1k. It was introduced in the paper [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678) and first released in [this repository](https://github.com/facebookresearch/pycls).
Disclaimer: The team releasing RegNet did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
The authors design search spaces to perform Neural Architecture Search (NAS). They first start from a high dimensional search space and iteratively reduce the search space by empirically applying constraints based on the best-performing models sampled by the current search space.

## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=regnet) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
>>> from transformers import AutoFeatureExtractor, RegNetForImageClassification
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/regnet-y-160")
>>> model = RegNetForImageClassification.from_pretrained("facebook/regnet-y-160")
>>> inputs = feature_extractor(image, return_tensors="pt")
>>> with torch.no_grad():
... logits = model(**inputs).logits
>>> # model predicts one of the 1000 ImageNet classes
>>> predicted_label = logits.argmax(-1).item()
>>> print(model.config.id2label[predicted_label])
'tabby, tabby cat'
```
For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/regnet).
|
aspis/swin-finetuned-food101
|
aspis
| 2022-06-28T11:02:36Z | 105 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:food101",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-09T10:48:09Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- food101
metrics:
- accuracy
model-index:
- name: swin-finetuned-food101
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: food101
type: food101
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9210297029702971
- task:
type: image-classification
name: Image Classification
dataset:
name: food101
type: food101
config: default
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.9135841584158416
verified: true
- name: Precision Macro
type: precision
value: 0.9151645786633058
verified: true
- name: Precision Micro
type: precision
value: 0.9135841584158416
verified: true
- name: Precision Weighted
type: precision
value: 0.915164578663306
verified: true
- name: Recall Macro
type: recall
value: 0.9135841584158414
verified: true
- name: Recall Micro
type: recall
value: 0.9135841584158416
verified: true
- name: Recall Weighted
type: recall
value: 0.9135841584158416
verified: true
- name: F1 Macro
type: f1
value: 0.9138785016966742
verified: true
- name: F1 Micro
type: f1
value: 0.9135841584158415
verified: true
- name: F1 Weighted
type: f1
value: 0.9138785016966743
verified: true
- name: loss
type: loss
value: 0.30761435627937317
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-finetuned-food101
This model is a fine-tuned version of [microsoft/swin-base-patch4-window7-224](https://huggingface.co/microsoft/swin-base-patch4-window7-224) on the food101 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2772
- Accuracy: 0.9210
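A minimal inference sketch (not part of the original card); the image path is a placeholder:
```python
from transformers import pipeline

# Hypothetical usage sketch: classify a food photo into one of the 101 Food-101 classes.
classifier = pipeline("image-classification", model="aspis/swin-finetuned-food101")
print(classifier("pizza.jpg"))  # replace with a path or URL to your own food image
```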
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5077 | 1.0 | 1183 | 0.3851 | 0.8893 |
| 0.3523 | 2.0 | 2366 | 0.3124 | 0.9088 |
| 0.1158 | 3.0 | 3549 | 0.2772 | 0.9210 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
muhammedshihebi/test_Model
|
muhammedshihebi
| 2022-06-28T10:32:10Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"xlm-roberta",
"question-answering",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-28T10:31:50Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: test_Model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# test_Model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
SerdarHelli/ThyroidTumorClassificationModel
|
SerdarHelli
| 2022-06-28T09:52:22Z | 92 | 2 |
transformers
|
[
"transformers",
"pytorch",
"convnext",
"image-classification",
"medicalimaging",
"thyroidtumor",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-26T10:52:45Z |
---
tags:
- medicalimaging
- thyroidtumor
metrics:
- accuracy
---
Thyroid nodules are among the most common endocrine carcinomas. Because ultrasonography reveals nodules well and can distinguish benign from malignant ones based on their pathological features, it has become the most widely used modality for detecting and diagnosing thyroid cancer, compared to CT and MRI.
The purpose of this study is to classify thyroid tumors on ultrasound images into two categories:
- Malign(1)
- Benign(0)
This study was carried out using HF Transformers (a minimal inference sketch follows the links below):
- [ On Google Colab](https://colab.research.google.com/drive/1ueSq8Y_NmFr7NGdtS8FStI3d2HR-43LD?usp=sharing)
- [On Github](https://github.com/SerdarHelli/The-Classification-of-Thyroid-Tumors-on-UltraSound-Images-using-Deep-Learning-Methods)
- [ Using Keras and GradCam With MultiClasses Medium Article](https://serdarhelli.medium.com/the-basic-classification-of-thyroid-tumors-on-ultrasound-images-using-deep-learning-methods-46f812d859ea)
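A minimal inference sketch (not part of the original card); the label mapping follows the two categories above, and the image path is a placeholder:
```python
from transformers import pipeline

# Hypothetical usage sketch: benign (0) vs. malign (1) prediction on a thyroid ultrasound image.
classifier = pipeline("image-classification", model="SerdarHelli/ThyroidTumorClassificationModel")
print(classifier("thyroid_ultrasound.png"))  # replace with a path or URL to an ultrasound image
```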
The Dataset:
[Colombia National University presented an open access database of thyroid ultrasound images.](http://cimalab.unal.edu.co/?lang=es&mod=program&id=5)
Ref : Pedraza, Lina & Vargas, Carlos & Narváez, Fabián & Durán, Oscar & Muñoz, Emma & Romero, Eduardo. (2015). An open access thyroid ultrasound-image Database. Progress in Biomedical Optics and Imaging — Proceedings of SPIE. 9287. 10.1117/12.2073532.
|
rtorrero/my-first-model
|
rtorrero
| 2022-06-28T08:44:52Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-28T07:41:49Z |
This is just me playing around with Hugging Face :-)
|
Nabby/PPO-LunarLander-v2
|
Nabby
| 2022-06-28T04:21:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-28T04:21:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 276.91 +/- 22.39
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
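As a rough sketch of what that code could look like (the checkpoint filename is an assumption; check the repository's file list):
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical filename; adjust to whatever .zip is stored in this repo.
checkpoint = load_from_hub(repo_id="Nabby/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires the Box2D extras of gym
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```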
|
JeremiahZ/reproduce-sup-roberta-base-avg
|
JeremiahZ
| 2022-06-28T04:10:25Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"generated_from_trainer",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-06-27T08:38:05Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
model-index:
- name: reproduce-sup-roberta-base-avg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reproduce-sup-roberta-base-avg
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
jcmc/dqn-SpaceInvadersNoFrameskip-v4
|
jcmc
| 2022-06-28T03:41:05Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-28T03:40:33Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 416.50 +/- 122.17
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jcmc -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jcmc
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Aalaa/opt-125m-finetuned-wikitext2
|
Aalaa
| 2022-06-28T03:30:55Z | 55 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"opt",
"text-generation",
"generated_from_trainer",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-28T02:41:03Z |
---
license: other
tags:
- generated_from_trainer
model-index:
- name: opt-125m-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-125m-finetuned-wikitext2
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4123 | 1.0 | 2370 | 3.3621 |
| 3.2096 | 2.0 | 4740 | 3.3452 |
| 3.0822 | 3.0 | 7110 | 3.3409 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
lingchensanwen/distilbert-base-uncased-finetuned-squad
|
lingchensanwen
| 2022-06-28T02:57:31Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-27T00:42:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0337
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 46 | 0.4284 |
| No log | 2.0 | 92 | 0.0573 |
| No log | 3.0 | 138 | 0.0337 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
sohomghosh/LIPI_FinSim4_ESG_task2
|
sohomghosh
| 2022-06-28T01:50:57Z | 0 | 0 | null |
[
"pytorch",
"license:mit",
"region:us"
] | null | 2022-06-26T13:02:54Z |
---
license: mit
---
How to use this model?
Download the pytorch_model.bin file and execute the following:
```python
import pandas as pd
import torch
import transformers
from torch.utils.data import Dataset, DataLoader
from transformers import RobertaModel, RobertaTokenizer, BertModel, BertTokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
MAX_LEN = 128
BATCH_SIZE = 20
text_col_name = 'sentence'
category_col = 'label_text'
# Input: one dataframe with a single column named 'sentence' (call reset_index() first if needed)
test_df = pd.DataFrame({"sentence":['We are striving to reduce the amount of waste we produce, and to reduce water as well as paper consumption.']})
def scoring_data_prep(dataset):
out = []
target = []
mask = []
for i in range(len(dataset)):
rec = dataset[i]
out.append(rec['ids'].reshape(-1,MAX_LEN))
mask.append(rec['mask'].reshape(-1,MAX_LEN))
out_stack = torch.cat(out, dim = 0)
mask_stack = torch.cat(mask, dim =0 )
out_stack = out_stack.to(device, dtype = torch.long)
mask_stack = mask_stack.to(device, dtype = torch.long)
return out_stack, mask_stack
class Triage(Dataset):
"""
This is a subclass of torch packages Dataset class. It processes input to create ids, masks and targets required for model training.
"""
def __init__(self, dataframe, tokenizer, max_len, text_col_name):
self.len = len(dataframe)
self.data = dataframe
self.tokenizer = tokenizer
self.max_len = max_len
self.text_col_name = text_col_name
def __getitem__(self, index):
title = str(self.data[self.text_col_name][index])
title = " ".join(title.split())
inputs = self.tokenizer.encode_plus(
title,
None,
add_special_tokens=True,
max_length=self.max_len,
pad_to_max_length=True,
return_token_type_ids=True,
truncation=True,
)
ids = inputs["input_ids"]
mask = inputs["attention_mask"]
return {
"ids": torch.tensor(ids, dtype=torch.long),
"mask": torch.tensor(mask, dtype=torch.long),
}
def __len__(self):
return self.len
class BERTClass(torch.nn.Module):
def __init__(self, num_class):
super(BERTClass, self).__init__()
self.num_class = num_class
self.l1 = RobertaModel.from_pretrained("roberta-base")
self.pre_classifier = torch.nn.Linear(768, 768)
self.dropout = torch.nn.Dropout(0.3)
self.classifier = torch.nn.Linear(768, self.num_class)
self.history = dict()
def forward(self, input_ids, attention_mask):
output_1 = self.l1(input_ids=input_ids, attention_mask=attention_mask)
hidden_state = output_1[0]
pooler = hidden_state[:, 0]
pooler = self.pre_classifier(pooler)
pooler = torch.nn.ReLU()(pooler)
pooler = self.dropout(pooler)
output = self.classifier(pooler)
return output
def do_predict(model, tokenizer, test_df):
test_set = Triage(test_df, tokenizer, MAX_LEN, text_col_name)
test_params = {'batch_size' : BATCH_SIZE, 'shuffle': False, 'num_workers':0}
test_loader = DataLoader(test_set, **test_params)
out_stack, mask_stack = scoring_data_prep(dataset = test_set)
n = 0
combined_output = []
model.eval()
with torch.no_grad():
while n < test_df.shape[0]:
output = model(out_stack[n:n+BATCH_SIZE,:],mask_stack[n:n+BATCH_SIZE,:])
n = n + BATCH_SIZE
combined_output.append(output)
combined_output = torch.cat(combined_output, dim = 0)
preds = torch.argsort(combined_output, axis = 1, descending = True)
preds = preds.to('cpu')
actual_predictions = [i[0] for i in preds.tolist()]
return actual_predictions
model_sustain = BERTClass(2)
model_sustain.to(device)
model_sustain.load_state_dict(torch.load('pytorch_model.bin', map_location=device)['model_state_dict'])
tokenizer_sus = RobertaTokenizer.from_pretrained('roberta-base')  # the backbone is RoBERTa, so use the RoBERTa tokenizer
actual_predictions_sus = do_predict(model_sustain, tokenizer_sus, test_df)
test_df['sustainability'] = ['sustainable' if i==0 else 'unsustainable' for i in actual_predictions_sus]
```
Our work can be cited as follows:
```bibtex
@inproceedings{ghosh-2022-finsim-esg,
title = "Ranking Environment, Social And Governance Related Concepts And Assessing Sustainability Aspect Of Financial Texts",
author={Ghosh, Sohom and Naskar, Sudip Kumar},
booktitle = "Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP@IJCAI-ECAI 2022)",
month = "July" ,
year = "2022",
address = "Vienna, Austria",
publisher = "-",
url = "https://mx.nthu.edu.tw/~chungchichen/FinNLP2022_IJCAI/14.pdf",
pages = "87--92",
}
```
|
KaliYuga/spritesheetdiffusion
|
KaliYuga
| 2022-06-28T00:52:09Z | 0 | 4 | null |
[
"license:cc-by-4.0",
"region:us"
] | null | 2022-06-23T01:10:50Z |
---
license: cc-by-4.0
---
This model is really only supposed to be for my [patreon patrons](https://www.patreon.com/kaliyuga_ai). I ask that, unless you *truly* can't afford to pay $5 to access this model, you not use it without being a patron. Regardless, you must give attribution if you use this model in any product/app/game, etc.
|
CodeIvy/distilgpt2-finetuned-wikitext2
|
CodeIvy
| 2022-06-27T23:40:38Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-18T22:24:37Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.9.1
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Abdelmageed95/distilgpt2-finetuned-wikitext2
|
Abdelmageed95
| 2022-06-27T22:58:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-27T22:27:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
hidude562/discordgpt2mini
|
hidude562
| 2022-06-27T21:19:20Z | 0 | 1 | null |
[
"generated_from_trainer",
"license:mit",
"region:us"
] | null | 2022-05-05T09:56:43Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-discordgpt2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-discordgpt2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 5.3032
- eval_runtime: 59.2004
- eval_samples_per_second: 274.542
- eval_steps_per_second: 34.324
- epoch: 0.26
- step: 25500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Dizzykong/charles-dickens
|
Dizzykong
| 2022-06-27T21:13:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-27T19:27:02Z |
---
tags:
- generated_from_trainer
model-index:
- name: charles-dickens
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# charles-dickens
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BeardedJohn/bert-finetuned-ner-ubb-conll
|
BeardedJohn
| 2022-06-27T16:24:46Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-24T12:42:22Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BeardedJohn/bert-finetuned-ner-ubb-conll
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BeardedJohn/bert-finetuned-ner-ubb-conll
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0351
- Validation Loss: 0.0581
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1317, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2302 | 0.0731 | 0 |
| 0.0556 | 0.0593 | 1 |
| 0.0351 | 0.0581 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
microsoft/deberta-xlarge-mnli
|
microsoft
| 2022-06-27T15:47:33Z | 504,931 | 16 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"deberta",
"text-classification",
"deberta-v1",
"deberta-mnli",
"en",
"arxiv:2006.03654",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
tags:
- deberta-v1
- deberta-mnli
tasks: mnli
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
license: mit
widget:
- text: "[CLS] I love you. [SEP] I like you. [SEP]"
---
## DeBERTa: Decoding-enhanced BERT with Disentangled Attention
[DeBERTa](https://arxiv.org/abs/2006.03654) improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on the majority of NLU tasks with 80GB of training data.
Please check the [official repository](https://github.com/microsoft/DeBERTa) for more details and updates.
This is the DeBERTa XLarge model (750M parameters) fine-tuned on the MNLI task.
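For quick inference, the checkpoint can also be loaded with the standard `transformers` sequence-classification classes. The snippet below is a minimal sketch: the premise/hypothesis pair mirrors the widget example above, and the label names are read from the checkpoint's `id2label` config.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-xlarge-mnli")
model = AutoModelForSequenceClassification.from_pretrained("microsoft/deberta-xlarge-mnli")

# Encode a premise/hypothesis pair and pick the highest-scoring MNLI label.
inputs = tokenizer("I love you.", "I like you.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```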
### Fine-tuning on NLU tasks
We present the dev results on SQuAD 1.1/2.0 and several GLUE benchmark tasks.
| Model | SQuAD 1.1 | SQuAD 2.0 | MNLI-m/mm | SST-2 | QNLI | CoLA | RTE | MRPC | QQP |STS-B |
|---------------------------|-----------|-----------|-------------|-------|------|------|--------|-------|-------|------|
| | F1/EM | F1/EM | Acc | Acc | Acc | MCC | Acc |Acc/F1 |Acc/F1 |P/S |
| BERT-Large | 90.9/84.1 | 81.8/79.0 | 86.6/- | 93.2 | 92.3 | 60.6 | 70.4 | 88.0/- | 91.3/- |90.0/- |
| RoBERTa-Large | 94.6/88.9 | 89.4/86.5 | 90.2/- | 96.4 | 93.9 | 68.0 | 86.6 | 90.9/- | 92.2/- |92.4/- |
| XLNet-Large | 95.1/89.7 | 90.6/87.9 | 90.8/- | 97.0 | 94.9 | 69.0 | 85.9 | 90.8/- | 92.3/- |92.5/- |
| [DeBERTa-Large](https://huggingface.co/microsoft/deberta-large)<sup>1</sup> | 95.5/90.1 | 90.7/88.0 | 91.3/91.1| 96.5|95.3| 69.5| 91.0| 92.6/94.6| 92.3/- |92.8/92.5 |
| [DeBERTa-XLarge](https://huggingface.co/microsoft/deberta-xlarge)<sup>1</sup> | -/- | -/- | 91.5/91.2| 97.0 | - | - | 93.1 | 92.1/94.3 | - |92.9/92.7|
| [DeBERTa-V2-XLarge](https://huggingface.co/microsoft/deberta-v2-xlarge)<sup>1</sup>|95.8/90.8| 91.4/88.9|91.7/91.6| **97.5**| 95.8|71.1|**93.9**|92.0/94.2|92.3/89.8|92.9/92.9|
|**[DeBERTa-V2-XXLarge](https://huggingface.co/microsoft/deberta-v2-xxlarge)<sup>1,2</sup>**|**96.1/91.4**|**92.2/89.7**|**91.7/91.9**|97.2|**96.0**|**72.0**| 93.5| **93.1/94.9**|**92.7/90.3** |**93.2/93.1** |
--------
#### Notes.
- <sup>1</sup> Following RoBERTa, for RTE, MRPC, STS-B, we fine-tune the tasks starting from [DeBERTa-Large-MNLI](https://huggingface.co/microsoft/deberta-large-mnli), [DeBERTa-XLarge-MNLI](https://huggingface.co/microsoft/deberta-xlarge-mnli), [DeBERTa-V2-XLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xlarge-mnli), [DeBERTa-V2-XXLarge-MNLI](https://huggingface.co/microsoft/deberta-v2-xxlarge-mnli). The results of SST-2/QQP/QNLI/SQuADv2 are also slightly improved when starting from MNLI fine-tuned models; however, we only report the numbers fine-tuned from pretrained base models for those 4 tasks.
- <sup>2</sup> To try the **XXLarge** model with **[HF transformers](https://huggingface.co/transformers/main_classes/trainer.html)**, you need to specify **--sharded_ddp**
```bash
cd transformers/examples/text-classification/
export TASK_NAME=mrpc
python -m torch.distributed.launch --nproc_per_node=8 run_glue.py --model_name_or_path microsoft/deberta-v2-xxlarge \
  --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 4 \
  --learning_rate 3e-6 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --sharded_ddp --fp16
```
### Citation
If you find DeBERTa useful for your work, please cite the following paper:
``` latex
@inproceedings{
he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}
```
|
jcastanyo/dqn-SpaceInvadersNoFrameskip-v4
|
jcastanyo
| 2022-06-27T15:43:15Z | 4 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-27T15:42:39Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 644.00 +/- 281.09
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jcastanyo -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jcastanyo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
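The checkpoint can also be loaded programmatically with `huggingface_sb3` and Stable-Baselines3. The sketch below is only an assumption-laden example: the zip filename follows the usual RL Zoo naming convention and may differ in this repository.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is assumed to follow the RL Zoo convention; adjust it if the repo differs.
checkpoint = load_from_hub(
    repo_id="jcastanyo/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
# custom_objects helps when loading models saved with a different SB3 version.
model = DQN.load(checkpoint, custom_objects={"learning_rate": 0.0, "lr_schedule": lambda _: 0.0})
```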
|
BukaByaka/opus-mt-ru-en-finetuned-ru-to-en
|
BukaByaka
| 2022-06-27T14:05:53Z | 43 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-26T14:26:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wmt16
metrics:
- bleu
model-index:
- name: opus-mt-ru-en-finetuned-ru-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: wmt16
type: wmt16
args: ru-en
metrics:
- name: Bleu
type: bleu
value: 30.4049
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ru-en-finetuned-ru-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ru-en](https://huggingface.co/Helsinki-NLP/opus-mt-ru-en) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4092
- Bleu: 30.4049
- Gen Len: 26.3911
## Model description
More information needed
## Intended uses & limitations
More information needed
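A minimal translation sketch, assuming the checkpoint loads with the standard `transformers` Auto classes (the Russian example sentence is just a placeholder):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "BukaByaka/opus-mt-ru-en-finetuned-ru-to-en"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Translate one Russian sentence into English.
inputs = tokenizer("Погода сегодня прекрасная.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```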
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 2.2606 | 1.0 | 94761 | 1.4092 | 30.4049 | 26.3911 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0.post202
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kktoto/tiny_focal_alpah
|
kktoto
| 2022-06-27T13:47:19Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-27T01:31:29Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_focal_alpah
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_focal_alpah
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0492
- Precision: 0.6951
- Recall: 0.6796
- F1: 0.6873
- Accuracy: 0.9512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0588 | 1.0 | 5561 | 0.0548 | 0.6801 | 0.6235 | 0.6506 | 0.9453 |
| 0.054 | 2.0 | 11122 | 0.0521 | 0.6850 | 0.6478 | 0.6659 | 0.9476 |
| 0.0525 | 3.0 | 16683 | 0.0509 | 0.6834 | 0.6676 | 0.6754 | 0.9486 |
| 0.0492 | 4.0 | 22244 | 0.0503 | 0.6829 | 0.6754 | 0.6791 | 0.9491 |
| 0.0482 | 5.0 | 27805 | 0.0500 | 0.6917 | 0.6727 | 0.6820 | 0.9501 |
| 0.0471 | 6.0 | 33366 | 0.0491 | 0.7085 | 0.6546 | 0.6805 | 0.9510 |
| 0.0459 | 7.0 | 38927 | 0.0486 | 0.6964 | 0.6746 | 0.6853 | 0.9510 |
| 0.0448 | 8.0 | 44488 | 0.0495 | 0.6922 | 0.6813 | 0.6867 | 0.9509 |
| 0.044 | 9.0 | 50049 | 0.0491 | 0.6961 | 0.6755 | 0.6857 | 0.9511 |
| 0.0433 | 10.0 | 55610 | 0.0492 | 0.6951 | 0.6796 | 0.6873 | 0.9512 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4
|
gary109
| 2022-06-27T13:34:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-26T14:19:49Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v3](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v3) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0298
- Wer: 0.6642
## Model description
More information needed
## Intended uses & limitations
More information needed
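A minimal transcription sketch, assuming the repository includes a usable `Wav2Vec2Processor` and that the input audio is mono; the file name below is a placeholder and the audio is resampled to the 16 kHz rate XLSR models expect.
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a waveform, resample to 16 kHz, and run greedy CTC decoding.
waveform, sample_rate = torchaudio.load("example.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```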
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9218 | 1.0 | 188 | 1.0718 | 0.6958 |
| 0.9194 | 2.0 | 376 | 1.0354 | 0.6937 |
| 0.9077 | 3.0 | 564 | 1.0365 | 0.6730 |
| 0.8956 | 4.0 | 752 | 1.0497 | 0.6727 |
| 0.877 | 5.0 | 940 | 1.0299 | 0.6694 |
| 0.8736 | 6.0 | 1128 | 1.0298 | 0.6642 |
| 0.8769 | 7.0 | 1316 | 1.0348 | 0.6584 |
| 0.8571 | 8.0 | 1504 | 1.0689 | 0.6602 |
| 0.8573 | 9.0 | 1692 | 1.0559 | 0.6549 |
| 0.8458 | 10.0 | 1880 | 1.0706 | 0.6588 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
gopalkalpande/t5-small-finetuned-bbc-news-summarization
|
gopalkalpande
| 2022-06-27T13:15:58Z | 5 | 1 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-27T13:12:49Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: gopalkalpande/t5-small-finetuned-bbc-news-summarization
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gopalkalpande/t5-small-finetuned-bbc-news-summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7637
- Validation Loss: 0.3528
- Train Rouge1: 19.4783
- Train Rouge2: 13.2994
- Train Rougel: 17.4791
- Train Rougelsum: 17.6204
- Train Gen Len: 19.0
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
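A minimal summarization sketch, assuming the usual T5 `summarize:` prefix applies to this fine-tune; the article string is a placeholder.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "gopalkalpande/t5-small-finetuned-bbc-news-summarization"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

article = "..."  # replace with a BBC-style news article
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, num_beams=4, max_new_tokens=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```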
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 4e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.001}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 0.7637 | 0.3528 | 19.4783 | 13.2994 | 17.4791 | 17.6204 | 19.0 | 0 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
douwekiela/resnet-18-finetuned-dogfood
|
douwekiela
| 2022-06-27T12:38:50Z | 77 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"dataset:lewtun/dog_food",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-26T09:42:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
- lewtun/dog_food
metrics:
- accuracy
model-index:
- name: resnet-18-finetuned-dogfood
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: lewtun/dog_food
type: lewtun/dog_food
args: lewtun--dog_food
metrics:
- name: Accuracy
type: accuracy
value: 0.896
- task:
type: image-classification
name: Image Classification
dataset:
name: lewtun/dog_food
type: lewtun/dog_food
config: lewtun--dog_food
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.8466666666666667
verified: true
- name: Precision Macro
type: precision
value: 0.8850127293141284
verified: true
- name: Precision Micro
type: precision
value: 0.8466666666666667
verified: true
- name: Precision Weighted
type: precision
value: 0.8939157698241645
verified: true
- name: Recall Macro
type: recall
value: 0.8555113273379528
verified: true
- name: Recall Micro
type: recall
value: 0.8466666666666667
verified: true
- name: Recall Weighted
type: recall
value: 0.8466666666666667
verified: true
- name: F1 Macro
type: f1
value: 0.8431399312051647
verified: true
- name: F1 Micro
type: f1
value: 0.8466666666666667
verified: true
- name: F1 Weighted
type: f1
value: 0.8430272582865614
verified: true
- name: loss
type: loss
value: 0.3633290231227875
verified: true
- name: matthews_correlation
type: matthews_correlation
value: 0.7973101366252381
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-18-finetuned-dogfood
This model is a fine-tuned version of [microsoft/resnet-18](https://huggingface.co/microsoft/resnet-18) on the lewtun/dog_food dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2991
- Accuracy: 0.896
## Model description
More information needed
## Intended uses & limitations
More information needed
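A minimal classification sketch, assuming the checkpoint loads with the Auto image-classification classes; the image path is a placeholder.
```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

model_id = "douwekiela/resnet-18-finetuned-dogfood"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

image = Image.open("example.jpg")
inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```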
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.846 | 1.0 | 16 | 0.2662 | 0.9156 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
zezafa/q-Taxi-v3
|
zezafa
| 2022-06-27T11:52:15Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-27T11:52:09Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.38 +/- 2.77
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="zezafa/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Davlan/bert-base-multilingual-cased-finetuned-yoruba
|
Davlan
| 2022-06-27T11:50:30Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: yo
datasets:
---
# bert-base-multilingual-cased-finetuned-yoruba
## Model description
**bert-base-multilingual-cased-finetuned-yoruba** is a **Yoruba BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Yorùbá language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Yorùbá corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-yoruba')
>>> unmasker("Arẹmọ Phillip to jẹ ọkọ [MASK] Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun")
[{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Mary Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.1738305538892746,
'token': 12176,
'token_str': 'Mary'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Queen Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.16382873058319092,
'token': 13704,
'token_str': 'Queen'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ ti Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.13272495567798615,
'token': 14382,
'token_str': 'ti'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ King Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.12823280692100525,
'token': 11515,
'token_str': 'King'},
{'sequence': '[CLS] Arẹmọ Phillip to jẹ ọkọ Lady Elizabeth to ti wa lori aisan ti dagbere faye lẹni ọdun mọkandilọgọrun [SEP]', 'score': 0.07841219753026962,
'token': 14005,
'token_str': 'Lady'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on Bible, JW300, [Menyo-20k](https://huggingface.co/datasets/menyo20k_mt), [Yoruba Embedding corpus](https://huggingface.co/datasets/yoruba_text_c3) and [CC-Aligned](https://opus.nlpl.eu/), Wikipedia, news corpora (BBC Yoruba, VON Yoruba, Asejere, Alaroye), and other small datasets curated from friends.
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | yo_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 78.97 | 82.58
[BBC Yorùbá Textclass](https://huggingface.co/datasets/yoruba_bbc_topics) | 75.13 | 79.11
### BibTeX entry and citation info
By David Adelani
```
```
|
Davlan/bert-base-multilingual-cased-masakhaner
|
Davlan
| 2022-06-27T11:50:04Z | 14 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"bert",
"token-classification",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language:
- ha
- ig
- rw
- lg
- luo
- pcm
- sw
- wo
- yo
- multilingual
datasets:
- masakhaner
---
# bert-base-multilingual-cased-masakhaner
## Model description
**bert-base-multilingual-cased-masakhaner** is the first **Named Entity Recognition** model for 9 African languages (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned mBERT base model. It achieves **state-of-the-art performance** on the NER task. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/bert-base-multilingual-cased-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/bert-base-multilingual-cased-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 9 African NER datasets (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
hau |88.66
ibo |85.72
kin |71.94
lug |81.73
luo |77.39
pcm |88.96
swa |88.23
wol |66.27
yor |80.09
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
|
zezafa/q-FrozenLake-v1-4x4-noSlippery
|
zezafa
| 2022-06-27T11:47:10Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-27T11:47:03Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="zezafa/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Davlan/distilbert-base-multilingual-cased-masakhaner
|
Davlan
| 2022-06-27T10:57:26Z | 27 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"distilbert",
"token-classification",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
language:
- ha
- ig
- rw
- lg
- luo
- pcm
- sw
- wo
- yo
- multilingual
datasets:
- masakhaner
---
# distilbert-base-multilingual-cased-masakhaner
## Model description
**distilbert-base-multilingual-cased-masakhaner** is the first **Named Entity Recognition** model for 9 African languages (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) based on a fine-tuned DistilBERT base model. It has been trained to recognize four types of entities: dates & times (DATE), location (LOC), organizations (ORG), and person (PER).
Specifically, this model is a *distilbert-base-multilingual-cased* model that was fine-tuned on an aggregation of African language datasets obtained from Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("Davlan/distilbert-base-multilingual-cased-masakhaner")
model = AutoModelForTokenClassification.from_pretrained("Davlan/distilbert-base-multilingual-cased-masakhaner")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Emir of Kano turban Zhang wey don spend 18 years for Nigeria"
ner_results = nlp(example)
print(ner_results)
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on 9 African NER datasets (Hausa, Igbo, Kinyarwanda, Luganda, Luo, Nigerian Pidgin, Swahili, Wolof, and Yorùbá) from the Masakhane [MasakhaNER](https://github.com/masakhane-io/masakhane-ner) dataset.
The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
-|-
O|Outside of a named entity
B-DATE |Beginning of a DATE entity right after another DATE entity
I-DATE |DATE entity
B-PER |Beginning of a person’s name right after another person’s name
I-PER |Person’s name
B-ORG |Beginning of an organisation right after another organisation
I-ORG |Organisation
B-LOC |Beginning of a location right after another location
I-LOC |Location
## Training procedure
This model was trained on a single NVIDIA V100 GPU with recommended hyperparameters from the [original MasakhaNER paper](https://arxiv.org/abs/2103.11811) which trained & evaluated the model on MasakhaNER corpus.
## Eval results on Test set (F-score)
language|F1-score
-|-
hau |88.88
ibo |84.87
kin |74.19
lug |78.43
luo |73.32
pcm |87.98
swa |86.20
wol |64.67
yor |78.10
### BibTeX entry and citation info
```
@article{adelani21tacl,
title = {Masakha{NER}: Named Entity Recognition for African Languages},
author = {David Ifeoluwa Adelani and Jade Abbott and Graham Neubig and Daniel D'souza and Julia Kreutzer and Constantine Lignos and Chester Palen-Michel and Happy Buzaaba and Shruti Rijhwani and Sebastian Ruder and Stephen Mayhew and Israel Abebe Azime and Shamsuddeen Muhammad and Chris Chinenye Emezue and Joyce Nakatumba-Nabende and Perez Ogayo and Anuoluwapo Aremu and Catherine Gitau and Derguene Mbaye and Jesujoba Alabi and Seid Muhie Yimam and Tajuddeen Gwadabe and Ignatius Ezeani and Rubungo Andre Niyongabo and Jonathan Mukiibi and Verrah Otiende and Iroro Orife and Davis David and Samba Ngom and Tosin Adewumi and Paul Rayson and Mofetoluwa Adeyemi and Gerald Muriuki and Emmanuel Anebi and Chiamaka Chukwuneke and Nkiruka Odu and Eric Peter Wairagala and Samuel Oyerinde and Clemencia Siro and Tobius Saul Bateesa and Temilola Oloyede and Yvonne Wambui and Victor Akinode and Deborah Nabagereka and Maurice Katusiime and Ayodele Awokoya and Mouhamadane MBOUP and Dibora Gebreyohannes and Henok Tilaye and Kelechi Nwaike and Degaga Wolde and Abdoulaye Faye and Blessing Sibanda and Orevaoghene Ahia and Bonaventure F. P. Dossou and Kelechi Ogueji and Thierno Ibrahima DIOP and Abdoulaye Diallo and Adewale Akinfaderin and Tendai Marengereke and Salomey Osei},
journal = {Transactions of the Association for Computational Linguistics (TACL)},
month = {},
url = {https://arxiv.org/abs/2103.11811},
year = {2021}
}
```
|
Davlan/bert-base-multilingual-cased-finetuned-hausa
|
Davlan
| 2022-06-27T10:56:44Z | 108 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:04Z |
---
language: ha
datasets:
---
# bert-base-multilingual-cased-finetuned-hausa
## Model description
**bert-base-multilingual-cased-finetuned-hausa** is a **Hausa BERT** model obtained by fine-tuning **bert-base-multilingual-cased** model on Hausa language texts. It provides **better performance** than the multilingual BERT on text classification and named entity recognition datasets.
Specifically, this model is a *bert-base-multilingual-cased* model that was fine-tuned on Hausa corpus.
## Intended uses & limitations
#### How to use
You can use this model with Transformers *pipeline* for masked token prediction.
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='Davlan/bert-base-multilingual-cased-finetuned-hausa')
>>> unmasker("Shugaban [MASK] Muhammadu Buhari ya amince da shawarar da ma’aikatar sufuri karkashin jagoranci")
[{'sequence':
'[CLS] Shugaban Nigeria Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]',
'score': 0.9762618541717529,
'token': 22045,
'token_str': 'Nigeria'},
{'sequence': '[CLS] Shugaban Ka Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.007239189930260181,
'token': 25444,
'token_str': 'Ka'},
{'sequence': '[CLS] Shugaban, Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.001990817254409194,
'token': 117,
'token_str': ','},
{'sequence': '[CLS] Shugaban Ghana Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.001566368737258017,
'token': 28682,
'token_str': 'Ghana'},
{'sequence': '[CLS] Shugabanmu Muhammadu Buhari ya amince da shawarar da ma [UNK] aikatar sufuri karkashin jagoranci [SEP]', 'score': 0.0009375187801197171,
'token': 11717,
'token_str': '##mu'}]
```
#### Limitations and bias
This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains.
## Training data
This model was fine-tuned on [Hausa CC-100](http://data.statmt.org/cc-100/)
## Training procedure
This model was trained on a single NVIDIA V100 GPU
## Eval results on Test set (F-score, average over 5 runs)
Dataset| mBERT F1 | ha_bert F1
-|-|-
[MasakhaNER](https://github.com/masakhane-io/masakhane-ner) | 86.65 | 91.31
[VOA Hausa Textclass](https://huggingface.co/datasets/hausa_voa_topics) | 84.76 | 90.98
### BibTeX entry and citation info
By David Adelani
```
```
|
Rahulrr/language_model_en_de
|
Rahulrr
| 2022-06-27T10:42:46Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-27T10:09:17Z |
---
language:
- en
- de
tags:
- translation
license: apache-2.0
---
### en-de
* source group: English
* target group: German
* OPUS readme: [eng-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-deu/README.md)
* model: transformer-big
* source language(s): eng
* target language(s): deu
* raw source language(s): eng
* raw target language(s): deu
* model: transformer-big
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opusTCv20210807+bt-2021-12-08.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.zip)
* test set translations: [opusTCv20210807+bt-2021-12-08.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.test.txt)
* test set scores: [opusTCv20210807+bt-2021-12-08.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| newssyscomb2009.eng-deu | 24.3 | 0.5462 | 502 | 11271 | 0.993 |
| news-test2008.eng-deu | 24.7 | 0.5412 | 2051 | 47427 | 1.000 |
| newstest2009.eng-deu | 23.6 | 0.5385 | 2525 | 62816 | 0.999 |
| newstest2010.eng-deu | 26.9 | 0.5589 | 2489 | 61511 | 0.966 |
| newstest2011.eng-deu | 24.1 | 0.5364 | 3003 | 72981 | 0.990 |
| newstest2012.eng-deu | 24.6 | 0.5375 | 3003 | 72886 | 0.972 |
| newstest2013.eng-deu | 28.3 | 0.5636 | 3000 | 63737 | 0.988 |
| newstest2014-deen.eng-deu | 30.9 | 0.6084 | 3003 | 62964 | 1.000 |
| newstest2015-ende.eng-deu | 33.2 | 0.6106 | 2169 | 44260 | 1.000 |
| newstest2016-ende.eng-deu | 39.8 | 0.6595 | 2999 | 62670 | 0.993 |
| newstest2017-ende.eng-deu | 32.0 | 0.6047 | 3004 | 61291 | 1.000 |
| newstest2018-ende.eng-deu | 48.8 | 0.7146 | 2998 | 64276 | 1.000 |
| newstest2019-ende.eng-deu | 45.0 | 0.6821 | 1997 | 48969 | 0.995 |
| Tatoeba-test-v2021-08-07.eng-deu | 43.7 | 0.6442 | 10000 | 85728 | 1.000 |
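A minimal usage sketch, assuming the repository contains the converted Marian weights referenced above; the example sentence is a placeholder.
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Rahulrr/language_model_en_de"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

# Translate one English sentence into German.
batch = tokenizer(["The weather is nice today."], return_tensors="pt")
generated = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```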
### System Info:
- hf_name: en-de
- source_languages: eng
- target_languages: deu
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-deu/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'de']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('German', {'deu'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-deu
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-deu/opusTCv20210807+bt-2021-12-08.test.txt
- src_alpha3: eng
- tgt_alpha3: deu
- chrF2_score: 0.6442
- bleu: 43.7
- src_name: English
- tgt_name: German
- train_date: 2021-12-08 00:00:00
- src_alpha2: en
- tgt_alpha2: de
- prefer_old: False
- short_pair: en-de
- helsinki_git_sha: c4e978d8de47875b482653b423dcfe968979d7d5
- transformers_git_sha: 56b83cf049823ed074a655eceb28f31e2077c6eb
- port_machine: LAPIN4GLQ2G3
- port_time: 2022-06-27-16:10
|
JeremiahZ/reproduce-unsup-roberta-base-avg
|
JeremiahZ
| 2022-06-27T10:19:27Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"generated_from_trainer",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-06-27T08:09:54Z |
---
language:
- en
license: mit
tags:
- generated_from_trainer
model-index:
- name: reproduce-unsup-roberta-base-avg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reproduce-unsup-roberta-base-avg
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
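A minimal sketch for extracting sentence embeddings, assuming (as the model name suggests) that averaging token states is the intended pooling strategy; the example sentences are placeholders.
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "JeremiahZ/reproduce-unsup-roberta-base-avg"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["A man is playing a guitar.", "Someone plays an instrument."]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, dim)

# Mean-pool over non-padding tokens, then compare with cosine similarity.
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0).item())
```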
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 512
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
danielmantisnlp/autotrain-oms-ner-bi-1044135953
|
danielmantisnlp
| 2022-06-27T09:39:42Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain",
"en",
"dataset:danielmantisnlp/autotrain-data-oms-ner-bi",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-27T09:38:38Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- danielmantisnlp/autotrain-data-oms-ner-bi
co2_eq_emissions: 1.425282392185522
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1044135953
- CO2 Emissions (in grams): 1.425282392185522
## Validation Metrics
- Loss: 0.4587894678115845
- Accuracy: 0.8957797220792589
- Precision: 0.553921568627451
- Recall: 0.6793587174348698
- F1: 0.6102610261026103
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/danielmantisnlp/autotrain-oms-ner-bi-1044135953
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("danielmantisnlp/autotrain-oms-ner-bi-1044135953", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("danielmantisnlp/autotrain-oms-ner-bi-1044135953", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
YuanWellspring/wav2vec2-nsc-final_1-google-colab
|
YuanWellspring
| 2022-06-27T09:21:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-27T07:57:07Z |
---
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-nsc-final_1-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-nsc-final_1-google-colab
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.10.3
|
dasolj/wav2vec2-base-timit-demo-google-colab
|
dasolj
| 2022-06-27T08:50:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-27T05:22:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5501
- Wer: 0.3424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5448 | 1.0 | 500 | 2.5044 | 1.0 |
| 1.0167 | 2.01 | 1000 | 0.5435 | 0.5278 |
| 0.4453 | 3.01 | 1500 | 0.4450 | 0.4534 |
| 0.3 | 4.02 | 2000 | 0.4401 | 0.4245 |
| 0.2304 | 5.02 | 2500 | 0.4146 | 0.4022 |
| 0.1889 | 6.02 | 3000 | 0.4241 | 0.3927 |
| 0.1573 | 7.03 | 3500 | 0.4545 | 0.3878 |
| 0.1363 | 8.03 | 4000 | 0.4936 | 0.3940 |
| 0.1213 | 9.04 | 4500 | 0.4964 | 0.3806 |
| 0.108 | 10.04 | 5000 | 0.4931 | 0.3826 |
| 0.0982 | 11.04 | 5500 | 0.5373 | 0.3778 |
| 0.0883 | 12.05 | 6000 | 0.4978 | 0.3733 |
| 0.0835 | 13.05 | 6500 | 0.5189 | 0.3728 |
| 0.0748 | 14.06 | 7000 | 0.4608 | 0.3692 |
| 0.068 | 15.06 | 7500 | 0.4827 | 0.3608 |
| 0.0596 | 16.06 | 8000 | 0.5022 | 0.3661 |
| 0.056 | 17.07 | 8500 | 0.5482 | 0.3646 |
| 0.0565 | 18.07 | 9000 | 0.5158 | 0.3573 |
| 0.0487 | 19.08 | 9500 | 0.4910 | 0.3513 |
| 0.0444 | 20.08 | 10000 | 0.5771 | 0.3580 |
| 0.045 | 21.08 | 10500 | 0.5160 | 0.3539 |
| 0.0363 | 22.09 | 11000 | 0.5367 | 0.3503 |
| 0.0313 | 23.09 | 11500 | 0.5773 | 0.3500 |
| 0.0329 | 24.1 | 12000 | 0.5683 | 0.3508 |
| 0.0297 | 25.1 | 12500 | 0.5355 | 0.3464 |
| 0.0272 | 26.1 | 13000 | 0.5317 | 0.3450 |
| 0.0256 | 27.11 | 13500 | 0.5602 | 0.3443 |
| 0.0242 | 28.11 | 14000 | 0.5586 | 0.3419 |
| 0.0239 | 29.12 | 14500 | 0.5501 | 0.3424 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
hustvl/yolos-small-dwr
|
hustvl
| 2022-06-27T08:38:00Z | 11 | 4 |
transformers
|
[
"transformers",
"pytorch",
"yolos",
"object-detection",
"vision",
"dataset:coco",
"arxiv:2106.00666",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
object-detection
| 2022-04-26T10:15:57Z |
---
license: apache-2.0
tags:
- object-detection
- vision
datasets:
- coco
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/savanna.jpg
example_title: Savanna
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/football-match.jpg
example_title: Football Match
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/airport.jpg
example_title: Airport
---
# YOLOS (small-sized, fast model scaling) model
YOLOS model fine-tuned on COCO 2017 object detection (118k annotated images). It was introduced in the paper [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://arxiv.org/abs/2106.00666) by Fang et al. and first released in [this repository](https://github.com/hustvl/YOLOS).
Disclaimer: The team releasing YOLOS did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
YOLOS is a Vision Transformer (ViT) trained using the DETR loss. Despite its simplicity, a base-sized YOLOS model is able to achieve 42 AP on COCO validation 2017 (similar to DETR and more complex frameworks such as Faster R-CNN).
The model is trained using a "bipartite matching loss": one compares the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a "no object" as class and "no bounding box" as bounding box). The Hungarian matching algorithm is used to create an optimal one-to-one mapping between each of the N queries and each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
## Intended uses & limitations
You can use the raw model for object detection. See the [model hub](https://huggingface.co/models?search=hustvl/yolos) to look for all available YOLOS models.
### How to use
Here is how to use this model:
```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-small-dwr')
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-small-dwr')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
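Continuing from the snippet above, a rough way to turn the raw outputs into readable detections is to softmax the class logits, drop the trailing "no object" class, and keep only high-confidence queries; note that `pred_boxes` holds normalized (center_x, center_y, width, height) values, and the 0.9 threshold is just an illustrative choice.
```python
import torch

# Keep queries whose best real-class probability exceeds a threshold.
probas = outputs.logits.softmax(-1)[0, :, :-1]   # drop the "no object" class
keep = probas.max(-1).values > 0.9
for p, box in zip(probas[keep], outputs.pred_boxes[0, keep]):
    label = model.config.id2label[p.argmax().item()]
    print(label, round(p.max().item(), 3), box.tolist())
```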
Currently, both the feature extractor and model support PyTorch.
## Training data
The YOLOS model was pre-trained on [ImageNet-1k](https://huggingface.co/datasets/imagenet2012) and fine-tuned on [COCO 2017 object detection](https://cocodataset.org/#download), a dataset consisting of 118k/5k annotated images for training/validation respectively.
### Training
The model was pre-trained for 300 epochs on ImageNet-1k and fine-tuned for 150 epochs on COCO.
## Evaluation results
This model achieves an AP (average precision) of **37.6** on COCO 2017 validation. For more details regarding evaluation results, we refer to table 1 of the original paper.
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2106-00666,
author = {Yuxin Fang and
Bencheng Liao and
Xinggang Wang and
Jiemin Fang and
Jiyang Qi and
Rui Wu and
Jianwei Niu and
Wenyu Liu},
title = {You Only Look at One Sequence: Rethinking Transformer in Vision through
Object Detection},
journal = {CoRR},
volume = {abs/2106.00666},
year = {2021},
url = {https://arxiv.org/abs/2106.00666},
eprinttype = {arXiv},
eprint = {2106.00666},
timestamp = {Fri, 29 Apr 2022 19:49:16 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-00666.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
Corianas/ppo-SpaceInvadersNoFrameskip-v4
|
Corianas
| 2022-06-27T08:23:21Z | 6 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-27T08:22:35Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 1025.50 +/- 612.32
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **PPO** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **PPO** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo ppo --env SpaceInvadersNoFrameskip-v4 -orga Corianas -f logs/
python enjoy.py --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo ppo --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Corianas
```
## Hyperparameters
```python
OrderedDict([('batch_size', 256),
('clip_range', 'lin_0.1'),
('ent_coef', 0.01),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('frame_stack', 4),
('learning_rate', 'lin_2.5e-4'),
('n_envs', 8),
('n_epochs', 4),
('n_steps', 128),
('n_timesteps', 10000000.0),
('policy', 'CnnPolicy'),
('vf_coef', 0.5),
('normalize', False)])
```
|
TheRensselaerIDEA/gpt2-large-covid-tweet-response
|
TheRensselaerIDEA
| 2022-06-27T07:26:54Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"arxiv:2204.04353",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-26T19:56:35Z |
---
license: mit
---
Base model: [gpt2-large](https://huggingface.co/gpt2-large)
Fine-tuned to generate responses on a dataset of [COVID-19 public health tweets](https://github.com/TheRensselaerIDEA/generative-response-modeling). For more information about the dataset, task and training, see [our paper](https://arxiv.org/abs/2204.04353). This checkpoint corresponds to the lowest validation perplexity (3.36 at 2 epochs) seen during training. See Training metrics for Tensorboard logs.
Also see: our [Vaccine public health tweet response model](https://huggingface.co/TheRensselaerIDEA/gpt2-large-vaccine-tweet-response).
**Data input format:** <span style="color:red"><|message|></span>public health message<span style="color:red"><|author|></span>public health Twitter handle<span style="color:red"><|response|></span>
Example:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.trainer_utils import set_seed
import torch
tokenizer = AutoTokenizer.from_pretrained("TheRensselaerIDEA/gpt2-large-covid-tweet-response")
model = AutoModelForCausalLM.from_pretrained("TheRensselaerIDEA/gpt2-large-covid-tweet-response")
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
set_seed(33)
message = "Is your child worried about #COVID19? Learn the facts so you can answer your children’s questions."
author = "CDCgov"
num_responses = 2
author_token, message_token, response_token = tokenizer.additional_special_tokens
input_str = f"{message_token}{message}{author_token}{author}{response_token}"
inputs = tokenizer(input_str, return_tensors="pt").to(device)
responses_ids = model.generate(**inputs,
max_new_tokens=100,
pad_token_id=tokenizer.pad_token_id,
do_sample=True,
top_p=0.95,
temperature=1.5,
num_beams=3,
early_stopping=True,
num_return_sequences=num_responses)
responses = [tokenizer.decode(r[inputs.input_ids.shape[-1]:], skip_special_tokens=True) for r in responses_ids]
for i, resp in enumerate(responses):
print(f"Response {i}: {resp}\n")
```
Output:
```
Response 0: @CDCgov I'm not worried. I don't know who needs to hear this, but I have a feeling I know who will be listening.
It is not the virus. It is the media. I know you and CDC have been lying for months now, but the media will keep pushing this lie.
Response 1: #WashYourHands to help #StopTheSpread of #COVID19 and other diseases. Learn more about hand washing: #HandWashing
```
|
osanseviero/distilbert-base-uncased-finetuned-squad-d5716d28
|
osanseviero
| 2022-06-27T07:23:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-16T13:49:18Z |
---
language:
- en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
- question-answering
license: apache-2.0
datasets:
- squad
metrics:
- squad
model-index:
- name: osanseviero/distilbert-base-uncased-finetuned-squad-d5716d28
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: adversarial_qa
type: adversarial_qa
config: adversarialQA
split: train
metrics:
- name: Loss
type: loss
value: 4.052208423614502
verified: true
---
# DistilBERT with a second step of distillation
## Model description
This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation.
In this version, the following pre-trained models were used:
* Student: `distilbert-base-uncased`
* Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1`
## Training data
This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows:
```python
from datasets import load_dataset
squad = load_dataset('squad')
```
## Training procedure
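The exact fine-tuning recipe is not reproduced here. As a rough, unofficial sketch, task-specific distillation typically blends the usual cross-entropy on the gold answer positions with a temperature-scaled KL term towards the teacher's distributions; the temperature and mixing weight below are assumptions, not the values used for this checkpoint:
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hard-label cross-entropy blended with a KL term towards the teacher."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T ** 2)
    return alpha * ce + (1.0 - alpha) * kd

# For extractive QA the loss is applied to both the start- and end-position logits:
# loss = (distillation_loss(s_start, t_start, start_positions)
#         + distillation_loss(s_end, t_end, end_positions)) / 2
```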
## Eval results
| | Exact Match | F1 |
|------------------|-------------|------|
| DistilBERT paper | 79.1 | 86.9 |
| Ours | 78.4 | 86.5 |
The scores were calculated using the `squad` metric from `datasets`.
### BibTeX entry and citation info
```bibtex
@misc{sanh2020distilbert,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
year={2020},
eprint={1910.01108},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
vebie91/dqn-SpaceInvadersNoFrameskip-v4
|
vebie91
| 2022-06-27T05:49:36Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-26T02:29:12Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 784.00 +/- 298.23
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga vebie91 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga vebie91
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 3000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
robingeibel/longformer-large-finetuned-big_patent
|
robingeibel
| 2022-06-27T05:04:39Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"longformer",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-21T07:29:34Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: robingeibel/longformer-large-finetuned-big_patent
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robingeibel/longformer-large-finetuned-big_patent
This model is a fine-tuned version of [robingeibel/longformer-large-finetuned-big_patent](https://huggingface.co/robingeibel/longformer-large-finetuned-big_patent) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1706
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
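As a minimal, unofficial sketch of how this checkpoint could be tried for masked-token prediction (the example sentence is invented; the repository ships TensorFlow weights, hence `framework="tf"`):
```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="robingeibel/longformer-large-finetuned-big_patent",
    framework="tf",
)
# Longformer uses the RoBERTa-style <mask> token.
print(fill("The present invention relates to a <mask> for charging electric vehicles."))
```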
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 79030, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.1706 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jcmc/q-Taxi-v3
|
jcmc
| 2022-06-27T04:21:20Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-27T04:21:13Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- metrics:
- type: mean_reward
value: 7.46 +/- 2.70
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
# load_from_hub and evaluate_agent are helper functions defined in the
# accompanying training notebook (Deep RL course utilities), not a pip package.
model = load_from_hub(repo_id="jcmc/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
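For reference, the loaded Q-table can also be queried directly; a minimal sketch building on the objects created above (classic Gym API, where `reset()` returns the state):
```python
import numpy as np

state = env.reset()
action = int(np.argmax(model["qtable"][state]))  # greedy action for this state
state, reward, done, info = env.step(action)
```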
|
TheRensselaerIDEA/gpt2-large-vaccine-tweet-response
|
TheRensselaerIDEA
| 2022-06-27T03:22:42Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"arxiv:2204.04353",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-27T03:03:38Z |
---
license: mit
---
Base model: [gpt2-large](https://huggingface.co/gpt2-large)
Fine-tuned to generate responses on a dataset of [Vaccine public health tweets](https://github.com/TheRensselaerIDEA/generative-response-modeling). For more information about the dataset, task and training, see [our paper](https://arxiv.org/abs/2204.04353). This checkpoint corresponds to the lowest validation perplexity (2.82 at 2 epochs) seen during training. See Training metrics for Tensorboard logs.
For input format and usage examples, see our [COVID-19 public health tweet response model](https://huggingface.co/TheRensselaerIDEA/gpt2-large-covid-tweet-response).
|
neweasterns/wav2vec2-base-timit-demo-google-colab
|
neweasterns
| 2022-06-27T02:49:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-27T00:01:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5206
- Wer: 0.3388
## Model description
More information needed
## Intended uses & limitations
More information needed
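As a minimal, unofficial sketch of how the checkpoint could be used for transcription (the file path is a placeholder; audio is expected to be 16 kHz mono):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="neweasterns/wav2vec2-base-timit-demo-google-colab",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder
```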
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.5597 | 1.0 | 500 | 2.3415 | 0.9991 |
| 0.9759 | 2.01 | 1000 | 0.5556 | 0.5382 |
| 0.4587 | 3.01 | 1500 | 0.7690 | 0.4781 |
| 0.3156 | 4.02 | 2000 | 0.7994 | 0.4412 |
| 0.2272 | 5.02 | 2500 | 0.8948 | 0.4120 |
| 0.1921 | 6.02 | 3000 | 0.7065 | 0.3940 |
| 0.1618 | 7.03 | 3500 | 0.4333 | 0.3855 |
| 0.1483 | 8.03 | 4000 | 0.4232 | 0.3872 |
| 0.156 | 9.04 | 4500 | 0.4172 | 0.3749 |
| 0.1138 | 10.04 | 5000 | 0.4084 | 0.3758 |
| 0.1045 | 11.04 | 5500 | 0.4665 | 0.3623 |
| 0.0908 | 12.05 | 6000 | 0.4416 | 0.3684 |
| 0.0788 | 13.05 | 6500 | 0.4801 | 0.3659 |
| 0.0773 | 14.06 | 7000 | 0.4560 | 0.3583 |
| 0.0684 | 15.06 | 7500 | 0.4878 | 0.3610 |
| 0.0645 | 16.06 | 8000 | 0.4635 | 0.3567 |
| 0.0577 | 17.07 | 8500 | 0.5245 | 0.3548 |
| 0.0547 | 18.07 | 9000 | 0.5265 | 0.3639 |
| 0.0466 | 19.08 | 9500 | 0.5161 | 0.3546 |
| 0.0432 | 20.08 | 10000 | 0.5263 | 0.3558 |
| 0.0414 | 21.08 | 10500 | 0.4874 | 0.3500 |
| 0.0365 | 22.09 | 11000 | 0.5266 | 0.3472 |
| 0.0321 | 23.09 | 11500 | 0.5422 | 0.3458 |
| 0.0325 | 24.1 | 12000 | 0.5201 | 0.3428 |
| 0.0262 | 25.1 | 12500 | 0.5208 | 0.3398 |
| 0.0249 | 26.1 | 13000 | 0.5034 | 0.3429 |
| 0.0262 | 27.11 | 13500 | 0.5055 | 0.3396 |
| 0.0248 | 28.11 | 14000 | 0.5164 | 0.3404 |
| 0.0222 | 29.12 | 14500 | 0.5206 | 0.3388 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
ra-XOr/Unity-Pyramids
|
ra-XOr
| 2022-06-27T02:36:12Z | 33 | 1 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-06-27T02:16:39Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: ra-XOr/Unity-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
RUCAIBox/mvp-task-dialog
|
RUCAIBox
| 2022-06-27T02:28:25Z | 2 | 3 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T11:53:57Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Given the task dialog: Belief state [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example1"
- text: "Given the task dialog: Dialogue action [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example2"
- text: "Given the task dialog: System response [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest."
example_title: "Example3"
---
# MVP-task-dialog
The MVP-task-dialog model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-task-dialog is a prompt-based model in which MVP is further equipped with prompts pre-trained using labeled task-oriented dialogue system datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-task-dialog is specially designed for task-oriented dialogue tasks, such as MultiWOZ.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-task-dialog")
>>> inputs = tokenizer(
... "Given the task dialog: System response [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['What date and time would you like to go?']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mvp-summarization
|
RUCAIBox
| 2022-06-27T02:28:20Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"summarization",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T11:49:40Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
- summarization
pipeline_tag: text2text-generation
widget:
- text: "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons."
example_title: "Example1"
- text: "Summarize: Jorge Alfaro drove in two runs, Aaron Nola pitched seven innings of two-hit ball and the Philadelphia Phillies beat the Los Angeles Dodgers 2-1 Thursday, spoiling Clayton Kershaw's first start in almost a month. Hitting out of the No. 8 spot in the ..."
example_title: "Example2"
---
# MVP-summarization
The MVP-summarization model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-summarization is a prompt-based model in which MVP is further equipped with prompts pre-trained using labeled summarization datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-summarization is specially designed for summarization tasks, such as news summarization (CNN/DailyMail, XSum) and dialog summarization (SAMSum).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-summarization")
>>> inputs = tokenizer(
... "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Don't do it if these are your reasons"]
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mvp-story
|
RUCAIBox
| 2022-06-27T02:28:15Z | 9 | 3 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T11:55:25Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Given the story title: I think all public schools should have a uniform dress code."
example_title: "Example1"
- text: "Given the story title: My girlfriend and I decided to move to a new state. We packed everything in our cars and drove there."
example_title: "Example2"
---
# MVP-story
The MVP-story model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-story is a prompt-based model in which MVP is further equipped with prompts pre-trained using labeled story generation datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-story is specially designed for story generation tasks, such as ROCStories and WritingPrompts.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-story")
>>> inputs = tokenizer(
... "Given the story title: I think all public schools should have a uniform dress code.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs, max_length=1024)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['I think it would be a good idea to have uniform dress codes for all public schools. It would make it easier for students to dress appropriately.']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mvp-question-generation
|
RUCAIBox
| 2022-06-27T02:28:10Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T11:54:39Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing ."
example_title: "Example1"
- text: "Generate the question based on the answer: Arthur 's Magazine [X_SEP] Arthur 's Magazine ( 1844–1846 ) was an American literary periodical published in Philadelphia in the 19th century . First for Women is a woman 's magazine published by Bauer Media Group in the USA ."
example_title: "Example2"
---
# MVP-question-generation
The MVP-question-generation model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-question-generation is a prompt-based model in which MVP is further equipped with prompts pre-trained using labeled question generation datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-question-generation is specially designed for question generation tasks, such as SQuAD and CoQA.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-question-generation")
>>> inputs = tokenizer(
... "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['A bolo punch and a hook are both punches used in what sport?']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mvp-question-answering
|
RUCAIBox
| 2022-06-27T02:28:05Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T11:54:54Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Answer the following question: From which country did Angola achieve independence in 1975?"
example_title: "Example1"
- text: "Answer the following question: what is ce certified [X_SEP] The CE marking is the manufacturer's declaration that the product meets the requirements of the applicable EC directives. Officially, CE is an abbreviation of Conformite Conformité, europeenne Européenne Meaning. european conformity"
example_title: "Example2"
---
# MVP-question-answering
The MVP-question-answering model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-question-answering is a prompt-based model in which MVP is further equipped with prompts pre-trained using labeled question answering datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-question-answering is specially designed for question answering tasks, such as reading comprehension (SQuAD), conversational question answering (CoQA) and closed-book question-answering (Natural Questions).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-question-answering")
>>> inputs = tokenizer(
... "Answer the following question: From which country did Angola achieve independence in 1975?",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Portugal']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mvp-open-dialog
|
RUCAIBox
| 2022-06-27T02:28:00Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"conversational",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T11:53:44Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
- conversational
pipeline_tag: text2text-generation
widget:
- text: "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?"
example_title: "Example1"
- text: "Given the dialog: i used to scare for darkness [X_SEP] it feels like hitting to blank wall when i see the darkness [SEP] Oh ya? I don't really see how [SEP] dont you feel so.. its a wonder [SEP] I do actually hit blank walls a lot of times but i get by"
example_title: "Example2"
---
# MVP-open-dialog
The MVP-open-dialog model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-open-dialog is a prompt-based model in which MVP is further equipped with prompts pre-trained using labeled open dialogue system datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-open-dialog is specially designed for open dialogue system (conversation) tasks, such as chitchat (PersonaChat, DailyDialog), knowledge grounded conversation (Topical-Chat, Wizard of Wikipedia) and visual dialog (DSTC7-AVSD).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-open-dialog")
>>> inputs = tokenizer(
... "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['I did not know that. I did know that Tupac danced ballet in high school.']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mvp-multi-task
|
RUCAIBox
| 2022-06-27T02:27:55Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"summarization",
"conversational",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T16:04:07Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
- summarization
- conversational
pipeline_tag: text2text-generation
widget:
- text: "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons."
example_title: "Summarization"
- text: "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?"
example_title: "Dialog"
- text: "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man"
example_title: "Data-to-text"
- text: "Given the story title: I think all public schools should have a uniform dress code."
example_title: "Story Generation"
- text: "Answer the following question: From which country did Angola achieve independence in 1975?"
example_title: "Question Answering"
- text: "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing ."
example_title: "Question Generaion"
---
# MVP-multi-task
The MVP-multi-task model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-multi-task is a prompt-based model in which MVP is further equipped with prompts pre-trained using a mixture of labeled datasets. It is a variant (MVP+M) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP is specially designed for natural language generation and can be adapted to a wide range of generation tasks, including but not limited to summarization, data-to-text generation, open-ended dialogue system, story generation, question answering, question generation, task-oriented dialogue system, commonsense generation, paraphrase generation, text style transfer, and text simplification. Our model can also be adapted to natural language understanding tasks such as sequence classification and (extractive) question answering.
## Example
For summarization:
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-multi-task")
>>> inputs = tokenizer(
... "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Why You Shouldn't Quit Your Job"]
```
For data-to-text generation:
```python
>>> from transformers import MvpTokenizerFast, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizerFast.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-multi-task")
>>> inputs = tokenizer(
... "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mvp-data-to-text
|
RUCAIBox
| 2022-06-27T02:27:50Z | 38 | 4 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T11:53:26Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man"
example_title: "Example1"
- text: "Describe the following data: First Clearing | LOCATION | On NYS 52 1 Mi. Youngsville [SEP] On NYS 52 1 Mi. Youngsville | CITY_OR_TOWN | Callicoon, New York"
example_title: "Example2"
---
# MVP-data-to-text
The MVP-data-to-text model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MVP-data-to-text is a prompt-based model in which MVP is further equipped with prompts pre-trained using labeled data-to-text datasets. It is a variant (MVP+S) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a Transformer encoder-decoder architecture with layer-wise prompts.
MVP-data-to-text is specially designed for data-to-text generation tasks, such as KG-to-text generation (WebNLG, DART), table-to-text generation (WikiBio, ToTTo) and MR-to-text generation (E2E).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mvp-data-to-text")
>>> inputs = tokenizer(
... "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mtl-summarization
|
RUCAIBox
| 2022-06-27T02:27:34Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"summarization",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T12:01:19Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
- summarization
pipeline_tag: text2text-generation
widget:
- text: "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons."
example_title: "Example1"
- text: "Summarize: Jorge Alfaro drove in two runs, Aaron Nola pitched seven innings of two-hit ball and the Philadelphia Phillies beat the Los Angeles Dodgers 2-1 Thursday, spoiling Clayton Kershaw's first start in almost a month. Hitting out of the No. 8 spot in the ..."
example_title: "Example2"
---
# MTL-summarization
The MTL-summarization model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-summarization is supervised pre-trained using a mixture of labeled summarization datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-summarization is specially designed for summarization tasks, such as news summarization (CNN/DailyMail, XSum) and dialogue summarization (SAMSum).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-summarization")
>>> inputs = tokenizer(
... "Summarize: You may want to stick it to your boss and leave your job, but don't do it if these are your reasons.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["Don't do it if these are your reasons"]
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mtl-story
|
RUCAIBox
| 2022-06-27T02:27:29Z | 1 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T12:00:10Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Given the story title: I think all public schools should have a uniform dress code."
example_title: "Example1"
- text: "Given the story title: My girlfriend and I decided to move to a new state. We packed everything in our cars and drove there."
example_title: "Example2"
---
# MTL-story
The MTL-story model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-story is supervised pre-trained using a mixture of labeled story generation datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-story is specially designed for story generation tasks, such as ROCStories and WritingPrompts.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-story")
>>> inputs = tokenizer(
... "Given the story title: I think all public schools should have a uniform dress code.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs, max_length=1024)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
["I don't know about you, but I don't think it would be a good idea to have a uniform dress code in public schools. I think it's a waste of time and money. If you're going to have uniform dress codes, you need to make sure that the uniforms are appropriate for the school and that the students are comfortable in them. If they're not comfortable, then they shouldn't be allowed to wear them."]
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mtl-question-generation
|
RUCAIBox
| 2022-06-27T02:27:24Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T12:00:54Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing ."
example_title: "Example1"
- text: "Generate the question based on the answer: Arthur 's Magazine [X_SEP] Arthur 's Magazine ( 1844–1846 ) was an American literary periodical published in Philadelphia in the 19th century . First for Women is a woman 's magazine published by Bauer Media Group in the USA ."
example_title: "Example2"
---
# MTL-question-generation
The MTL-question-generation model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-question-generation is supervised pre-trained using a mixture of labeled question generation datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-question-generation is specially designed for question generation tasks, such as SQuAD and CoQA.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-question-generation")
>>> inputs = tokenizer(
... "Generate the question based on the answer: boxing [X_SEP] A bolo punch is a punch used in martial arts . A hook is a punch in boxing .",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['A bolo punch and a hook are both punches used in what sport?']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mtl-question-answering
|
RUCAIBox
| 2022-06-27T02:27:20Z | 29 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T12:00:27Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Answer the following question: From which country did Angola achieve independence in 1975?"
example_title: "Example1"
- text: "Answer the following question: what is ce certified [X_SEP] The CE marking is the manufacturer's declaration that the product meets the requirements of the applicable EC directives. Officially, CE is an abbreviation of Conformite Conformité, europeenne Européenne Meaning. european conformity"
example_title: "Example2"
---
# MTL-question-answering
The MTL-question-answering model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-question-answering is supervised pre-trained using a mixture of labeled question answering datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-question-answering is specially designed for question answering tasks, such as reading comprehension (SQuAD), conversational question answering (CoQA) and closed-book question-answering (Natural Questions).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-question-answering")
>>> inputs = tokenizer(
... "Answer the following question: From which country did Angola achieve independence in 1975?",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Portugal']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mtl-open-dialog
|
RUCAIBox
| 2022-06-27T02:27:15Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"conversational",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T12:02:35Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
- conversational
pipeline_tag: text2text-generation
widget:
- text: "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?"
example_title: "Example1"
- text: "Given the dialog: i used to scare for darkness [X_SEP] it feels like hitting to blank wall when i see the darkness [SEP] Oh ya? I don't really see how [SEP] dont you feel so.. its a wonder [SEP] I do actually hit blank walls a lot of times but i get by"
example_title: "Example2"
---
# MTL-open-dialog
The MTL-open-dialog model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-open-dialog is supervised pre-trained using a mixture of labeled open dialogue system datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-open-dialog is specially designed for open dialogue system (conversation) tasks, such as chitchat (PersonaChat, DailyDialog), knowledge grounded conversation (Topical-Chat, Wizard of Wikipedia) and visual dialog (DSTC7-AVSD).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-open-dialog")
>>> inputs = tokenizer(
... "Given the dialog: do you like dance? [SEP] Yes I do. Did you know Bruce Lee was a cha cha dancer?",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Yes he won the Hong Kong Cha Cha championship in 1958']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
RUCAIBox/mtl-data-to-text
|
RUCAIBox
| 2022-06-27T02:27:10Z | 259 | 28 |
transformers
|
[
"transformers",
"pytorch",
"mvp",
"text-generation",
"text2text-generation",
"en",
"arxiv:2206.12131",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-02T12:01:55Z |
---
license: apache-2.0
language:
- en
tags:
- text-generation
- text2text-generation
pipeline_tag: text2text-generation
widget:
- text: "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man"
example_title: "Example1"
- text: "Describe the following data: First Clearing | LOCATION | On NYS 52 1 Mi. Youngsville [SEP] On NYS 52 1 Mi. Youngsville | CITY_OR_TOWN | Callicoon, New York"
example_title: "Example2"
---
# MTL-data-to-text
The MTL-data-to-text model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-data-to-text is supervised pre-trained using a mixture of labeled data-to-text datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-data-to-text is specially designed for data-to-text generation tasks, such as KG-to-text generation (WebNLG, DART), table-to-text generation (WikiBio, ToTTo) and MR-to-text generation (E2E).
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-data-to-text")
>>> inputs = tokenizer(
... "Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
|
luomingshuang/icefall_asr_tal-csasr_pruned_transducer_stateless5
|
luomingshuang
| 2022-06-27T01:54:36Z | 0 | 3 | null |
[
"tensorboard",
"region:us"
] | null | 2022-06-23T04:14:43Z |
Note: This recipe was trained with the code from this PR: https://github.com/k2-fsa/icefall/pull/428
# Pre-trained Transducer-Stateless5 models for the TAL_CSASR dataset with icefall.
The model was trained on the far data of [TAL_CSASR](https://ai.100tal.com/dataset) with the scripts in [icefall](https://github.com/k2-fsa/icefall), based on the latest version of k2.
## Training procedure
The main repositories are listed below; we will update the training and decoding scripts as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
lhotse: https://github.com/lhotse-speech/lhotse
* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html and the lhotse guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should be fine. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the code from the PR mentioned above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Preparing data.
```
cd egs/tal_csasr/ASR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5"
./pruned_transducer_stateless5/train.py \
--world-size 6 \
--num-epochs 30 \
--start-epoch 1 \
--exp-dir pruned_transducer_stateless5/exp \
--lang-dir data/lang_char \
--max-duration 90
```
## Evaluation results
The decoding results (CER%) on TAL_CSASR (dev and test) are listed below:
|decoding-method | epoch(iter) | avg | dev | test |
|--|--|--|--|--|
|greedy_search | 30 | 24 | 7.49 | 7.58|
|modified_beam_search | 30 | 24 | 7.33 | 7.38|
|fast_beam_search | 30 | 24 | 7.32 | 7.42|
|greedy_search(use-averaged-model=True) | 30 | 24 | 7.30 | 7.39|
|modified_beam_search(use-averaged-model=True) | 30 | 24 | 7.15 | 7.22|
|fast_beam_search(use-averaged-model=True) | 30 | 24 | 7.18 | 7.27|
|greedy_search | 348000 | 30 | 7.46 | 7.54|
|modified_beam_search | 348000 | 30 | 7.24 | 7.36|
|fast_beam_search | 348000 | 30 | 7.25 | 7.39 |
The results, reported as Chinese CER (%) and English WER (%) respectively (zh: Chinese, en: English):
|decoding-method | epoch(iter) | avg | dev | dev_zh | dev_en | test | test_zh | test_en |
|--|--|--|--|--|--|--|--|--|
|greedy_search(use-averaged-model=True) | 30 | 24 | 7.30 | 6.48 | 19.19 |7.39| 6.66 | 19.13|
|modified_beam_search(use-averaged-model=True) | 30 | 24 | 7.15 | 6.35 | 18.95 | 7.22| 6.50 | 18.70 |
|fast_beam_search(use-averaged-model=True) | 30 | 24 | 7.18 | 6.39| 18.90 | 7.27| 6.55 | 18.77|
|
tjscollins/atari-dqn
|
tjscollins
| 2022-06-27T01:45:40Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-27T01:44:58Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 565.50 +/- 141.39
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga tjscollins -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga tjscollins
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Samiul/wav2vec2-large-xls-r-300m-turkish-colab
|
Samiul
| 2022-06-26T23:31:16Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-26T19:31:57Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-turkish-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-turkish-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3821
- Wer: 0.3208
## Model description
More information needed
## Intended uses & limitations
More information needed
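As a minimal usage sketch (not an official example from the author; assumes the repo includes the Wav2Vec2 processor files, and the audio path below is a placeholder for a 16 kHz mono Turkish clip):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an automatic-speech-recognition pipeline
asr = pipeline("automatic-speech-recognition", model="Samiul/wav2vec2-large-xls-r-300m-turkish-colab")

# Transcribe a local audio file (placeholder path)
print(asr("turkish_sample.wav")["text"])
```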
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9162 | 3.67 | 400 | 0.6340 | 0.6360 |
| 0.4033 | 7.34 | 800 | 0.4588 | 0.4911 |
| 0.1919 | 11.01 | 1200 | 0.4392 | 0.4460 |
| 0.1315 | 14.68 | 1600 | 0.4269 | 0.4270 |
| 0.0963 | 18.35 | 2000 | 0.4327 | 0.3834 |
| 0.0801 | 22.02 | 2400 | 0.3867 | 0.3643 |
| 0.0631 | 25.69 | 2800 | 0.3854 | 0.3441 |
| 0.0492 | 29.36 | 3200 | 0.3821 | 0.3208 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.10.3
|
sudo-s/exper_batch_16_e8
|
sudo-s
| 2022-06-26T22:18:36Z | 51 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-26T20:43:39Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: exper_batch_16_e8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper_batch_16_e8
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3951
- Accuracy: 0.9129
## Model description
More information needed
## Intended uses & limitations
More information needed
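As a minimal usage sketch (not an official example from the author; assumes the repo includes the ViT feature extractor config, and the image path below is a placeholder):
```python
from transformers import pipeline

# Load the fine-tuned ViT checkpoint as an image-classification pipeline
classifier = pipeline("image-classification", model="sudo-s/exper_batch_16_e8")

# Classify a local image (placeholder path) and show the top predictions
print(classifier("specimen.jpg")[:3])
```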
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Apex, opt level O1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.8115 | 0.16 | 100 | 3.7948 | 0.1862 |
| 3.1194 | 0.31 | 200 | 3.0120 | 0.3281 |
| 2.3703 | 0.47 | 300 | 2.4791 | 0.4426 |
| 2.07 | 0.63 | 400 | 2.1720 | 0.5 |
| 1.6847 | 0.78 | 500 | 1.7291 | 0.5956 |
| 1.3821 | 0.94 | 600 | 1.4777 | 0.6299 |
| 0.9498 | 1.1 | 700 | 1.2935 | 0.6681 |
| 0.8741 | 1.25 | 800 | 1.1353 | 0.7051 |
| 0.8875 | 1.41 | 900 | 0.9951 | 0.7448 |
| 0.7233 | 1.56 | 1000 | 0.9265 | 0.7487 |
| 0.6696 | 1.72 | 1100 | 0.8660 | 0.7625 |
| 0.7364 | 1.88 | 1200 | 0.8710 | 0.7579 |
| 0.3933 | 2.03 | 1300 | 0.7162 | 0.8038 |
| 0.3443 | 2.19 | 1400 | 0.6305 | 0.8300 |
| 0.3376 | 2.35 | 1500 | 0.6273 | 0.8315 |
| 0.3071 | 2.5 | 1600 | 0.5988 | 0.8319 |
| 0.2863 | 2.66 | 1700 | 0.6731 | 0.8153 |
| 0.3017 | 2.82 | 1800 | 0.6042 | 0.8315 |
| 0.2382 | 2.97 | 1900 | 0.5118 | 0.8712 |
| 0.1578 | 3.13 | 2000 | 0.4917 | 0.8736 |
| 0.1794 | 3.29 | 2100 | 0.5302 | 0.8631 |
| 0.1093 | 3.44 | 2200 | 0.5035 | 0.8635 |
| 0.1076 | 3.6 | 2300 | 0.5186 | 0.8674 |
| 0.1219 | 3.76 | 2400 | 0.4723 | 0.8801 |
| 0.1017 | 3.91 | 2500 | 0.5132 | 0.8712 |
| 0.0351 | 4.07 | 2600 | 0.4709 | 0.8728 |
| 0.0295 | 4.23 | 2700 | 0.4674 | 0.8824 |
| 0.0416 | 4.38 | 2800 | 0.4836 | 0.8805 |
| 0.0386 | 4.54 | 2900 | 0.4663 | 0.8828 |
| 0.0392 | 4.69 | 3000 | 0.4003 | 0.8990 |
| 0.0383 | 4.85 | 3100 | 0.4187 | 0.8948 |
| 0.0624 | 5.01 | 3200 | 0.4460 | 0.8874 |
| 0.0188 | 5.16 | 3300 | 0.4169 | 0.9029 |
| 0.0174 | 5.32 | 3400 | 0.4098 | 0.8951 |
| 0.0257 | 5.48 | 3500 | 0.4289 | 0.8951 |
| 0.0123 | 5.63 | 3600 | 0.4295 | 0.9029 |
| 0.0052 | 5.79 | 3700 | 0.4395 | 0.8994 |
| 0.0081 | 5.95 | 3800 | 0.4217 | 0.9082 |
| 0.0032 | 6.1 | 3900 | 0.4216 | 0.9056 |
| 0.0033 | 6.26 | 4000 | 0.4113 | 0.9082 |
| 0.0024 | 6.42 | 4100 | 0.4060 | 0.9102 |
| 0.0022 | 6.57 | 4200 | 0.4067 | 0.9090 |
| 0.0031 | 6.73 | 4300 | 0.4005 | 0.9113 |
| 0.0021 | 6.89 | 4400 | 0.4008 | 0.9129 |
| 0.0021 | 7.04 | 4500 | 0.3967 | 0.9113 |
| 0.0043 | 7.2 | 4600 | 0.3960 | 0.9121 |
| 0.0022 | 7.36 | 4700 | 0.3962 | 0.9125 |
| 0.0021 | 7.51 | 4800 | 0.3992 | 0.9121 |
| 0.002 | 7.67 | 4900 | 0.3951 | 0.9129 |
| 0.0023 | 7.82 | 5000 | 0.3952 | 0.9125 |
| 0.0021 | 7.98 | 5100 | 0.3952 | 0.9129 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.5.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kaisuke/finetuning-sentiment-model-3000-samples
|
kaisuke
| 2022-06-26T21:39:32Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-26T21:27:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8695652173913044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3120
- Accuracy: 0.87
- F1: 0.8696
## Model description
More information needed
## Intended uses & limitations
More information needed
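As a minimal usage sketch (not an official example from the author; note that predictions may come back as LABEL_0/LABEL_1 if no id2label mapping was saved with the checkpoint):
```python
from transformers import pipeline

# Load the fine-tuned DistilBERT checkpoint as a text-classification pipeline
classifier = pipeline("text-classification", model="kaisuke/finetuning-sentiment-model-3000-samples")

# Score a movie-review style sentence
print(classifier("This movie was a pleasant surprise from start to finish."))
```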
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sudo-s/exper_batch_16_e4
|
sudo-s
| 2022-06-26T20:38:11Z | 55 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-06-26T19:53:10Z |
---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: exper_batch_16_e4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# exper_batch_16_e4
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the sudo-s/herbier_mesuem1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3598
- Accuracy: 0.9059
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Apex, opt level O1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.7606 | 0.16 | 100 | 3.7839 | 0.1989 |
| 3.1072 | 0.31 | 200 | 3.0251 | 0.3285 |
| 2.4068 | 0.47 | 300 | 2.4380 | 0.4719 |
| 2.0881 | 0.63 | 400 | 2.0489 | 0.5412 |
| 1.6817 | 0.78 | 500 | 1.7968 | 0.6025 |
| 1.342 | 0.94 | 600 | 1.5044 | 0.6249 |
| 0.9343 | 1.1 | 700 | 1.1881 | 0.7132 |
| 0.9552 | 1.25 | 800 | 1.1064 | 0.7224 |
| 0.7265 | 1.41 | 900 | 0.9189 | 0.7768 |
| 0.6732 | 1.56 | 1000 | 0.9227 | 0.7606 |
| 0.5587 | 1.72 | 1100 | 0.7912 | 0.7903 |
| 0.6332 | 1.88 | 1200 | 0.7606 | 0.7945 |
| 0.3188 | 2.03 | 1300 | 0.6535 | 0.8288 |
| 0.3079 | 2.19 | 1400 | 0.5686 | 0.8577 |
| 0.2518 | 2.35 | 1500 | 0.5517 | 0.8577 |
| 0.2 | 2.5 | 1600 | 0.5277 | 0.8631 |
| 0.2032 | 2.66 | 1700 | 0.4841 | 0.8701 |
| 0.1555 | 2.82 | 1800 | 0.4578 | 0.8793 |
| 0.145 | 2.97 | 1900 | 0.4466 | 0.8755 |
| 0.0985 | 3.13 | 2000 | 0.4249 | 0.8867 |
| 0.0955 | 3.29 | 2100 | 0.3977 | 0.8932 |
| 0.0438 | 3.44 | 2200 | 0.3785 | 0.9036 |
| 0.0589 | 3.6 | 2300 | 0.3717 | 0.9017 |
| 0.0709 | 3.76 | 2400 | 0.3609 | 0.9052 |
| 0.0706 | 3.91 | 2500 | 0.3598 | 0.9059 |
### Framework versions
- Transformers 4.19.4
- Pytorch 1.5.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
shubhamsalokhe/distilgpt2-finetuned-wikitext2
|
shubhamsalokhe
| 2022-06-26T18:38:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-26T17:50:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6421
## Model description
More information needed
## Intended uses & limitations
More information needed
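As a minimal usage sketch (not an official example from the author; the prompt and generation length are illustrative):
```python
from transformers import pipeline

# Load the fine-tuned distilgpt2 checkpoint as a text-generation pipeline
generator = pipeline("text-generation", model="shubhamsalokhe/distilgpt2-finetuned-wikitext2")

# Sample a short continuation of an encyclopedia-style prompt
print(generator("The history of natural language processing", max_new_tokens=40)[0]["generated_text"])
```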
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7602 | 1.0 | 2334 | 3.6669 |
| 3.653 | 2.0 | 4668 | 3.6472 |
| 3.6006 | 3.0 | 7002 | 3.6421 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
p123/autotrain-my-sum-1040935781
|
p123
| 2022-06-26T18:02:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"zh",
"dataset:p123/autotrain-data-my-sum",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-26T15:19:08Z |
---
tags: autotrain
language: zh
widget:
- text: "I love AutoTrain 🤗"
datasets:
- p123/autotrain-data-my-sum
co2_eq_emissions: 326.52733725745725
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1040935781
- CO2 Emissions (in grams): 326.52733725745725
## Validation Metrics
- Loss: 1.9157543182373047
- Rouge1: 0.4843
- Rouge2: 0.0
- RougeL: 0.4843
- RougeLsum: 0.4843
- Gen Len: 10.9718
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/p123/autotrain-my-sum-1040935781
```
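Alternatively, a minimal local-inference sketch with `transformers` (assumptions: the repo bundles its tokenizer, as AutoTrain exports normally do; the Chinese input sentence is a placeholder and the generation settings are illustrative, not AutoTrain's validation settings):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the AutoTrain-exported mT5 summarization checkpoint
model_id = "p123/autotrain-my-sum-1040935781"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Summarize one document (placeholder Chinese input text)
inputs = tokenizer("这里是一段需要摘要的中文文本。", return_tensors="pt")
summary_ids = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```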
|
ivanlau/ppo-mlppolicy-LunarLander-v2
|
ivanlau
| 2022-06-26T16:39:24Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-26T16:38:42Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 0.39 +/- 42.53
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption, not confirmed against the files in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it; the filename is assumed, not confirmed
checkpoint = load_from_hub(repo_id="ivanlau/ppo-mlppolicy-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ryanblak/PPO-LunarLander-v2
|
ryanblak
| 2022-06-26T15:40:24Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-26T15:00:58Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 287.72 +/- 15.68
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption, not confirmed against the files in this repo):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it; the filename is assumed, not confirmed
checkpoint = load_from_hub(repo_id="ryanblak/PPO-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tsantosh7/Bailii-Roberta
|
tsantosh7
| 2022-06-26T15:09:54Z | 4 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"en",
"arxiv:1907.11692",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-26T12:54:46Z |
---
license: apache-2.0
tags:
- fill-mask
language:
- en
widget:
- text: "He carefully assessed the financial position of the <mask> disclosed within its accounts, including its pension scheme liabilities."
- text: "Moreover, she had chosen not to give <mask> and therefore had not provided any innocent explanation of her communications."
---
# Pre-trained Language Model for England and Wales Court of Appeal (Criminal Division) Decisions
## Introduction
The research for understanding the bias in criminal court decisions need the support of natural language processing tools.
The pre-trained language model has greatly improved the accuracy of text mining in general texts. At present, there is an urgent need for a pre-trained language model specifically for the automatic processing of court decision texts.
We used the text from the [Bailii website](https://www.bailii.org/ew/cases/EWCA/Crim/) as the training set. Based on the deep language model framework of RoBERTa, we constructed bailii-roberta pre-training language model by [transformers/run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py) and [transformers/mlm_wwm](https://github.com/huggingface/transformers/tree/main/examples/research_projects/mlm_wwm).
## How to use
### Huggingface Transformers
The `from_pretrained` method from [Huggingface Transformers](https://github.com/huggingface/transformers) can be used to fetch the bailii-roberta model directly online.
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("tsantosh7/bailii-roberta")
model = AutoModel.from_pretrained("tsantosh7/bailii-roberta")
```
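For the masked-word prediction shown in the widget examples, a fill-mask pipeline sketch (same model id as the snippet above; the example sentence is taken from the widget):
```python
from transformers import pipeline

# Load bailii-roberta as a fill-mask pipeline and predict the masked token
fill_mask = pipeline("fill-mask", model="tsantosh7/bailii-roberta")
preds = fill_mask("Moreover, she had chosen not to give <mask> and therefore had not provided any innocent explanation of her communications.")

# Show the top-3 candidate tokens with their scores
for p in preds[:3]:
    print(p["token_str"], round(p["score"], 3))
```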
### Download Models
- The model we provide is in `PyTorch` format.
### From Huggingface
- Download directly through Huggingface's official website.
- [tsantosh7/bailii-roberta](https://huggingface.co/tsantosh7/Bailii-Roberta/)
## Disclaimer
- The experimental results presented in the report only show performance under a specific dataset and hyperparameter combination, and do not fully characterize each model. The experimental results may vary with random seeds and computing equipment.
- **Users can use the model arbitrarily within the scope of the license, but we are not responsible for the direct or indirect losses caused by using the content of the project.**
## Acknowledgment
- bailii-roberta was trained based on [roberta-base](https://arxiv.org/abs/1907.11692).
|
allegro/herbert-large-cased
|
allegro
| 2022-06-26T14:18:54Z | 1,073 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"feature-extraction",
"herbert",
"pl",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
language: pl
tags:
- herbert
license: cc-by-4.0
---
# HerBERT
**[HerBERT](https://en.wikipedia.org/wiki/Zbigniew_Herbert)** is a BERT-based Language Model trained on Polish corpora
using Masked Language Modelling (MLM) and Sentence Structural Objective (SSO) with dynamic masking of whole words. For more details, please refer to: [HerBERT: Efficiently Pretrained Transformer-based Language Model for Polish](https://www.aclweb.org/anthology/2021.bsnlp-1.1/).
Model training and experiments were conducted with [transformers](https://github.com/huggingface/transformers) in version 2.9.
## Corpus
HerBERT was trained on six different corpora available for Polish language:
| Corpus | Tokens | Documents |
| :------ | ------: | ------: |
| [CCNet Middle](https://github.com/facebookresearch/cc_net) | 3243M | 7.9M |
| [CCNet Head](https://github.com/facebookresearch/cc_net) | 2641M | 7.0M |
| [National Corpus of Polish](http://nkjp.pl/index.php?page=14&lang=1)| 1357M | 3.9M |
| [Open Subtitles](http://opus.nlpl.eu/OpenSubtitles-v2018.php) | 1056M | 1.1M |
| [Wikipedia](https://dumps.wikimedia.org/) | 260M | 1.4M |
| [Wolne Lektury](https://wolnelektury.pl/) | 41M | 5.5k |
## Tokenizer
The training dataset was tokenized into subwords using a character level byte-pair encoding (``CharBPETokenizer``) with
a vocabulary size of 50k tokens. The tokenizer itself was trained with a [tokenizers](https://github.com/huggingface/tokenizers) library.
We kindly encourage you to use the ``Fast`` version of the tokenizer, namely ``HerbertTokenizerFast``.
## Usage
Example code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-large-cased")
model = AutoModel.from_pretrained("allegro/herbert-large-cased")
output = model(
**tokenizer.batch_encode_plus(
[
(
"A potem szedł środkiem drogi w kurzawie, bo zamiatał nogami, ślepy dziad prowadzony przez tłustego kundla na sznurku.",
"A potem leciał od lasu chłopak z butelką, ale ten ujrzawszy księdza przy drodze okrążył go z dala i biegł na przełaj pól do karczmy."
)
],
padding='longest',
add_special_tokens=True,
return_tensors='pt'
)
)
```
## License
CC BY 4.0
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{mroczkowski-etal-2021-herbert,
title = "{H}er{BERT}: Efficiently Pretrained Transformer-based Language Model for {P}olish",
author = "Mroczkowski, Robert and
Rybak, Piotr and
Wr{\'o}blewska, Alina and
Gawlik, Ireneusz",
booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing",
month = apr,
year = "2021",
address = "Kiyv, Ukraine",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2021.bsnlp-1.1",
pages = "1--10",
}
```
## Authors
The model was trained by [**Machine Learning Research Team at Allegro**](https://ml.allegro.tech/) and [**Linguistic Engineering Group at Institute of Computer Science, Polish Academy of Sciences**](http://zil.ipipan.waw.pl/).
You can contact us at: <a href="mailto:[email protected]">[email protected]</a>
|
Nikkisora/q-FrozenLake-v1-4x4-noSlippery
|
Nikkisora
| 2022-06-26T13:58:56Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-26T13:58:48Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="Nikkisora/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|