modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
---|---|---|---|---|---|---|---|---|---|
FreelancerFel/dqn-SpaceInvadersNoFrameskip-v4
|
FreelancerFel
| 2022-06-26T11:39:32Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-26T11:38:55Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 692.50 +/- 193.97
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga FreelancerFel -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
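The checkpoint can also be loaded directly with `huggingface_sb3`, without the RL Zoo scripts. A minimal sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# the filename is an assumption; check the repository's file list
checkpoint = load_from_hub(
    repo_id="FreelancerFel/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```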
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga FreelancerFel
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
shafin/distilbert-similarity-b32-3
|
shafin
| 2022-06-26T11:24:03Z | 4 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-06-26T11:23:53Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# shafin/distilbert-similarity-b32-3
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 3-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('shafin/distilbert-similarity-b32-3')
embeddings = model.encode(sentences)
print(embeddings)
```
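Because the embeddings are only 3-dimensional, similarities are cheap to compute. A minimal sketch using the sentence-transformers cosine-similarity helper (the example sentences are made up):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('shafin/distilbert-similarity-b32-3')
embeddings = model.encode(["How do I bake bread?",
                           "Instructions for baking bread",
                           "A history of quantum physics"])

# cosine similarity between the first sentence and the other two
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)  # the baking pair should score higher
```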
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=shafin/distilbert-similarity-b32-3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 56250 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.OnlineContrastiveLoss.OnlineContrastiveLoss`
Parameters of the fit() method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 5000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Dense({'in_features': 256, 'out_features': 32, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(4): Dense({'in_features': 32, 'out_features': 3, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
onlplab/alephbert-base
|
onlplab
| 2022-06-26T09:32:47Z | 65,559 | 17 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"language model",
"he",
"dataset:oscar",
"dataset:wikipedia",
"dataset:twitter",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
language:
- he
tags:
- language model
license: apache-2.0
datasets:
- oscar
- wikipedia
- twitter
---
# AlephBERT
## Hebrew Language Model
State-of-the-art language model for Hebrew.
Based on Google's BERT architecture [(Devlin et al. 2018)](https://arxiv.org/abs/1810.04805).
#### How to use
```python
from transformers import BertModel, BertTokenizerFast
alephbert_tokenizer = BertTokenizerFast.from_pretrained('onlplab/alephbert-base')
alephbert = BertModel.from_pretrained('onlplab/alephbert-base')
# if not finetuning - disable dropout
alephbert.eval()
```
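For masked-token prediction (the model's fill-mask pipeline tag), a minimal sketch; the example sentence ("Hebrew is a [MASK] language.") is an arbitrary placeholder:
```python
from transformers import pipeline

fill_mask = pipeline('fill-mask', model='onlplab/alephbert-base')
print(fill_mask('עברית היא שפה [MASK].'))
```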
## Training data
1. OSCAR [(Ortiz, 2019)](https://oscar-corpus.com/) Hebrew section (10 GB text, 20 million sentences).
2. Hebrew dump of [Wikipedia](https://dumps.wikimedia.org/hewiki/latest/) (650 MB text, 3 million sentences).
3. Hebrew Tweets collected from the Twitter sample stream (7 GB text, 70 million sentences).
## Training procedure
Trained on a DGX machine (8 V100 GPUs) using the standard Hugging Face training procedure.
Since the larger part of our training data is based on tweets, we decided to start by optimizing using the Masked Language Model loss only.
To optimize training time we split the data into 4 sections based on the maximum number of tokens:
1. num tokens < 32 (70M sentences)
2. 32 <= num tokens < 64 (12M sentences)
3. 64 <= num tokens < 128 (10M sentences)
4. 128 <= num tokens < 512 (1.5M sentences)
Each section was first trained for 5 epochs with an initial learning rate set to 1e-4. Then each section was trained for another 5 epochs with an initial learning rate set to 1e-5, for a total of 10 epochs.
Total training time was 8 days.
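A minimal sketch of this kind of length bucketing (the sentence list is a placeholder):
```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained('onlplab/alephbert-base')
boundaries = [32, 64, 128, 512]  # section limits listed above
buckets = {b: [] for b in boundaries}

sentences = ['...']  # the actual training corpus goes here
for sent in sentences:
    n = len(tokenizer.tokenize(sent))
    for b in boundaries:
        if n < b:
            buckets[b].append(sent)
            break
```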
|
romainlhardy/roberta-large-finetuned-ner
|
romainlhardy
| 2022-06-26T09:20:58Z | 122 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-26T08:07:48Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-large-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9476811355009077
- name: Recall
type: recall
value: 0.9663412992258499
- name: F1
type: f1
value: 0.9569202566452795
- name: Accuracy
type: accuracy
value: 0.990656929827253
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-ner
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0495
- Precision: 0.9477
- Recall: 0.9663
- F1: 0.9569
- Accuracy: 0.9907
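A minimal usage sketch with the token-classification pipeline (the example sentence is made up):
```python
from transformers import pipeline

ner = pipeline(
    'token-classification',
    model='romainlhardy/roberta-large-finetuned-ner',
    aggregation_strategy='simple',  # merge sub-word pieces into whole entities
)
print(ner('George Washington lived in Virginia.'))
```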
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.078 | 1.0 | 1756 | 0.0577 | 0.9246 | 0.9536 | 0.9389 | 0.9865 |
| 0.0382 | 2.0 | 3512 | 0.0528 | 0.9414 | 0.9620 | 0.9516 | 0.9890 |
| 0.021 | 3.0 | 5268 | 0.0495 | 0.9477 | 0.9663 | 0.9569 | 0.9907 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kidzy/distilbert-base-uncased-finetuned-cola
|
kidzy
| 2022-06-26T09:03:19Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-26T08:42:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5443893754588841
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7548
- Matthews Correlation: 0.5444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5303 | 1.0 | 535 | 0.5510 | 0.3636 |
| 0.3527 | 2.0 | 1070 | 0.5543 | 0.4886 |
| 0.2366 | 3.0 | 1605 | 0.5738 | 0.5311 |
| 0.1761 | 4.0 | 2140 | 0.7548 | 0.5444 |
| 0.128 | 5.0 | 2675 | 0.8436 | 0.5380 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
kidzy/distilbert-base-uncased-finetuned-emotion
|
kidzy
| 2022-06-26T08:19:59Z | 7 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-23T13:17:25Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9246037761691881
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2240
- Accuracy: 0.9245
- F1: 0.9246
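A minimal usage sketch with the text-classification pipeline (the example sentence is made up):
```python
from transformers import pipeline

classifier = pipeline('text-classification',
                      model='kidzy/distilbert-base-uncased-finetuned-emotion')
print(classifier('I am so happy this finally works!'))
```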
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8521 | 1.0 | 250 | 0.3285 | 0.904 | 0.9017 |
| 0.2546 | 2.0 | 500 | 0.2240 | 0.9245 | 0.9246 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
HKHKHKHK/bert-finetuned-squad
|
HKHKHKHK
| 2022-06-26T07:25:52Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-26T05:00:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
veb/twitch-bert-base-cased-finetuned
|
veb
| 2022-06-26T06:49:19Z | 12 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-22T05:06:19Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: veb/twitch-bert-base-cased-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# veb/twitch-bert-base-cased-finetuned
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0952
- Train Sparse Categorical Accuracy: 0.9647
- Validation Loss: 0.0359
- Validation Sparse Categorical Accuracy: 0.9881
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.2938 | 0.8775 | 0.1106 | 0.9602 | 0 |
| 0.1404 | 0.9514 | 0.1508 | 0.9523 | 1 |
| 0.0952 | 0.9647 | 0.0359 | 0.9881 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ashhyun/distilbert-base-uncased-finetuned-squad
|
ashhyun
| 2022-06-26T06:25:36Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-26T05:20:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.1563
- eval_runtime: 141.535
- eval_samples_per_second: 76.193
- eval_steps_per_second: 4.762
- epoch: 1.0
- step: 5533
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
hyan97/distilbert-base-uncased-finetuned-squad
|
hyan97
| 2022-06-26T05:55:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-26T03:31:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2094 | 1.0 | 8235 | 1.2174 |
| 0.9515 | 2.0 | 16470 | 1.1923 |
| 0.7687 | 3.0 | 24705 | 1.3517 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Tokenizers 0.12.1
|
lstynerl/M1a1
|
lstynerl
| 2022-06-26T02:15:46Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-11T03:32:13Z |
---
license: apache-2.0
---
|
dominguesm/stt_pt_quartznet15x5_ctc_small
|
dominguesm
| 2022-06-26T01:05:06Z | 8 | 4 |
nemo
|
[
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"CTC",
"QuartzNet",
"Transformer",
"NeMo",
"pytorch",
"pt",
"dataset:mozilla-foundation/common_voice_9_0",
"license:cc-by-4.0",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2022-06-18T15:01:50Z |
---
language:
- pt
license: cc-by-4.0
library_name: nemo
datasets:
- mozilla-foundation/common_voice_9_0
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- CTC
- QuartzNet
- Transformer
- NeMo
- pytorch
model-index:
- name: stt_pt_quartznet15x5_ctc_small
results:
- task:
type: automatic-speech-recognition
dataset:
type: common_voice
name: Common Voice Portuguese
config: clean
split: test
args:
language: pt
metrics:
- type: wer
value: 49.17
name: Test WER
- type: cer
value: 18.59
name: Test CER
---
## Model Overview
This model transcribes speech in the lower-case Portuguese alphabet along with spaces. It is a "small" version of the QuartzNet-CTC model.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [1], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained("dominguesm/stt_pt_quartznet15x5_ctc_small")
```
### Transcribing using Python
First, let's get a sample
```
wget https://github.com/DominguesM/stt_pt_quartznet15x5_ctc_small/raw/main/audios/common_voice_pt_25555332.mp3
```
Then simply do:
```
asr_model.transcribe(['common_voice_pt_25555332.mp3'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="dominguesm/stt_pt_quartznet15x5_ctc_small" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16 kHz mono-channel audio (WAV files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
This model is based on the QuartzNet architecture, a variant of Jasper that uses 1D time-channel separable convolutional layers in its convolutional residual blocks and is therefore smaller than Jasper models.
QuartzNet models take in audio segments and transcribe them to letter, byte pair, or word piece sequences.
## Training
All training scripts will be available at: [DominguesM/stt_pt_quartznet15x5_ctc_small](https://github.com/DominguesM/stt_pt_quartznet15x5_ctc_small)
### Datasets
The model was trained on a portion of the Common Voice 9.0 dataset in Portuguese, totaling 26 hours of audio.
* Mozilla Common Voice (v9.0)
## Performance
| Metric | Score |
| ------- | ----- |
| WER | 49% |
| CER | 18% |
The metrics were obtained using the code below.
**Attention**: the steps below must be performed after downloading the dataset (Mozilla Common Voice 9.0 PT) and following the audio pre-processing and `manifest` preparation steps in [`notebooks/Finetuning CTC model Portuguese.ipynb`](https://github.com/DominguesM/stt_pt_quartznet15x5_ctc_small).
```bash
$ wget -P scripts/ "https://raw.githubusercontent.com/NVIDIA/NeMo/v1.9.0/examples/asr/speech_to_text_eval.py"
$ wget -P scripts/ "https://raw.githubusercontent.com/NVIDIA/NeMo/v1.9.0/examples/asr/transcribe_speech.py"
$ python scripts/speech_to_text_eval.py \
pretrained_name="dominguesm/stt_pt_quartznet15x5_ctc_small" \
dataset_manifest="manifests/pt/commonvoice_test_manifest_processed.json" \
output_filename="./evaluation_transcripts.json" \
batch_size=32 \
amp=true \
use_cer=false
```
## Limitations
Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse on accented speech.
## Citation
If you use our work, please cite:
```bibtex
@misc{domingues2022quartznet15x15-small-portuguese,
title={Fine-tuned {Quartznet}-15x5 CTC small model for speech recognition in {P}ortuguese},
author={Domingues, Maicon},
howpublished={\url{https://huggingface.co/dominguesm/stt_pt_quartznet15x5_ctc_small}},
year={2022}
}
```
## References
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
|
anas-awadalla/opt-125m-squad
|
anas-awadalla
| 2022-06-25T23:56:38Z | 65 | 0 |
transformers
|
[
"transformers",
"pytorch",
"opt",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-05-19T23:01:14Z |
A facebook/opt-125m model trained on SQuAD for extractive question answering.
To use the model, format the input as follows:
"(Context Text)\nQuestion:(Question Text)\nAnswer:"
|
Forkits/MLAgents-Worm
|
Forkits
| 2022-06-25T22:26:13Z | 12 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Worm",
"region:us"
] |
reinforcement-learning
| 2022-06-25T22:26:06Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Worm
library_name: ml-agents
---
# **ppo** Agent playing **Worm**
This is a trained model of a **ppo** agent playing **Worm** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **play directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Worm
2. Enter your model_id: Forkits/MLAgents-Worm
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
robertodtg/wav2vec2-large-xls-r-300m-pt-colab
|
robertodtg
| 2022-06-25T21:25:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_9_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-24T11:52:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_9_0
model-index:
- name: wav2vec2-large-xls-r-300m-pt-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-pt-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_9_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2975
- Wer: 0.1736
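A minimal transcription sketch with the automatic-speech-recognition pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline('automatic-speech-recognition',
               model='robertodtg/wav2vec2-large-xls-r-300m-pt-colab')
print(asr('audio.wav'))  # placeholder path to a Portuguese speech recording
```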
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.179 | 0.49 | 400 | 1.4554 | 0.9349 |
| 0.7545 | 0.98 | 800 | 0.5594 | 0.5174 |
| 0.4485 | 1.47 | 1200 | 0.3964 | 0.3749 |
| 0.4118 | 1.96 | 1600 | 0.3547 | 0.3172 |
| 0.3282 | 2.45 | 2000 | 0.3372 | 0.3061 |
| 0.3199 | 2.94 | 2400 | 0.3466 | 0.2910 |
| 0.2847 | 3.44 | 2800 | 0.3651 | 0.3310 |
| 0.2713 | 3.93 | 3200 | 0.3509 | 0.3016 |
| 0.2414 | 4.42 | 3600 | 0.3451 | 0.2908 |
| 0.2473 | 4.91 | 4000 | 0.3253 | 0.2747 |
| 0.2168 | 5.4 | 4400 | 0.3243 | 0.2680 |
| 0.219 | 5.89 | 4800 | 0.3067 | 0.2540 |
| 0.196 | 6.38 | 5200 | 0.3268 | 0.2824 |
| 0.1934 | 6.87 | 5600 | 0.3252 | 0.2736 |
| 0.1808 | 7.36 | 6000 | 0.3422 | 0.2737 |
| 0.177 | 7.85 | 6400 | 0.3292 | 0.2707 |
| 0.1626 | 8.34 | 6800 | 0.3089 | 0.2524 |
| 0.1605 | 8.83 | 7200 | 0.3062 | 0.2471 |
| 0.1505 | 9.32 | 7600 | 0.3229 | 0.2474 |
| 0.1491 | 9.82 | 8000 | 0.3098 | 0.2491 |
| 0.1433 | 10.31 | 8400 | 0.3449 | 0.2681 |
| 0.1431 | 10.8 | 8800 | 0.3439 | 0.2532 |
| 0.1349 | 11.29 | 9200 | 0.3112 | 0.2413 |
| 0.1236 | 11.78 | 9600 | 0.3248 | 0.2378 |
| 0.1253 | 12.27 | 10000 | 0.3393 | 0.2394 |
| 0.1195 | 12.76 | 10400 | 0.3050 | 0.2336 |
| 0.1194 | 13.25 | 10800 | 0.3494 | 0.2550 |
| 0.1125 | 13.74 | 11200 | 0.3332 | 0.2395 |
| 0.1063 | 14.23 | 11600 | 0.3134 | 0.2365 |
| 0.1044 | 14.72 | 12000 | 0.3101 | 0.2303 |
| 0.0999 | 15.21 | 12400 | 0.3162 | 0.2248 |
| 0.0986 | 15.71 | 12800 | 0.3183 | 0.2260 |
| 0.0958 | 16.2 | 13200 | 0.3300 | 0.2279 |
| 0.0907 | 16.69 | 13600 | 0.3136 | 0.2260 |
| 0.0875 | 17.18 | 14000 | 0.3492 | 0.2203 |
| 0.0823 | 17.67 | 14400 | 0.3214 | 0.2259 |
| 0.0839 | 18.16 | 14800 | 0.3194 | 0.2145 |
| 0.0783 | 18.65 | 15200 | 0.3122 | 0.2180 |
| 0.0789 | 19.14 | 15600 | 0.3158 | 0.2127 |
| 0.0732 | 19.63 | 16000 | 0.3076 | 0.2109 |
| 0.0715 | 20.12 | 16400 | 0.3216 | 0.2150 |
| 0.0649 | 20.61 | 16800 | 0.2958 | 0.2051 |
| 0.0647 | 21.1 | 17200 | 0.3022 | 0.2014 |
| 0.0649 | 21.59 | 17600 | 0.3045 | 0.2033 |
| 0.0621 | 22.09 | 18000 | 0.3194 | 0.2035 |
| 0.0561 | 22.58 | 18400 | 0.3197 | 0.2022 |
| 0.0582 | 23.07 | 18800 | 0.3109 | 0.1978 |
| 0.0533 | 23.56 | 19200 | 0.3121 | 0.1932 |
| 0.0515 | 24.05 | 19600 | 0.3125 | 0.1939 |
| 0.0484 | 24.54 | 20000 | 0.3081 | 0.1908 |
| 0.0485 | 25.03 | 20400 | 0.3042 | 0.1896 |
| 0.0444 | 25.52 | 20800 | 0.3038 | 0.1886 |
| 0.0426 | 26.01 | 21200 | 0.2985 | 0.1868 |
| 0.0415 | 26.5 | 21600 | 0.3066 | 0.1858 |
| 0.0398 | 26.99 | 22000 | 0.3117 | 0.1828 |
| 0.0397 | 27.48 | 22400 | 0.2980 | 0.1795 |
| 0.0394 | 27.97 | 22800 | 0.2950 | 0.1791 |
| 0.0364 | 28.47 | 23200 | 0.3025 | 0.1773 |
| 0.0365 | 28.96 | 23600 | 0.3022 | 0.1747 |
| 0.0376 | 29.45 | 24000 | 0.2978 | 0.1738 |
| 0.0344 | 29.94 | 24400 | 0.2975 | 0.1736 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cu111
- Datasets 2.3.2
- Tokenizers 0.12.1
|
nlp-esg-scoring/bert-base-finetuned-esg-a4s
|
nlp-esg-scoring
| 2022-06-25T21:16:24Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-24T21:40:06Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: nlp-esg-scoring/bert-base-finetuned-esg-a4s
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nlp-esg-scoring/bert-base-finetuned-esg-a4s
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9437
- Validation Loss: 1.9842
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -812, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.9200 | 2.0096 | 0 |
| 1.9249 | 1.9926 | 1 |
| 1.9366 | 2.0100 | 2 |
| 1.9327 | 1.9814 | 3 |
| 1.9266 | 2.0152 | 4 |
| 1.9332 | 2.0519 | 5 |
| 1.9203 | 2.0437 | 6 |
| 1.9238 | 2.0118 | 7 |
| 1.9290 | 2.0019 | 8 |
| 1.9437 | 1.9842 | 9 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
cambridgeltl/simctg_rocstories
|
cambridgeltl
| 2022-06-25T19:33:13Z | 12 | 2 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2202.06417",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-02T19:09:14Z |
This model provides a GPT-2 language model trained with SimCTG on the ROCStories benchmark [(Mostafazadeh et al., 2016)](https://aclanthology.org/N16-1098.pdf) based on our paper [_A Contrastive Framework for Neural Text Generation_](https://arxiv.org/abs/2202.06417).
We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our [project repo](https://github.com/yxuansu/SimCTG#4-huggingface-style-tutorials-back-to-top). In the following, we illustrate a brief tutorial on how to use our approach to perform text generation.
## 1. Installation of SimCTG:
```bash
pip install simctg --upgrade
```
## 2. Initialize SimCTG Model:
```python
import torch
# load SimCTG language model
from simctg.simctggpt import SimCTGGPT
model_name = r'cambridgeltl/simctg_rocstories'
model = SimCTGGPT(model_name)
model.eval()
tokenizer = model.tokenizer
```
## 3. Prepare the Text Prefix:
```python
prompt = r"Accident in the Lab <|endoftext|>"
print ('Prefix is: {}'.format(prompt))
tokens = model.tokenizer.tokenize(prompt)
input_ids = model.tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.LongTensor(input_ids).view(1,-1)
```
## 4. Generate Text with Contrastive Search:
```python
beam_width, alpha, decoding_len = 5, 0.65, 45
output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width,
alpha=alpha, decoding_len=decoding_len)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(output).split(model.tokenizer.eos_token)[1].strip())
'''
Prefix is: Accident in the Lab <|endoftext|>
Output:
----------------------------------------------------------------------------------------------------
Tom went to work one day. He noticed a lab accident in the lab. Tom was worried about his safety at work.
Unfortunately the accident didn't go well. Tom wound up leaving early to get back on the job.
'''
```
For more details of our work, please refer to our main [project repo](https://github.com/yxuansu/SimCTG).
## 5. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!
```bibtex
@article{su2022contrastive,
title={A Contrastive Framework for Neural Text Generation},
author={Su, Yixuan and Lan, Tian and Wang, Yan and Yogatama, Dani and Kong, Lingpeng and Collier, Nigel},
journal={arXiv preprint arXiv:2202.06417},
year={2022}
}
```
|
cambridgeltl/simctg_lccc_dialogue
|
cambridgeltl
| 2022-06-25T19:21:55Z | 9 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2008.03946",
"arxiv:2202.06417",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
This model provides a Chinese GPT-2 language model trained with SimCTG on the LCCC benchmark [(Wang et al., 2020)](https://arxiv.org/pdf/2008.03946v2.pdf) based on our paper [_A Contrastive Framework for Neural Text Generation_](https://arxiv.org/abs/2202.06417).
We provide a detailed tutorial on how to apply SimCTG and Contrastive Search in our [project repo](https://github.com/yxuansu/SimCTG#4-huggingface-style-tutorials-back-to-top). In the following, we illustrate a brief tutorial on how to use our approach to perform text generation.
## 1. Installation of SimCTG:
```bash
pip install simctg --upgrade
```
## 2. Initialize SimCTG Model:
```python
import torch
# load SimCTG language model
from simctg.simctggpt import SimCTGGPT
model_name = r'cambridgeltl/simctg_lccc_dialogue'
model = SimCTGGPT(model_name)
model.eval()
tokenizer = model.tokenizer
eos_token = '[SEP]'
eos_token_id = tokenizer.convert_tokens_to_ids([eos_token])[0]
```
## 3. Prepare the Text Prefix:
```python
context_list = ['刺猬很可爱!以前别人送了只没养,味儿太大!', '是很可爱但是非常臭', '是啊,没办法养', '那个怎么养哦不会扎手吗']
prefix_text = eos_token.join(context_list).strip(eos_token) + eos_token
print ('Prefix is: {}'.format(prefix_text))
tokens = tokenizer.tokenize(prefix_text)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
input_ids = torch.LongTensor(input_ids).view(1,-1)
```
## 4. Generate Text with Contrastive Search:
```python
beam_width, alpha, decoding_len = 5, 0.6, 64
output = model.fast_contrastive_search(input_ids=input_ids, beam_width=beam_width, alpha=alpha,
decoding_len=decoding_len, end_of_sequence_token_id=eos_token_id,
early_stop=True)
print("Output:\n" + 100 * '-')
print(''.join(tokenizer.decode(output)))
'''
Prefix is: 刺猬很可爱!以前别人送了只没养,味儿太大![SEP]是很可爱但是非常臭[SEP]是啊,没办法养[SEP]那个怎么养哦不会扎手吗[SEP]
Output:
----------------------------------------------------------------------------------------------------
刺猬很可爱!以前别人送了只没养,味儿太大![SEP]是很可爱但是非常臭[SEP]是啊,没办法养[SEP]那个怎么养哦不会扎手吗[SEP]我觉得还好,就是有点臭
'''
```
For more details of our work, please refer to our main [project repo](https://github.com/yxuansu/SimCTG).
## 5. Citation:
If you find our paper and resources useful, please kindly leave a star and cite our paper. Thanks!
```bibtex
@article{su2022contrastive,
title={A Contrastive Framework for Neural Text Generation},
author={Su, Yixuan and Lan, Tian and Wang, Yan and Yogatama, Dani and Kong, Lingpeng and Collier, Nigel},
journal={arXiv preprint arXiv:2202.06417},
year={2022}
}
```
|
haritzpuerto/xtremedistil-l6-h256-uncased-squad_1.1
|
haritzpuerto
| 2022-06-25T19:13:22Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"QA",
"Question Answering",
"SQuAD",
"en",
"dataset:squad",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-06-25T18:54:25Z |
---
language:
- en
tags:
- QA
- Question Answering
- SQuAD
license: "mit"
datasets:
- squad
metrics:
- squad
model-index:
- name: xtremedistil-l6-h256-uncased
results:
- task:
type: question-answering # Required. Example: automatic-speech-recognition
name: Question Answering # Optional. Example: Speech Recognition
dataset:
type: squad # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
name: SQuAD # Required. A pretty name for the dataset. Example: Common Voice (French)
split: validation # Optional. Example: test
metrics:
- type: squad # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 62.66792809839168 # Required. Example: 20.90
name: SQuAD EM # Optional. Example: Test WER
config: exact_match # Optional. The name of the metric configuration used in `load_metric()`. Example: bleurt-large-512 in `load_metric("bleurt", "bleurt-large-512")`. See the `datasets` docs for more info: https://huggingface.co/docs/datasets/v2.1.0/en/loading#load-configurations
- type: squad # Required. Example: wer. Use metric id from https://hf.co/metrics
value: 74.99490608582015 # Required. Example: 20.90
name: SQuAD F1 # Optional. Example: Test WER
config: F1
---
microsoft/xtremedistil-l6-h256-uncased fine-tuned on SQuAD (https://huggingface.co/datasets/squad)
Hyperparameters:
- epochs: 1
- lr: 1e-5
- train batch size: 16
- optimizer: AdamW
- lr_scheduler: linear
- num warmup steps: 0
- max_length: 512
Results on the dev set:
- 'exact_match': 62.66792809839168
- 'f1': 74.99490608582015
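A minimal usage sketch with the question-answering pipeline (the example context and question are made up):
```python
from transformers import pipeline

qa = pipeline('question-answering',
              model='haritzpuerto/xtremedistil-l6-h256-uncased-squad_1.1')
result = qa(question='Where was Marie Curie born?',
            context='Marie Curie was born in Warsaw.')
print(result['answer'], result['score'])
```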
|
SusBioRes-UBC/dqn-SpaceInvadersNoFrameskip-v4
|
SusBioRes-UBC
| 2022-06-25T19:10:42Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-25T19:10:14Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 241.50 +/- 137.02
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga SusBioRes-UBC -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga SusBioRes-UBC
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
rpgz31/tiny-nfl
|
rpgz31
| 2022-06-25T18:59:14Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:bittensor",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-25T18:56:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- bittensor
metrics:
- accuracy
model-index:
- name: tiny-nfl
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: bittensor tiny.json
type: bittensor
args: tiny.json
metrics:
- name: Accuracy
type: accuracy
value: 0.15555555555555556
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-nfl
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the bittensor tiny.json dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4602
- Accuracy: 0.1556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
rpgz31/jibber
|
rpgz31
| 2022-06-25T18:00:33Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:bittensor",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-25T17:57:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- bittensor
metrics:
- accuracy
model-index:
- name: test-clm
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: bittensor train-v1.1.json
type: bittensor
args: train-v1.1.json
metrics:
- name: Accuracy
type: accuracy
value: 0.13872832369942195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-clm
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the bittensor train-v1.1.json dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5199
- Accuracy: 0.1387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
facebook/contriever-msmarco
|
facebook
| 2022-06-25T17:19:59Z | 232,932 | 24 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2112.09118",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
---
tags:
- feature-extraction
pipeline_tag: feature-extraction
---
This model is the fine-tuned version of the pre-trained Contriever model available at https://huggingface.co/facebook/contriever, following the approach described in [Towards Unsupervised Dense Information Retrieval with Contrastive Learning](https://arxiv.org/abs/2112.09118). The associated GitHub repository is available at https://github.com/facebookresearch/contriever.
## Usage (HuggingFace Transformers)
Using the model directly with Hugging Face Transformers requires adding a mean pooling operation to obtain a sentence embedding.
```python
import torch
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('facebook/contriever-msmarco')
model = AutoModel.from_pretrained('facebook/contriever-msmarco')
sentences = [
"Where was Marie Curie born?",
"Maria Sklodowska, later known as Marie Curie, was born on November 7, 1867.",
"Born in Paris on 15 May 1859, Pierre Curie was the son of Eugène Curie, a doctor of French Catholic origin from Alsace."
]
# Apply tokenizer
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
outputs = model(**inputs)
# Mean pooling
def mean_pooling(token_embeddings, mask):
token_embeddings = token_embeddings.masked_fill(~mask[..., None].bool(), 0.)
sentence_embeddings = token_embeddings.sum(dim=1) / mask.sum(dim=1)[..., None]
return sentence_embeddings
embeddings = mean_pooling(outputs[0], inputs['attention_mask'])
```
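A follow-up sketch, continuing from the snippet above: the query embedding can be scored against the passage embeddings with a dot product (the scoring choice is an assumption based on common dense-retrieval practice).
```python
# dot-product scores between the query (first sentence) and the two passages
scores = embeddings[0] @ embeddings[1:].T
print(scores)  # the passage about Marie Curie's birthplace should score highest
```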
|
eugenetanjc/wav2vec_test
|
eugenetanjc
| 2022-06-25T17:00:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-25T16:23:08Z |
---
tags:
- generated_from_trainer
model-index:
- name: wav2vec_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec_test
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
patrickvonplaten/opt_metaseq_6700m
|
patrickvonplaten
| 2022-06-25T15:56:09Z | 8 | 0 |
transformers
|
[
"transformers",
"opt",
"feature-extraction",
"opt_metasq",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-05-10T17:32:24Z |
---
tags:
- opt_metasq
---
# This repo lets you run the following checkpoint using facebookresearch/metaseq.
Do the following:
## 1. Install PyTorch
```
pip3 install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
```
## 2. Install Megatron
```
git clone https://github.com/patrickvonplaten/Megatron-LM.git
cd Megatron-LM
pip3 install six regex
pip3 install -e .
```
## 3. Install fairscale
```
git clone https://github.com/facebookresearch/fairscale.git
cd fairscale
git checkout prefetch_fsdp_params_simple
pip3 install -e .
```
## 4. Install metaseq
```
git clone https://github.com/patrickvonplaten/metaseq.git
cd metaseq
pip3 install -e .
```
## 5. Clone this repo (click top right on "How to clone")
## 6. Run the following:
```bash
cd <path/to/cloned/repo>
bash run.sh
```
|
ClassCat/gpt2-base-japanese-v2
|
ClassCat
| 2022-06-25T15:36:22Z | 16 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-04T02:30:34Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
widget:
- text: 天気予報によれば明日は
- text: 私の今日の昼飯は
- text: サッカー日本代表はベルギーに
- text: 日本人サッカー選手が W 杯で
---
## GPT2 Japanese base model version 2
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses the GPT2 base settings except for the vocabulary size.
### Tokenizer
Uses a BPE tokenizer with a vocabulary size of 60,000.
### Training Data
* [wiki40b/ja](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bja) (Japanese Wikipedia)
* Subset of [CC-100/ja](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data
### Usage
```python
from transformers import pipeline
generator = pipeline('text-generation', model='ClassCat/gpt2-base-japanese-v2')
generator("今度の連休の天気は", max_length=50, num_return_sequences=5)
```
## (Japanese description) GPT2 Japanese base model version 2
### Prerequisites
transformers==4.19.2
### Model architecture
This model uses the GPT2 base model settings (except for the vocabulary size).
### Tokenizer
Uses a BPE tokenizer with a vocabulary size of 60,000.
### Training Data
* [wiki40b/ja](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bja) (Japanese Wikipedia)
* Subset of [CC-100/ja](https://data.statmt.org/cc-100/): a monolingual dataset built from web-crawl data.
### Usage
```python
from transformers import pipeline
generator = pipeline('text-generation', model='ClassCat/gpt2-base-japanese-v2')
generator("今度の連休の天気は", max_length=50, num_return_sequences=5)
```
|
bousejin/xlm-roberta-base-finetuned-panx-de
|
bousejin
| 2022-06-25T14:52:35Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-25T05:19:20Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8620945214069894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
NikitaErmolaev/dqn-SpaceInvadersNoFrameskip-v4
|
NikitaErmolaev
| 2022-06-25T12:19:51Z | 3 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-25T12:19:14Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 598.00 +/- 147.67
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga NikitaErmolaev -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga NikitaErmolaev
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
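Outside the Zoo scripts, the checkpoint can also be loaded directly — a minimal sketch using `huggingface_sb3`, assuming the Zoo's usual `<algo>-<env>.zip` artifact name:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Assumed artifact name; check the repo's file list if loading fails.
checkpoint = load_from_hub(
    repo_id="NikitaErmolaev/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```
Note that evaluation needs the same Atari preprocessing used in training (see `env_wrapper` and `frame_stack` above).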
|
danieladejumo/dqn-SpaceInvadersNoFrameskip-v4
|
danieladejumo
| 2022-06-25T12:12:36Z | 1 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-25T11:31:44Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 618.50 +/- 194.46
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga danieladejumo -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 --no-render --n-timesteps 5000 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga danieladejumo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
traxes/ppo-LunarLander-v2
|
traxes
| 2022-06-25T10:31:58Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-25T09:31:43Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -817.34 +/- 267.34
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the common `<algo>-<env>.zip` convention and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed artifact name; check the repo's file list if loading fails.
checkpoint = load_from_hub(
    repo_id="traxes/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
transformersbook/xlm-roberta-base-finetuned-panx-all
|
transformersbook
| 2022-06-25T09:44:57Z | 7 | 4 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
datasets:
- wikiann
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: wikiann
type: wikiann
config: en
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.843189280620875
verified: true
- name: Precision
type: precision
value: 0.8410061269097046
verified: true
- name: Recall
type: recall
value: 0.8568527450211155
verified: true
- name: F1
type: f1
value: 0.8488554853827908
verified: true
- name: loss
type: loss
value: 0.6632214784622192
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the PAN-X dataset. The model is trained in Chapter 4: Multilingual Named Entity Recognition in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb).
It achieves the following results on the evaluation set:
- Loss: 0.1739
- F1: 0.8581
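For quick inference, the checkpoint works with the `pipeline` API — a minimal sketch (the example sentence is illustrative; labels follow the WikiANN NER scheme):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="transformersbook/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)
print(ner("Jeff Dean works at Google in Mountain View."))
```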
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2912 | 1.0 | 835 | 0.1883 | 0.8238 |
| 0.1548 | 2.0 | 1670 | 0.1738 | 0.8480 |
| 0.101 | 3.0 | 2505 | 0.1739 | 0.8581 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
kktoto/tiny_focal_v3
|
kktoto
| 2022-06-25T08:54:15Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-25T05:11:44Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tiny_focal_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny_focal_v3
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0023
- Precision: 0.6975
- Recall: 0.6822
- F1: 0.6898
- Accuracy: 0.9515
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.004 | 1.0 | 5561 | 0.0032 | 0.6900 | 0.6102 | 0.6477 | 0.9454 |
| 0.0032 | 2.0 | 11122 | 0.0028 | 0.6901 | 0.6406 | 0.6644 | 0.9477 |
| 0.0029 | 3.0 | 16683 | 0.0026 | 0.6956 | 0.6509 | 0.6725 | 0.9490 |
| 0.0025 | 4.0 | 22244 | 0.0025 | 0.6838 | 0.6764 | 0.6801 | 0.9493 |
| 0.0024 | 5.0 | 27805 | 0.0024 | 0.6954 | 0.6715 | 0.6832 | 0.9504 |
| 0.0023 | 6.0 | 33366 | 0.0024 | 0.7125 | 0.6524 | 0.6811 | 0.9512 |
| 0.0021 | 7.0 | 38927 | 0.0023 | 0.6999 | 0.6748 | 0.6872 | 0.9514 |
| 0.0019 | 8.0 | 44488 | 0.0024 | 0.6962 | 0.6820 | 0.6890 | 0.9513 |
| 0.0019 | 9.0 | 50049 | 0.0023 | 0.7005 | 0.6775 | 0.6888 | 0.9516 |
| 0.0018 | 10.0 | 55610 | 0.0023 | 0.6975 | 0.6822 | 0.6898 | 0.9515 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
YZzfDY/RICE-large
|
YZzfDY
| 2022-06-25T08:37:24Z | 1 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"pretraining",
"en",
"endpoints_compatible",
"region:us"
] | null | 2022-06-25T05:22:48Z |
---
language:
- en
pipeline_tag: fill-mask
widget:
- text: "Paris is the <mask> of France."
example_title: "Capital"
---
|
bousejin/xlm-roberta-base-finetuned-panx-en
|
bousejin
| 2022-06-25T06:48:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-25T06:32:26Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6900780379041249
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3909
- F1: 0.6901
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1446 | 1.0 | 50 | 0.6385 | 0.3858 |
| 0.5317 | 2.0 | 100 | 0.4248 | 0.6626 |
| 0.3614 | 3.0 | 150 | 0.3909 | 0.6901 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bousejin/xlm-roberta-base-finetuned-panx-fr
|
bousejin
| 2022-06-25T06:15:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-25T05:57:28Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.fr
metrics:
- name: F1
type: f1
value: 0.9241871401929781
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1013
- F1: 0.9242
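A minimal inference sketch that loads the checkpoint explicitly rather than through a pipeline (the French example sentence is illustrative):
```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "bousejin/xlm-roberta-base-finetuned-panx-fr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

inputs = tokenizer("Emmanuel Macron habite à Paris.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print([(tok, model.config.id2label[p]) for tok, p in zip(tokens, predictions)])
```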
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5667 | 1.0 | 191 | 0.2318 | 0.8415 |
| 0.2539 | 2.0 | 382 | 0.1428 | 0.8988 |
| 0.1739 | 3.0 | 573 | 0.1013 | 0.9242 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jwuthri/distilbert-base-uncased-finetuned-imdb
|
jwuthri
| 2022-06-25T05:46:38Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-25T02:21:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3811
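A loss of 2.3811 corresponds to a perplexity of roughly exp(2.3811) ≈ 10.8. A minimal usage sketch for the masked-language-modeling head (the example sentence is illustrative):
```python
import math
from transformers import pipeline

print(f"perplexity ≈ {math.exp(2.3811):.1f}")  # ≈ 10.8

unmasker = pipeline("fill-mask", model="jwuthri/distilbert-base-uncased-finetuned-imdb")
print(unmasker("This movie was an absolute [MASK]."))
```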
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7046 | 1.0 | 157 | 2.4782 |
| 2.5679 | 2.0 | 314 | 2.4108 |
| 2.5028 | 3.0 | 471 | 2.4121 |
| 2.4825 | 4.0 | 628 | 2.3589 |
| 2.4593 | 5.0 | 785 | 2.4074 |
| 2.4294 | 6.0 | 942 | 2.3742 |
| 2.4258 | 7.0 | 1099 | 2.3706 |
| 2.4152 | 8.0 | 1256 | 2.3315 |
| 2.409 | 9.0 | 1413 | 2.3809 |
| 2.3908 | 10.0 | 1570 | 2.3394 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Ritvik19/sentinet-v1
|
Ritvik19
| 2022-06-25T04:55:57Z | 0 | 0 |
sklearn
|
[
"sklearn",
"sentiment-analysis",
"en",
"region:us"
] | null | 2022-05-24T08:17:47Z |
---
language:
- en
tags:
- sentiment-analysis
- sklearn
---
## Overview
Sentinet V1 is a collection of models for thorough analysis of the sentiment and emotions of a given text.
The underlying algorithm is TF-IDF vectorization followed by logistic regression, as sketched below.
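A minimal sketch of that algorithm with scikit-learn (illustrative only — the vectorizer settings and toy training data are assumptions, not the shipped artifacts):
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# One such binary classifier is trained per sentiment class in the table below.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(["i love this", "i hate this", "this is fine"], [1, 0, 1])  # toy data
print(clf.predict_proba(["what a wonderful day"]))
```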
## Performance
sentiment_class | auroc_score
---|---:
sentiment_polarity | 95.04%
opinion | 70.64%
toxicity | 96.12%
toxicity__hate | 97.43%
toxicity__insult | 97.04%
toxicity__obscene | 98.44%
toxicity__sexual_explicit | 98.49%
toxicity__threat | 98.25%
emotion__anger | 86.36%
emotion__disgust | 85.15%
emotion__fear | 93.03%
emotion__guilt | 81.70%
emotion__humour | 97.69%
emotion__joy | 85.87%
emotion__no_emotion | 80.08%
emotion__sadness | 91.04%
emotion__shame | 84.19%
emotion__surprise | 87.55%
|
eugenetanjc/wav2vec_cv
|
eugenetanjc
| 2022-06-25T04:16:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-24T17:27:34Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec_cv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec_cv
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1760
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 7.1467 | 4.29 | 30 | 4.2173 | 1.0 |
| 6.8918 | 8.57 | 60 | 4.2004 | 1.0 |
| 5.4913 | 12.86 | 90 | 4.2007 | 1.0 |
| 5.3906 | 17.14 | 120 | 4.1765 | 1.0 |
| 4.9212 | 21.43 | 150 | 4.1714 | 1.0 |
| 4.3916 | 25.71 | 180 | 4.1811 | 1.0 |
| 5.2255 | 30.0 | 210 | 4.1633 | 1.0 |
| 4.501 | 34.29 | 240 | 4.2050 | 1.0 |
| 4.4328 | 38.57 | 270 | 4.1572 | 1.0 |
| 4.2136 | 42.86 | 300 | 4.1698 | 1.0 |
| 4.3353 | 47.14 | 330 | 4.1721 | 1.0 |
| 4.1805 | 51.43 | 360 | 4.1804 | 1.0 |
| 4.1695 | 55.71 | 390 | 4.1801 | 1.0 |
| 4.2978 | 60.0 | 420 | 4.1760 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
shuidun/test1
|
shuidun
| 2022-06-25T04:04:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-25T03:46:52Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: test1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test1
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
BigSalmon/TextbookInformalFormalEnglish
|
BigSalmon
| 2022-06-25T02:25:15Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-25T02:17:36Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/TextbookInformalFormalEnglish")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/TextbookInformalFormalEnglish")
```
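A minimal generation sketch for the prompt formats below (sampling settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("BigSalmon/TextbookInformalFormalEnglish")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/TextbookInformalFormalEnglish")

prompt = ("informal english: space is huge and needs to be explored.\n"
          "Translated into the Style of Abraham Lincoln:")
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    top_p=0.92,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```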
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicameral legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
|
justalittlebitmine/Blank
|
justalittlebitmine
| 2022-06-25T02:15:20Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2022-06-25T02:15:20Z |
---
license: cc-by-nc-sa-4.0
---
|
kangaroo927/test_auto_protocol
|
kangaroo927
| 2022-06-25T00:21:15Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-07T19:24:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test_auto_protocol
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_auto_protocol
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.19.1
- Pytorch 1.6.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
KukuyKukuev/bert-base-cased-wikitext2
|
KukuyKukuev
| 2022-06-24T22:55:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-24T22:15:23Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-cased-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-wikitext2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8574
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.0916 | 1.0 | 2346 | 7.0492 |
| 6.9039 | 2.0 | 4692 | 6.8751 |
| 6.8845 | 3.0 | 7038 | 6.8929 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
eesungkim/stt_kr_conformer_transducer_large
|
eesungkim
| 2022-06-24T22:11:28Z | 25 | 9 |
nemo
|
[
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"transducer",
"Conformer",
"Transformer",
"NeMo",
"pytorch",
"kr",
"dataset:Ksponspeech",
"arxiv:2005.08100",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-05-06T05:58:02Z |
---
language:
- kr
license: cc-by-4.0
library_name: nemo
datasets:
- Ksponspeech
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- transducer
- Conformer
- Transformer
- NeMo
- pytorch
model-index:
- name: stt_kr_conformer_transducer_large
results: []
---
## Model Overview
Conformer-Transducer (large) model for Korean automatic speech recognition, fine-tuned on the KsponSpeech corpus.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you have installed the latest PyTorch version.
```
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [1], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.ASRModel.from_pretrained("eesungkim/stt_kr_conformer_transducer_large")
```
### Transcribing using Python
First, let's get a sample
```
wget https://dldata-public.s3.us-east-2.amazonaws.com/sample-kor.wav
```
Then simply do:
```
asr_model.transcribe(['sample-kor.wav'])
```
### Transcribing many audio files
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py pretrained_name="eesungkim/stt_kr_conformer_transducer_large" audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
### Input
This model accepts 16000 KHz Mono-channel Audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
Conformer-Transducer model is an autoregressive variant of Conformer model [2] for Automatic Speech Recognition which uses Transducer loss/decoding. You may find more info on the detail of this model here: [Conformer-Transducer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html).
## Training
The model was fine-tuned from a pre-trained English model for several epochs.
There are several transcription and sub-word modeling approaches for Korean speech recognition. This model uses SentencePiece subwords over Hangul characters, based on phonetic transcription, built with the Google SentencePiece tokenizer [3].
### Datasets
All the models in this collection are trained on [Ksponspeech](https://aihub.or.kr/aidata/105/download) dataset, which is an open-domain dialog corpus recorded by 2,000 native Korean speakers in a controlled and quiet environment. The standard split dataset consists of 965 hours of training set, 4 hours of development set, 3 hours of test-clean, and 4 hours of test-other.
## Performance
Version | Tokenizer | eval_clean CER | eval_other CER | eval_clean WER | eval_other WER
--- | --- | --- | --- |--- |---
v1.7.0rc | SentencePiece Char | 6.94% | 7.38% | 19.49% | 22.73%
## Limitations
Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse for accented speech.
This model produces a spoken-form token sequence. If you need a written form, consider applying inverse text normalization.
## References
[1] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
[2] [Conformer: Convolution-augmented Transformer for Speech Recognition](https://arxiv.org/abs/2005.08100)
[3] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
|
philschmid/msmarco-distilbert-base-tas-b-onnx
|
philschmid
| 2022-06-24T21:39:55Z | 13 | 0 |
generic
|
[
"generic",
"onnx",
"text-classification",
"region:us"
] |
text-classification
| 2022-06-24T21:00:16Z |
---
library_name: generic
tags:
- text-classification
---
|
domenicrosati/BioM-ALBERT-xxlarge-finetuned-DAGPap22
|
domenicrosati
| 2022-06-24T19:54:01Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-24T13:25:01Z |
---
tags:
- text-classification
- generated_from_trainer
model-index:
- name: BioM-ALBERT-xxlarge-finetuned-DAGPap22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioM-ALBERT-xxlarge-finetuned-DAGPap22
This model is a fine-tuned version of [sultan/BioM-ALBERT-xxlarge](https://huggingface.co/sultan/BioM-ALBERT-xxlarge) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
sharanharsoor/RL-work-Try
|
sharanharsoor
| 2022-06-24T19:32:18Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-24T19:31:37Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -24.87 +/- 20.55
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption — check the repo's file list):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed artifact name following the common <algo>-<env>.zip convention.
checkpoint = load_from_hub(
    repo_id="sharanharsoor/RL-work-Try",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```
|
eugenetanjc/trained_french
|
eugenetanjc
| 2022-06-24T17:50:48Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-23T17:15:58Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: trained_french
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_french
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8493
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 6.2268 | 5.53 | 50 | 4.9813 | 1.0 |
| 5.724 | 11.11 | 100 | 4.8808 | 1.0 |
| 5.629 | 16.63 | 150 | 4.9001 | 1.0 |
| 5.3351 | 22.21 | 200 | 4.8457 | 1.0 |
| 5.2043 | 27.74 | 250 | 4.8386 | 1.0 |
| 5.1709 | 33.32 | 300 | 4.8647 | 1.0 |
| 5.065 | 38.84 | 350 | 4.8574 | 1.0 |
| 5.0685 | 44.42 | 400 | 4.8449 | 1.0 |
| 5.0584 | 49.95 | 450 | 4.8412 | 1.0 |
| 4.9626 | 55.53 | 500 | 4.8493 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
gballoccu/q-FrozenLake-v1-4x4-Slippery
|
gballoccu
| 2022-06-24T17:18:20Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-24T16:58:42Z |
---
tags:
- FrozenLake-v1-4x4
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-Slippery
results:
- metrics:
- type: mean_reward
value: 0.81 +/- 0.39
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4
type: FrozenLake-v1-4x4
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the
# Hugging Face Deep RL course notebook (Unit 2: Q-Learning).
model = load_from_hub(repo_id="gballoccu/q-FrozenLake-v1-4x4-Slippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
Servarr/bert-finetuned-radarr
|
Servarr
| 2022-06-24T16:40:53Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:movie_releases",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-06-24T09:52:33Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- movie_releases
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-radarr
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: movie_releases
type: movie_releases
args: default
metrics:
- name: Precision
type: precision
value: 0.9555421444377389
- name: Recall
type: recall
value: 0.9638798701298701
- name: F1
type: f1
value: 0.9596928982725529
- name: Accuracy
type: accuracy
value: 0.9817602584524263
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-radarr
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the movie_releases dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0731
- Precision: 0.9555
- Recall: 0.9639
- F1: 0.9597
- Accuracy: 0.9818
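A minimal inference sketch — the model tags the components of a movie release name (the example string is illustrative):
```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="Servarr/bert-finetuned-radarr",
    aggregation_strategy="simple",
)
print(tagger("The.Movie.2019.1080p.BluRay.x264-GROUP"))
```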
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0431 | 1.0 | 1191 | 0.1403 | 0.9436 | 0.9574 | 0.9504 | 0.9626 |
| 0.0236 | 2.0 | 2382 | 0.0881 | 0.9485 | 0.9560 | 0.9522 | 0.9694 |
| 0.0138 | 3.0 | 3573 | 0.0731 | 0.9555 | 0.9639 | 0.9597 | 0.9818 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Mraleksa/fine-tune-distilbert-exitru
|
Mraleksa
| 2022-06-24T15:29:02Z | 10 | 0 |
transformers
|
[
"transformers",
"tf",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-24T07:49:54Z |
First test model on the Hugging Face Hub.
|
philschmid/DistilBERT-Banking77
|
philschmid
| 2022-06-24T14:31:49Z | 20 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain",
"en",
"dataset:banking77",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-02T10:38:18Z |
---
tags: autotrain
language: en
widget:
- text: I am still waiting on my card?
datasets:
- banking77
model-index:
- name: BERT-Banking77
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: BANKING77
type: banking77
metrics:
- name: Accuracy
type: accuracy
value: 91.99
- name: Macro F1
type: macro-f1
value: 91.99
- name: Weighted F1
type: weighted-f1
value: 91.99
- task:
type: text-classification
name: Text Classification
dataset:
name: banking77
type: banking77
config: default
split: test
metrics:
- name: Accuracy
type: accuracy
value: 0.922077922077922
verified: true
- name: Precision Macro
type: precision
value: 0.9256326708783564
verified: true
- name: Precision Micro
type: precision
value: 0.922077922077922
verified: true
- name: Precision Weighted
type: precision
value: 0.9256326708783565
verified: true
- name: Recall Macro
type: recall
value: 0.922077922077922
verified: true
- name: Recall Micro
type: recall
value: 0.922077922077922
verified: true
- name: Recall Weighted
type: recall
value: 0.922077922077922
verified: true
- name: F1 Macro
type: f1
value: 0.9221617304411865
verified: true
- name: F1 Micro
type: f1
value: 0.922077922077922
verified: true
- name: F1 Weighted
type: f1
value: 0.9221617304411867
verified: true
- name: loss
type: loss
value: 0.31692808866500854
verified: true
co2_eq_emissions: 5.632805352029529
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 940131045
- CO2 Emissions (in grams): 5.632805352029529
## Validation Metrics
- Loss: 0.3392622470855713
- Accuracy: 0.9199410609037328
- Macro F1: 0.9199390885956755
- Micro F1: 0.9199410609037327
- Weighted F1: 0.9198140295005729
- Macro Precision: 0.9235531521509113
- Micro Precision: 0.9199410609037328
- Weighted Precision: 0.9228777883152248
- Macro Recall: 0.919570805773292
- Micro Recall: 0.9199410609037328
- Weighted Recall: 0.9199410609037328
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/philschmid/DistilBERT-Banking77
```
Or Python API:
```
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model_id = 'philschmid/DistilBERT-Banking77'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
classifier = pipeline('text-classification', tokenizer=tokenizer, model=model)
classifier('What is the base of the exchange rates?')
```
|
philschmid/habana-xlm-r-large-amazon-massive
|
philschmid
| 2022-06-24T13:38:20Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"optimum_habana",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"habana",
"dataset:AmazonScience/massive",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-20T14:16:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- habana
datasets:
- AmazonScience/massive
metrics:
- accuracy
- f1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# philschmid/habana-xlm-r-large-amazon-massive
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the AmazonScience/massive dataset.
It achieves the following results on the evaluation set:
## 8x HPU approx. 41min
**train results**
```bash
{'loss': 0.2651, 'learning_rate': 2.4e-05, 'epoch': 1.0}
{'loss': 0.1079, 'learning_rate': 1.8e-05, 'epoch': 2.0}
{'loss': 0.0563, 'learning_rate': 1.2e-05, 'epoch': 3.0}
{'loss': 0.0308, 'learning_rate': 6e-06, 'epoch': 4.0}
{'loss': 0.0165, 'learning_rate': 0.0, 'epoch': 5.0}
```
**total**
```bash
{'train_runtime': 3172.4502, 'train_samples_per_second': 127.028, 'train_steps_per_second': 1.986, 'train_loss': 0.09531746031746031, 'epoch': 5.0}
```
**eval results**
```bash
{'eval_loss': 0.3128528892993927, 'eval_accuracy': 0.9125852013210597, 'eval_f1': 0.9125852013210597, 'eval_runtime': 45.1795, 'eval_samples_per_second': 314.988, 'eval_steps_per_second': 4.936, 'epoch': 1.0}
{'eval_loss': 0.36222779750823975, 'eval_accuracy': 0.9134987000210807, 'eval_f1': 0.9134987000210807, 'eval_runtime': 29.8241, 'eval_samples_per_second': 477.165, 'eval_steps_per_second': 7.477, 'epoch': 2.0}
{'eval_loss': 0.3943144679069519, 'eval_accuracy': 0.9140608530672476, 'eval_f1': 0.9140608530672476, 'eval_runtime': 30.1085, 'eval_samples_per_second': 472.657, 'eval_steps_per_second': 7.407, 'epoch': 3.0}
{'eval_loss': 0.40938863158226013, 'eval_accuracy': 0.9158878504672897, 'eval_f1': 0.9158878504672897, 'eval_runtime': 30.4546, 'eval_samples_per_second': 467.286, 'eval_steps_per_second': 7.322, 'epoch': 4.0}
{'eval_loss': 0.4137658476829529, 'eval_accuracy': 0.9172932330827067, 'eval_f1': 0.9172932330827067, 'eval_runtime': 30.3464, 'eval_samples_per_second': 468.952, 'eval_steps_per_second': 7.348, 'epoch': 5.0}
```
# Environment
The training was run on a `DL1` instance on AWS using Habana Gaudi1 and `optimum`.
For more information, see https://github.com/philschmid/deep-learning-habana-huggingface
|
Lakshya/q-FrozenLake-v1-4x4-noSlippery
|
Lakshya
| 2022-06-24T13:10:38Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-24T13:10:30Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the
# Hugging Face Deep RL course notebook (Unit 2: Q-Learning).
model = load_from_hub(repo_id="Lakshya/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
dnouri-kipoi/pyt
|
dnouri-kipoi
| 2022-06-24T12:47:13Z | 0 | 0 | null |
[
"kipoi",
"region:us"
] | null | 2022-06-24T11:20:03Z |
---
tags:
- kipoi
---
Simple testing model for Kipoi/pytorch by Roman Kreuzhuber
|
ashraq/movielense_movie_model_cos_384
|
ashraq
| 2022-06-24T11:33:30Z | 0 | 0 |
keras
|
[
"keras",
"tf-keras",
"region:us"
] | null | 2022-06-24T11:33:18Z |
---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Model Plot
<details>
<summary>View Model Plot</summary>

</details>
|
RJuro/Da-HyggeBERT
|
RJuro
| 2022-06-24T11:09:39Z | 8 | 3 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"danish",
"sentiment",
"Maltehb/danish-bert-botxo",
"Helsinki-NLP/opus-mt-en-da",
"go-emotion",
"Certainly",
"da",
"dataset:go_emotions",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-19T17:41:42Z |
---
language: da
tags:
- danish
- bert
- sentiment
- text-classification
- Maltehb/danish-bert-botxo
- Helsinki-NLP/opus-mt-en-da
- go-emotion
- Certainly
license: cc-by-4.0
datasets:
- go_emotions
metrics:
- Accuracy
widget:
- text: "Det er så sødt af dig at tænke på andre på den måde ved du det?"
- text: "Jeg vil gerne have en playstation."
- text: "Jeg elsker dig"
- text: "Hvordan håndterer jeg min irriterende nabo?"
---
# Danish-Bert-GoÆmotion
Danish Go-Emotions classifier. [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) (uncased) fine-tuned on a translation of the [go_emotions](https://huggingface.co/datasets/go_emotions) dataset using [Helsinki-NLP/opus-mt-en-da](https://huggingface.co/Helsinki-NLP/opus-mt-en-da). Thus, performance is obviously dependent on the translation model.
## Training
- Translating the training data with MT: [Notebook](https://colab.research.google.com/github/RJuro/Da-HyggeBERT-finetuning/blob/main/HyggeBERT_translation_en_da.ipynb)
- Fine-tuning danish-bert-botxo: coming soon...
## Training Parameters:
```
Num examples = 189900
Num Epochs = 3
Train batch = 8
Eval batch = 8
Learning Rate = 3e-5
Warmup steps = 4273
Total optimization steps = 71125
```
## Loss
### Training loss

### Eval. loss
```
0.1178 (21100 examples)
```
## Using the model with `transformers`
Easiest use with `transformers` and `pipeline`:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline
model = AutoModelForSequenceClassification.from_pretrained('RJuro/Da-HyggeBERT')
tokenizer = AutoTokenizer.from_pretrained('RJuro/Da-HyggeBERT')
classifier = pipeline("sentiment-analysis", model=model, tokenizer=tokenizer)
classifier('jeg elsker dig')
```
`[{'label': 'kærlighed', 'score': 0.9634820818901062}]`
## Using the model with `simpletransformers`
```python
from simpletransformers.classification import MultiLabelClassificationModel
model = MultiLabelClassificationModel('bert', 'RJuro/Da-HyggeBERT')
predictions, raw_outputs = model.predict(df['text'])
```
|
gary109/ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53
|
gary109
| 2022-06-24T09:28:26Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"/workspace/asante/ai-light-dance_datasets/AI_Light_Dance.py",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-23T03:43:52Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- /workspace/asante/ai-light-dance_datasets/AI_Light_Dance.py
- generated_from_trainer
model-index:
- name: ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_singing2_ft_wav2vec2-large-xlsr-53
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the /WORKSPACE/ASANTE/AI-LIGHT-DANCE_DATASETS/AI_LIGHT_DANCE.PY - ONSET-SINGING2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7583
- Wer: 0.9386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 27.4755 | 1.0 | 112 | 23.2618 | 1.0 |
| 5.5145 | 2.0 | 224 | 5.2213 | 1.0 |
| 4.2211 | 3.0 | 336 | 4.1673 | 1.0 |
| 3.8386 | 4.0 | 448 | 3.8253 | 1.0 |
| 3.5531 | 5.0 | 560 | 3.6286 | 1.0 |
| 3.5215 | 6.0 | 672 | 3.4762 | 0.9864 |
| 3.3493 | 7.0 | 784 | 3.3549 | 0.9847 |
| 3.1264 | 8.0 | 896 | 3.1797 | 0.9759 |
| 2.7557 | 9.0 | 1008 | 2.8703 | 0.9865 |
| 2.6345 | 10.0 | 1120 | 2.6736 | 0.9970 |
| 2.4297 | 11.0 | 1232 | 2.5638 | 1.0337 |
| 2.3057 | 12.0 | 1344 | 2.3680 | 0.9839 |
| 2.1436 | 13.0 | 1456 | 2.2367 | 0.9648 |
| 2.0856 | 14.0 | 1568 | 2.1635 | 0.9586 |
| 2.0035 | 15.0 | 1680 | 2.0945 | 0.9645 |
| 1.9134 | 16.0 | 1792 | 2.0395 | 0.9630 |
| 1.9443 | 17.0 | 1904 | 2.0017 | 0.9401 |
| 1.8988 | 18.0 | 2016 | 1.9514 | 0.9493 |
| 1.8141 | 19.0 | 2128 | 1.9111 | 0.9475 |
| 1.8344 | 20.0 | 2240 | 1.8790 | 0.9395 |
| 1.7775 | 21.0 | 2352 | 1.8616 | 0.9503 |
| 1.7517 | 22.0 | 2464 | 1.8333 | 0.9433 |
| 1.7037 | 23.0 | 2576 | 1.8156 | 0.9372 |
| 1.7158 | 24.0 | 2688 | 1.7961 | 0.9482 |
| 1.7111 | 25.0 | 2800 | 1.7817 | 0.9422 |
| 1.69 | 26.0 | 2912 | 1.7819 | 0.9430 |
| 1.6889 | 27.0 | 3024 | 1.7721 | 0.9386 |
| 1.6546 | 28.0 | 3136 | 1.7647 | 0.9453 |
| 1.6542 | 29.0 | 3248 | 1.7653 | 0.9375 |
| 1.647 | 30.0 | 3360 | 1.7583 | 0.9386 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
humhealth/chroniccaremanagement
|
humhealth
| 2022-06-24T08:14:42Z | 0 | 0 | null |
[
"license:bsl-1.0",
"region:us"
] | null | 2022-06-24T08:14:24Z |
---
license: bsl-1.0
---
https://www.humhealth.com/chronic-care-management/
|
AlexChe/MLAgents-Pyramids
|
AlexChe
| 2022-06-24T08:12:24Z | 10 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2022-06-24T08:12:08Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Write your model_id: AlexChe/MLAgents-Pyramids
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
humhealth/remote-patientmonitoring
|
humhealth
| 2022-06-24T08:08:34Z | 0 | 1 | null |
[
"license:bsl-1.0",
"region:us"
] | null | 2022-06-24T08:07:20Z |
---
license: bsl-1.0
---
https://www.humhealth.com/remote-patient-monitoring/
https://www.humhealth.com/chronic-care-management/
|
wiselinjayajos/t5-end2end-questions-generation
|
wiselinjayajos
| 2022-06-24T08:04:22Z | 9 | 4 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wiselinjayajos/squad_modified_for_t5_qg",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-22T17:26:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wiselinjayajos/squad_modified_for_t5_qg
widget:
- text: "generate question: Python is developed by Guido Van Rossum and released in 1991.</s>"
model-index:
- name: t5-end2end-questions-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-end2end-questions-generation
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the squad_modified_for_t5_qg dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5789
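A minimal generation sketch, reusing the `generate question:` prefix from the widget example:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "wiselinjayajos/t5-end2end-questions-generation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "generate question: Python is developed by Guido Van Rossum and released in 1991.</s>"
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```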
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5879 | 0.34 | 100 | 1.9133 |
| 1.9688 | 0.68 | 200 | 1.7313 |
| 1.8513 | 1.02 | 300 | 1.6691 |
| 1.7459 | 1.36 | 400 | 1.6413 |
| 1.7206 | 1.69 | 500 | 1.6200 |
| 1.7026 | 2.03 | 600 | 1.6101 |
| 1.6447 | 2.37 | 700 | 1.5983 |
| 1.6402 | 2.71 | 800 | 1.5979 |
| 1.6332 | 3.05 | 900 | 1.5924 |
| 1.5953 | 3.39 | 1000 | 1.5877 |
| 1.5922 | 3.73 | 1100 | 1.5854 |
| 1.5832 | 4.07 | 1200 | 1.5830 |
| 1.5726 | 4.41 | 1300 | 1.5799 |
| 1.5587 | 4.75 | 1400 | 1.5789 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
jwang/tuned-t5
|
jwang
| 2022-06-24T06:18:32Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-06-24T06:16:12Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: jwang/tuned-t5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jwang/tuned-t5
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.6386
- Validation Loss: 3.3773
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.7547 | 3.4438 | 0 |
| 4.6135 | 3.4096 | 1 |
| 4.6386 | 3.3773 | 2 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1
|
gary109
| 2022-06-24T05:43:24Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-23T07:53:29Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v1
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0763
- Wer: 0.7344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1632 | 1.0 | 150 | 1.2007 | 0.9875 |
| 1.1615 | 2.0 | 300 | 1.1912 | 0.9875 |
| 1.1487 | 3.0 | 450 | 1.1942 | 0.9875 |
| 1.1207 | 4.0 | 600 | 1.1753 | 0.9875 |
| 1.0638 | 5.0 | 750 | 1.1345 | 0.8214 |
| 1.0174 | 6.0 | 900 | 1.1541 | 0.7665 |
| 0.9946 | 7.0 | 1050 | 1.0799 | 0.7716 |
| 0.9694 | 8.0 | 1200 | 1.0848 | 0.7418 |
| 0.9566 | 9.0 | 1350 | 1.0763 | 0.7344 |
| 0.9466 | 10.0 | 1500 | 1.0791 | 0.7240 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
eugenetanjc/wav2vec_mle
|
eugenetanjc
| 2022-06-24T05:42:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-24T02:12:55Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec_mle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec_mle
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3076
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 60
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 7.3604 | 3.33 | 30 | 4.4612 | 1.0 |
| 4.502 | 6.67 | 60 | 4.5906 | 1.0 |
| 4.2842 | 10.0 | 90 | 4.4217 | 1.0 |
| 4.3833 | 13.33 | 120 | 4.3967 | 1.0 |
| 4.2631 | 16.67 | 150 | 4.3469 | 1.0 |
| 4.3357 | 20.0 | 180 | 4.3372 | 1.0 |
| 4.3941 | 23.33 | 210 | 4.3187 | 1.0 |
| 4.393 | 26.67 | 240 | 4.2981 | 1.0 |
| 4.3619 | 30.0 | 270 | 4.3049 | 1.0 |
| 4.3849 | 33.33 | 300 | 4.3138 | 1.0 |
| 4.3186 | 36.67 | 330 | 4.3123 | 1.0 |
| 4.3196 | 40.0 | 360 | 4.3097 | 1.0 |
| 4.3212 | 43.33 | 390 | 4.3279 | 1.0 |
| 4.3108 | 46.67 | 420 | 4.3249 | 1.0 |
| 4.3112 | 50.0 | 450 | 4.3093 | 1.0 |
| 4.2994 | 53.33 | 480 | 4.3198 | 1.0 |
| 4.2958 | 56.67 | 510 | 4.3071 | 1.0 |
| 4.2905 | 60.0 | 540 | 4.3076 | 1.0 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
Rahulrr/language_model_en_he
|
Rahulrr
| 2022-06-24T05:31:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"en",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-24T05:28:35Z |
---
language:
- en
- he
tags:
- translation
license: apache-2.0
---
### en-he
* source group: English
* target group: Hebrew
* OPUS readme: [eng-heb](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): heb
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* download original weights: [opus+bt-2021-04-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus+bt-2021-04-13.zip)
* test set translations: [opus+bt-2021-04-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus+bt-2021-04-13.test.txt)
* test set scores: [opus+bt-2021-04-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus+bt-2021-04-13.eval.txt)
## Benchmarks
| testset | BLEU | chr-F | #sent | #words | BP |
|---------|-------|-------|-------|--------|----|
| Tatoeba-test.eng-heb | 37.8 | 0.601 | 10000 | 60359 | 1.000 |
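### Usage
A minimal translation sketch (assuming the standard MarianMT interface used by Tatoeba-Challenge checkpoints):
```python
from transformers import MarianMTModel, MarianTokenizer

model_id = "Rahulrr/language_model_en_he"
tokenizer = MarianTokenizer.from_pretrained(model_id)
model = MarianMTModel.from_pretrained(model_id)

batch = tokenizer(["How are you today?"], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```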
### System Info:
- hf_name: en-he
- source_languages: eng
- target_languages: heb
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-heb/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'he']
- src_constituents: ('English', {'eng'})
- tgt_constituents: ('Hebrew', {'heb'})
- src_multilingual: False
- tgt_multilingual: False
- long_pair: eng-heb
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus+bt-2021-04-13.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-heb/opus+bt-2021-04-13.test.txt
- src_alpha3: eng
- tgt_alpha3: heb
- chrF2_score: 0.601
- bleu: 37.8
- src_name: English
- tgt_name: Hebrew
- train_date: 2021-04-13 00:00:00
- src_alpha2: en
- tgt_alpha2: he
- prefer_old: False
- short_pair: en-he
- helsinki_git_sha: c4e978d8de47875b482653b423dcfe968979d7d5
- transformers_git_sha: 56b83cf049823ed074a655eceb28f31e2077c6eb
- port_machine: LAPIN4GLQ2G3
- port_time: 2022-06-22-19:47
|
iaanimashaun/distilgpt2-finetuned-wikitext2
|
iaanimashaun
| 2022-06-24T05:13:39Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-23T10:57:06Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6895
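A minimal generation sketch for this checkpoint (the prompt and sampling settings are illustrative):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "iaanimashaun/distilgpt2-finetuned-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The history of natural language processing", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=50,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```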
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7852 | 1.0 | 2334 | 3.6895 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
Saraswati/Stable_Baselines3
|
Saraswati
| 2022-06-24T05:03:51Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-24T05:02:47Z |
```python
import gym
import numpy as np

from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import DummyVecEnv, SubprocVecEnv
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.utils import set_random_seed


def make_env(env_id, rank, seed=0):
    """
    Utility function for multiprocessed env.

    :param env_id: (str) the environment ID
    :param rank: (int) index of the subprocess
    :param seed: (int) the initial seed for RNG
    """
    def _init():
        env = gym.make(env_id)
        env.seed(seed + rank)
        return env
    set_random_seed(seed)
    return _init


if __name__ == '__main__':
    env_id = "CartPole-v1"
    num_cpu = 4  # Number of processes to use
    # Create the vectorized environment
    env = SubprocVecEnv([make_env(env_id, i) for i in range(num_cpu)])

    # Stable Baselines provides you with the make_vec_env() helper
    # which does exactly the previous steps for you.
    # You can choose between `DummyVecEnv` (usually faster) and `SubprocVecEnv`
    # env = make_vec_env(env_id, n_envs=num_cpu, seed=0, vec_env_cls=SubprocVecEnv)

    model = PPO('MlpPolicy', env, verbose=1)
    model.learn(total_timesteps=25_000)

    obs = env.reset()
    for _ in range(1000):
        action, _states = model.predict(obs)
        obs, rewards, dones, info = env.step(action)
        env.render()
```
|
sharpcoder/wav2vec2_bjorn
|
sharpcoder
| 2022-06-24T04:24:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-23T02:53:37Z |
This project fine-tunes the facebook/wav2vec2 speech-to-text model on my voice specifically, for my own speech-to-text purposes.
|
sonalily/distilgpt2-finetuned-wikitext2
|
sonalily
| 2022-06-24T04:14:20Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-23T01:12:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-wikitext2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6429
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7607 | 1.0 | 2334 | 3.6664 |
| 3.6527 | 2.0 | 4668 | 3.6473 |
| 3.6015 | 3.0 | 7002 | 3.6429 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sijunhe/nezha-base-wwm
|
sijunhe
| 2022-06-24T03:55:20Z | 45 | 1 |
transformers
|
[
"transformers",
"pytorch",
"nezha",
"fill-mask",
"arxiv:1909.00204",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-19T09:36:26Z |
---
license: afl-3.0
---
**Please use 'Bert' related tokenizer classes and 'Nezha' related model classes**
[NEZHA: Neural Contextualized Representation for Chinese Language Understanding](https://arxiv.org/abs/1909.00204)
Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen and Qun Liu.
The original checkpoints can be found [here](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-PyTorch)
## Example Usage
```python
from transformers import BertTokenizer, NezhaModel
tokenizer = BertTokenizer.from_pretrained("sijunhe/nezha-base-wwm")
model = NezhaModel.from_pretrained("sijunhe/nezha-base-wwm")
text = "我爱北京天安门"
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
|
enoriega/rule_learning_margin_1mm_spanpred_attention
|
enoriega
| 2022-06-24T03:51:00Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"generated_from_trainer",
"dataset:enoriega/odinsynth_dataset",
"endpoints_compatible",
"region:us"
] | null | 2022-06-23T02:15:49Z |
---
tags:
- generated_from_trainer
datasets:
- enoriega/odinsynth_dataset
model-index:
- name: rule_learning_margin_1mm_spanpred_attention
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rule_learning_margin_1mm_spanpred_attention
This model is a fine-tuned version of [enoriega/rule_softmatching](https://huggingface.co/enoriega/rule_softmatching) on the enoriega/odinsynth_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3237
- Margin Accuracy: 0.8518
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2000
- total_train_batch_size: 8000
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Margin Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------:|
| 0.5768 | 0.16 | 20 | 0.5693 | 0.7577 |
| 0.4593 | 0.32 | 40 | 0.4338 | 0.8105 |
| 0.4219 | 0.48 | 60 | 0.3958 | 0.8218 |
| 0.3953 | 0.64 | 80 | 0.3809 | 0.8308 |
| 0.383 | 0.8 | 100 | 0.3684 | 0.8355 |
| 0.3781 | 0.96 | 120 | 0.3591 | 0.8396 |
| 0.354 | 1.12 | 140 | 0.3535 | 0.8420 |
| 0.3521 | 1.28 | 160 | 0.3491 | 0.8430 |
| 0.3533 | 1.44 | 180 | 0.3423 | 0.8466 |
| 0.344 | 1.6 | 200 | 0.3372 | 0.8472 |
| 0.3352 | 1.76 | 220 | 0.3345 | 0.8478 |
| 0.3318 | 1.92 | 240 | 0.3320 | 0.8487 |
| 0.3478 | 2.08 | 260 | 0.3286 | 0.8494 |
| 0.3329 | 2.24 | 280 | 0.3286 | 0.8505 |
| 0.3424 | 2.4 | 300 | 0.3262 | 0.8506 |
| 0.3463 | 2.56 | 320 | 0.3264 | 0.8512 |
| 0.3416 | 2.72 | 340 | 0.3247 | 0.8518 |
| 0.329 | 2.88 | 360 | 0.3247 | 0.8516 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ferzimo/dummy-model
|
ferzimo
| 2022-06-24T03:41:38Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"camembert",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-24T03:36:09Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: dummy-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dummy-model
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.7.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mwong/ernie-v2-fever-evidence-related
|
mwong
| 2022-06-24T03:40:05Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text classification",
"fact checking",
"en",
"dataset:mwong/fever-evidence-related",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-05-01T09:46:03Z |
---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/fever-evidence-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# FeverErnieV2
FeverErnieV2 is a classifier model that predicts whether evidence is related to a query claim. The model achieved an F1 score of 98.18% on the test dataset "mwong/fever-evidence-related". Using the pretrained ernie-v2-base model, the classifier head is trained on the Fever dataset.
|
mwong/albert-base-fever-claim-related
|
mwong
| 2022-06-24T03:34:53Z | 8 | 2 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"text classification",
"fact checking",
"en",
"dataset:mwong/fever-claim-related",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-20T12:49:48Z |
---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/fever-claim-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# FeverAlbert
FeverAlbert is a classifier model that predicts whether evidence is related to a query claim. The model achieved an F1 score of 88.33% on the test dataset "mwong/fever-claim-related". Using the pretrained albert-base-v2 model, the classifier head is trained on the Fever dataset.
|
mwong/roberta-base-fever-evidence-related
|
mwong
| 2022-06-24T03:33:25Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"text classification",
"fact checking",
"en",
"dataset:mwong/fever-evidence-related",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-20T12:50:01Z |
---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/fever-evidence-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# FeverRoberta
FeverRoberta is a classifier model that predicts whether evidence is related to a query claim. The model achieved an F1 score of 92.67% on the test dataset "mwong/fever-evidence-related". Using the pretrained roberta-base model, the classifier head is trained on the Fever dataset.
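## Usage
A minimal sketch, assuming the checkpoint loads as a standard sequence-classification model and that the claim and evidence are passed as a sentence pair (mirroring the `</s></s>` separator in the widget example):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mwong/roberta-base-fever-evidence-related"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

claim = "Earth's changing climate is a critical issue."
evidence = "Legislation has been considered because of fears of climate change."
inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
# Class order follows the model config's id2label mapping.
print(probs)
```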
|
mwong/climatebert-base-f-climate-evidence-related
|
mwong
| 2022-06-24T03:32:39Z | 5 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"text classification",
"fact checking",
"en",
"dataset:mwong/fever-evidence-related",
"dataset:mwong/climate-evidence-related",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-20T12:58:32Z |
---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/fever-evidence-related
- mwong/climate-evidence-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# ClimateBert-related
ClimateBert-related is a classifier model that predicts whether climate-related evidence is related to a query claim. The model achieved an F1 score of 81.90% on the test dataset "mwong/climate-evidence-related". Using the pretrained ClimateBert-f model, the classifier head is trained on the Fever dataset and adapted to the climate domain using the ClimateFever dataset.
|
mwong/climatebert-base-f-fever-evidence-related
|
mwong
| 2022-06-24T03:31:36Z | 6 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"text classification",
"fact checking",
"en",
"dataset:mwong/fever-evidence-related",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-20T12:53:52Z |
---
language: en
license: mit
tags:
- text classification
- fact checking
datasets:
- mwong/fever-evidence-related
widget:
- text: "Earth’s changing climate is a critical issue and poses the risk of significant environmental, social and economic disruptions around the globe.</s></s>Because of fears of climate change and adverse effects of drilling explosions and oil spills in the Gulf of Mexico, legislation has been considered, and governmental regulations and orders have been issued, which, combined with the local economic and employment conditions caused by both, could materially adversely impact the oil and gas industries and the economic health of areas in which a significant number of our stores are located."
example_title: "Evidence related to claim"
metrics: f1
---
# FeverBert-related
FeverBert-related is a classifier model that predicts whether climate-related evidence is related to a query claim. The model achieved an F1 score of 91.23% on the test dataset "mwong/fever-evidence-related". Using the pretrained ClimateBert-f model, the classifier head is trained on the Fever dataset.
|
eugenetanjc/wav2vec2-base-timit-demo-google-colab
|
eugenetanjc
| 2022-06-24T02:12:22Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-18T09:33:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-google-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-google-colab
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
|
TencentGameMate/chinese-wav2vec2-base
|
TencentGameMate
| 2022-06-24T01:53:18Z | 625 | 24 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-06-02T06:17:07Z |
---
license: mit
---
Pretrained on the 10k-hour WenetSpeech L subset. More details in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
This model does not have a tokenizer as it was pretrained on audio alone.
In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data.
Python package:
transformers==4.16.2
```python
import torch
import torch.nn.functional as F
import soundfile as sf
from transformers import (
    Wav2Vec2FeatureExtractor,
    Wav2Vec2ForPreTraining,
    Wav2Vec2Model,
)
from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices

model_path = ""
wav_path = ""
mask_prob = 0.0
mask_length = 10

# The snippet assumes a GPU; .half() inference requires CUDA.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
model = Wav2Vec2Model.from_pretrained(model_path)
# for pretrain: Wav2Vec2ForPreTraining
# model = Wav2Vec2ForPreTraining.from_pretrained(model_path)

model = model.to(device)
model = model.half()
model.eval()

wav, sr = sf.read(wav_path)
input_values = feature_extractor(wav, sampling_rate=sr, return_tensors="pt").input_values
input_values = input_values.half()
input_values = input_values.to(device)

# for Wav2Vec2ForPreTraining
# batch_size, raw_sequence_length = input_values.shape
# sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length)
# mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.0, mask_length=2)
# mask_time_indices = torch.tensor(mask_time_indices, device=input_values.device, dtype=torch.long)

with torch.no_grad():
    outputs = model(input_values)
    last_hidden_state = outputs.last_hidden_state

    # for Wav2Vec2ForPreTraining
    # outputs = model(input_values, mask_time_indices=mask_time_indices, output_hidden_states=True)
    # last_hidden_state = outputs.hidden_states[-1]
```
|
TencentGameMate/chinese-hubert-base
|
TencentGameMate
| 2022-06-24T01:52:57Z | 1,878 | 36 |
transformers
|
[
"transformers",
"pytorch",
"hubert",
"feature-extraction",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-06-02T06:21:23Z |
---
license: mit
---
Pretrained on the 10k-hour WenetSpeech L subset. More details in [TencentGameMate/chinese_speech_pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
This model does not have a tokenizer as it was pretrained on audio alone.
In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data.
Python package:
transformers==4.16.2
```python
import torch
import torch.nn.functional as F
import soundfile as sf
from transformers import (
    Wav2Vec2FeatureExtractor,
    HubertModel,
)

model_path = ""
wav_path = ""

# The snippet assumes a GPU; .half() inference requires CUDA.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_path)
model = HubertModel.from_pretrained(model_path)

model = model.to(device)
model = model.half()
model.eval()

wav, sr = sf.read(wav_path)
input_values = feature_extractor(wav, sampling_rate=sr, return_tensors="pt").input_values
input_values = input_values.half()
input_values = input_values.to(device)

with torch.no_grad():
    outputs = model(input_values)
    last_hidden_state = outputs.last_hidden_state
```
|
tuhina13/q-FrozenLake-v1-4x4-noSlippery
|
tuhina13
| 2022-06-23T23:29:59Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-23T23:29:53Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
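# `load_from_hub` and `evaluate_agent` are helper functions defined in the
# Hugging Face Deep RL course notebook (not part of a published library).
import gym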
model = load_from_hub(repo_id="tuhina13/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
yellajaswanth/Test-LunarLander-PPO
|
yellajaswanth
| 2022-06-23T21:33:14Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-23T21:32:45Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 258.90 +/- 17.64
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
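A minimal loading sketch (the filename is a placeholder for the actual model zip in this repo):
```python
import gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="yellajaswanth/Test-LunarLander-PPO",
    filename="...",  # replace with the model zip filename in this repo
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```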
|
ArthurZ/opt-66000m
|
ArthurZ
| 2022-06-23T16:24:11Z | 0 | 0 | null |
[
"opt_metasq",
"region:us"
] | null | 2022-06-23T16:20:29Z |
---
tags:
- opt_metasq
---
# This repo lets you run the following checkpoint using facebookresearch/metaseq.
Do the following:
## 1. Install PyTorch
```
pip3 install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio==0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
```
## 2. Install Megatron
```
git clone https://github.com/patrickvonplaten/Megatron-LM.git
cd Megatron-LM
pip3 install six regex
pip3 install -e .
```
## 3. Install fairscale
```
git clone https://github.com/facebookresearch/fairscale.git
cd fairscale
git checkout prefetch_fsdp_params_simple
pip3 install -e .
```
## 4. Install metaseq
```
git clone https://github.com/patrickvonplaten/metaseq.git
cd metaseq
pip3 install -e .
```
## 5. Clone this repo (click top right on "How to clone")
## 6. Run the following:
```bash
cd <path/to/cloned/repo>
bash run.sh
```
|
404E/autotrain-formality-1026434913
|
404E
| 2022-06-23T15:19:21Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:404E/autotrain-data-formality",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-23T15:15:53Z |
---
tags: autotrain
language: en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- 404E/autotrain-data-formality
co2_eq_emissions: 7.300283563922049
---
# Model Trained Using AutoTrain
- Problem type: Single Column Regression
- Model ID: 1026434913
- CO2 Emissions (in grams): 7.300283563922049
## Validation Metrics
- Loss: 0.5467672348022461
- MSE: 0.5467672944068909
- MAE: 0.5851736068725586
- R2: 0.6883510493648173
- RMSE: 0.7394371628761292
- Explained Variance: 0.6885714530944824
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/404E/autotrain-formality-1026434913
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("404E/autotrain-formality-1026434913", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("404E/autotrain-formality-1026434913", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
robingeibel/roBERTa-base-finetuned-big_patent
|
robingeibel
| 2022-06-23T14:34:09Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"roberta",
"fill-mask",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-06-22T08:13:56Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: robingeibel/roBERTa-base-finetuned-big_patent
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# robingeibel/roBERTa-base-finetuned-big_patent
This model is a fine-tuned version of [robingeibel/roBERTa-base-finetuned-big_patent](https://huggingface.co/robingeibel/roBERTa-base-finetuned-big_patent) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1827
- Validation Loss: 1.0542
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 152946, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.1827 | 1.0542 | 0 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
|
transZ/M2M_Vi_Ba
|
transZ
| 2022-06-23T11:01:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"translation",
"vi",
"ba",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-06-22T15:26:10Z |
---
language:
- vi
- ba
tags:
- translation
datasets:
- custom dataset
metrics:
- bleu
- sacrebleu
---
# How to run the model
```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("transZ/M2M_Vi_Ba")
tokenizer = M2M100Tokenizer.from_pretrained("transZ/M2M_Vi_Ba")
tokenizer.src_lang = "vi"
vi_text = "Hôm nay ba đi chợ."
encoded_vi = tokenizer(vi_text, return_tensors="pt")
generated_tokens = model.generate(**encoded_vi, forced_bos_token_id=tokenizer.get_lang_id("ba"))
translate = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
print(translate)
```
|
aico/TrOCR-MNIST
|
aico
| 2022-06-23T10:38:57Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"vision-encoder-decoder",
"image-text-to-text",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2022-06-23T06:47:08Z |
Fine-tuned the ViT-based TrOCR model on the MNIST dataset.
Accuracy = 0.99525
ref:
http://yann.lecun.com/exdb/mnist/
https://github.com/microsoft/unilm/tree/master/trocr
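A minimal inference sketch (assuming the repo ships the TrOCR processor files; otherwise the processor from microsoft/trocr-base-handwritten is a reasonable substitute):
```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("aico/TrOCR-MNIST")
model = VisionEncoderDecoderModel.from_pretrained("aico/TrOCR-MNIST")

image = Image.open("digit.png").convert("RGB")  # hypothetical MNIST-style digit image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```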
|
Homayoon83/Carball
|
Homayoon83
| 2022-06-23T09:55:39Z | 0 | 0 | null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2022-06-23T09:55:38Z |
---
license: bigscience-bloom-rail-1.0
---
|
Barkavi/t5base_totto
|
Barkavi
| 2022-06-23T09:27:54Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
**Dataset**
ToTTo is an open-domain English Table-to-Text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table, a set of highlighted table cells, a page title and a section title as inputs, it produces a one-sentence description summarising the key details from the inputs. The dataset is available on Hugging Face (https://huggingface.co/datasets/totto).
**Model**
The pre-trained Text-to-Text "t5-base" model is fine-tuned on the Table-to-Text ToTTo dataset (downstream task) using the complete training split of around 120,761 examples. During fine-tuning for this downstream task, the BERTScore metric was used for evaluation instead of the standard BLEU metric.
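**Usage**
A minimal generation sketch; the linearized-table markup below follows the published ToTTo preprocessing format, which is an assumption about how this checkpoint expects its inputs:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Barkavi/t5base_totto")
model = AutoModelForSeq2SeqLM.from_pretrained("Barkavi/t5base_totto")

# Linearized table input (assumed ToTTo-style markup; must match the
# preprocessing actually used during fine-tuning).
input_text = (
    "<page_title> List of Governors of South Carolina </page_title> "
    "<section_title> Governors under the Constitution of 1868 </section_title> "
    "<table> <cell> Daniel Henry Chamberlain <col_header> Governor </col_header> </cell> "
    "<cell> December 1, 1874 <col_header> Took Office </col_header> </cell> </table>"
)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```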
|
Saraswati/TEST2ppo-LunarLander-v2
|
Saraswati
| 2022-06-23T08:54:21Z | 0 | 1 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-22T11:28:21Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 195.82 +/- 82.45
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

checkpoint = load_from_hub(
    repo_id="Saraswati/TEST2ppo-LunarLander-v2",
    filename="{MODEL FILENAME}.zip",
)
model = PPO.load(checkpoint)
```
|
MRF18/results
|
MRF18
| 2022-06-23T07:18:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-22T04:42:00Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [MRF18/results](https://huggingface.co/MRF18/results) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sun1638650145/dqn-SpaceInvadersNoFrameskip-v4
|
sun1638650145
| 2022-06-23T07:01:36Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-23T07:00:40Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 544.50 +/- 176.63
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga sun1638650145 -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga sun1638650145
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
huawei-noah/SPIRAL-base-MCT
|
huawei-noah
| 2022-06-23T03:29:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-17T09:19:20Z |
SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training
========
This is the pretrained model of **SPIRAL Base with Multi-Condition Training**, trained on 960 hours of LibriSpeech data plus the noise dataset from the [ICASSP 2021 DNS Challenge](https://github.com/microsoft/DNS-Challenge/tree/icassp2021-final) for noise robustness.
Citation
========
If you find SPIRAL useful in your research, please cite the following paper:
```
@inproceedings{huang2022spiral,
title={{SPIRAL}: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training},
author={Wenyong Huang and Zhenhe Zhang and Yu Ting Yeung and Xin Jiang and Qun Liu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=TBpg4PnXhYH}
}
```
|
huawei-noah/SPIRAL-Large
|
huawei-noah
| 2022-06-23T03:29:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-06-17T09:17:20Z |
SPIRAL: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training
========
This is the pretrained model of **SPIRAL LARGE**, trained on 60k hours of LibriLight data.
Citation
========
If you find SPIRAL useful in your research, please cite the following paper:
```
@inproceedings{huang2022spiral,
title={{SPIRAL}: Self-supervised Perturbation-Invariant Representation Learning for Speech Pre-Training},
author={Wenyong Huang and Zhenhe Zhang and Yu Ting Yeung and Xin Jiang and Qun Liu},
booktitle={International Conference on Learning Representations},
year={2022},
url={https://openreview.net/forum?id=TBpg4PnXhYH}
}
```
|
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53
|
gary109
| 2022-06-23T02:23:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"gary109/AI_Light_Dance",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-06-22T04:33:47Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- gary109/AI_Light_Dance
- generated_from_trainer
model-index:
- name: ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2034
- Wer: 0.9875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 160
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.5631 | 1.0 | 150 | 2.4894 | 1.0 |
| 1.9443 | 2.0 | 300 | 1.8861 | 1.0 |
| 1.7618 | 3.0 | 450 | 1.6731 | 1.0 |
| 1.2354 | 4.0 | 600 | 1.2471 | 0.9875 |
| 1.2333 | 5.0 | 750 | 1.2253 | 0.9875 |
| 1.2037 | 6.0 | 900 | 1.2168 | 0.9875 |
| 1.2184 | 7.0 | 1050 | 1.2120 | 0.9875 |
| 1.1932 | 8.0 | 1200 | 1.2080 | 0.9875 |
| 1.179 | 9.0 | 1350 | 1.2039 | 0.9875 |
| 1.1722 | 10.0 | 1500 | 1.2034 | 0.9875 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
BigSalmon/InformalToFormalLincoln52
|
BigSalmon
| 2022-06-23T02:02:34Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-06-23T01:38:07Z |
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln52")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln52")
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
infill: chrome extensions [MASK] accomplish everyday tasks.
Translated into the Style of Abraham Lincoln: chrome extensions ( expedite the ability to / unlock the means to more readily ) accomplish everyday tasks.
infill: at a time when nintendo has become inflexible, [MASK] consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
Translated into the Style of Abraham Lincoln: at a time when nintendo has become inflexible, ( stubbornly [MASK] on / firmly set on / unyielding in its insistence on ) consoles that are tethered to a fixed iteration, sega diligently curates its legacy of classic video games on handheld devices.
infill:
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicameral legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classical music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to a sentence or sentences.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
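The blocks above are few-shot prompt templates for a causal language model. A minimal sketch of querying such a model with transformers is below; the checkpoint name is a hypothetical placeholder, since this card's model id is not shown here.
```python
# Minimal sketch: feeding one of the prompt formats above to a causal LM.
# "your-org/informal-to-formal-lincoln" is a hypothetical placeholder, not a real id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-org/informal-to-formal-lincoln"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Reuse one of the card's templates verbatim, ending where the model should continue.
prompt = (
    "informal english: corn fields are all across illinois, visible once you leave chicago.\n"
    "Translated into the Style of Abraham Lincoln:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```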
|
Popppoogtcdcr/H
|
Popppoogtcdcr
| 2022-06-23T00:33:17Z | 0 | 0 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2022-06-23T00:33:17Z |
---
license: cc-by-nc-sa-4.0
---
|
tals/albert-base-vitaminc-fever
|
tals
| 2022-06-22T23:57:17Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"dataset:fever",
"dataset:glue",
"dataset:tals/vitaminc",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- fever
- glue
- tals/vitaminc
---
# Details
Model used in [Get Your Vitamin C! Robust Fact Verification with Contrastive Evidence](https://aclanthology.org/2021.naacl-main.52/) (Schuster et al., NAACL 2021).
For more details see: https://github.com/TalSchuster/VitaminC
When using this model, please cite the paper.
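A minimal usage sketch with transformers is below. The claim/evidence input order and the label names are assumptions; check the model config's `id2label` for the actual mapping before relying on it.
```python
# Minimal sketch: scoring a claim against evidence with this checkpoint.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "tals/albert-base-vitaminc-fever"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

claim = "VitaminC contains over 400,000 claim-evidence pairs."
evidence = "We collect over 100,000 Wikipedia revisions ... to create a total of over 400,000 claim-evidence pairs."

# Sequence-pair classification; (claim, evidence) order is an assumption here.
inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```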
# BibTeX entry and citation info
```bibtex
@inproceedings{schuster-etal-2021-get,
title = "Get Your Vitamin {C}! Robust Fact Verification with Contrastive Evidence",
author = "Schuster, Tal and
Fisch, Adam and
Barzilay, Regina",
booktitle = "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
month = jun,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.naacl-main.52",
doi = "10.18653/v1/2021.naacl-main.52",
pages = "624--643",
abstract = "Typical fact verification models use retrieved written evidence to verify claims. Evidence sources, however, often change over time as more information is gathered and revised. In order to adapt, models must be sensitive to subtle differences in supporting evidence. We present VitaminC, a benchmark infused with challenging cases that require fact verification models to discern and adjust to slight factual changes. We collect over 100,000 Wikipedia revisions that modify an underlying fact, and leverage these revisions, together with additional synthetically constructed ones, to create a total of over 400,000 claim-evidence pairs. Unlike previous resources, the examples in VitaminC are contrastive, i.e., they contain evidence pairs that are nearly identical in language and content, with the exception that one supports a given claim while the other does not. We show that training using this design increases robustness{---}improving accuracy by 10{\%} on adversarial fact verification and 6{\%} on adversarial natural language inference (NLI). Moreover, the structure of VitaminC leads us to define additional tasks for fact-checking resources: tagging relevant words in the evidence for verifying the claim, identifying factual revisions, and providing automatic edits via factually consistent text generation.",
}
```
|
Dugerij/dqn-SpaceInvadersNoFrameskip-v4
|
Dugerij
| 2022-06-22T22:43:38Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-06-22T22:43:10Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 15.50 +/- 12.54
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Dugerij -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Dugerij
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', True),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
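For readers not using the RL Zoo, here is a minimal sketch of how these hyperparameters map onto a plain stable-baselines3 `DQN` constructor; environment setup is simplified and this is not the Zoo's exact training script.
```python
# Minimal sketch (not the RL Zoo script) wiring the hyperparameters above
# into stable-baselines3 directly. Depending on your SB3 version,
# optimize_memory_usage=True may need extra replay_buffer_kwargs, so it is omitted here.
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)  # applies AtariWrapper
env = VecFrameStack(env, n_stack=4)  # frame_stack: 4

model = DQN(
    "CnnPolicy",
    env,
    batch_size=32,
    buffer_size=100_000,
    learning_rate=1e-4,
    learning_starts=100_000,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
    train_freq=4,
    gradient_steps=1,
    target_update_interval=1000,
)
model.learn(total_timesteps=10_000_000)  # n_timesteps: 1e7
```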
|