modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-31 18:27:55) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 543 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-31 18:24:13) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
asi/albert-act-base
|
asi
| 2022-10-21T13:26:29Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"albert_act",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1603.08983",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-10-11T20:33:26Z |
---
license: apache-2.0
language: en
datasets:
- wikipedia
- bookcorpus
model-index:
- name: asi/albert-act-base
results:
- task:
type: text-classification
name: CoLA
dataset:
type: glue
name: CoLA # General Language Understanding Evaluation benchmark (GLUE)
split: cola
metrics:
- type: matthews_correlation
value: 36.7
name: Matthew's Corr
- task:
type: text-classification
name: SST-2
dataset:
type: glue
name: SST-2 # The Stanford Sentiment Treebank
split: sst2
metrics:
- type: accuracy
value: 87.8
name: Accuracy
- task:
type: text-classification
name: MRPC
dataset:
type: glue
name: MRPC # Microsoft Research Paraphrase Corpus
split: mrpc
metrics:
- type: accuracy
value: 81.4
name: Accuracy
- type: f1
value: 86.5
name: F1
- task:
type: text-similarity
name: STS-B
dataset:
type: glue
name: STS-B # Semantic Textual Similarity Benchmark
split: stsb
metrics:
- type: spearmanr
value: 83.0
name: Spearman Corr
- type: pearsonr
value: 84.2
name: Pearson Corr
- task:
type: text-classification
name: QQP
dataset:
type: glue
name: QQP # Quora Question Pairs
split: qqp
metrics:
- type: f1
value: 68.5
name: F1
- type: accuracy
value: 87.7
name: Accuracy
- task:
type: text-classification
name: MNLI-m
dataset:
type: glue
name: MNLI-m # MultiNLI Matched
split: mnli_matched
metrics:
- type: accuracy
value: 79.9
name: Accuracy
- task:
type: text-classification
name: MNLI-mm
dataset:
type: glue
name: MNLI-mm # MultiNLI Mismatched
split: mnli_mismatched
metrics:
- type: accuracy
value: 79.2
name: Accuracy
- task:
type: text-classification
name: QNLI
dataset:
type: glue
name: QNLI # Question NLI
split: qnli
metrics:
- type: accuracy
value: 89.0
name: Accuracy
- task:
type: text-classification
name: RTE
dataset:
type: glue
name: RTE # Recognizing Textual Entailment
split: rte
metrics:
- type: accuracy
value: 63.0
name: Accuracy
- task:
type: text-classification
name: WNLI
dataset:
type: glue
name: WNLI # Winograd NLI
split: wnli
metrics:
- type: accuracy
value: 65.1
name: Accuracy
---
# Adaptive Depth Transformers
Implementation of the paper "How Many Layers and Why? An Analysis of the Model Depth in Transformers". In this study, we investigate the role of multiple layers in deep transformer models. We design a variant of ALBERT that dynamically adapts the number of layers for each token of the input.
## Model architecture
We augment a multi-layer transformer encoder with a halting mechanism, which dynamically adjusts the number of layers for each token.
We directly adapted this mechanism from Graves ([2016](#graves-2016)). At each iteration, we compute a probability for each token to stop updating its state.
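As an illustrative sketch of the halting rule only (not the actual pre-training code; `layer`, `halt_proj`, `max_layers` and `eps` are assumed names, and the remainder-weighted state averaging of the full ACT formulation is omitted):
```python
import torch

def act_forward(hidden, layer, halt_proj, max_layers=12, eps=0.01):
    """Schematic ACT-style halting (Graves, 2016): every token keeps being updated
    by the shared encoder layer until its cumulative halting probability exceeds
    1 - eps or the layer budget runs out."""
    batch, seq_len, _ = hidden.shape
    cumulative = torch.zeros(batch, seq_len)
    updates = torch.zeros(batch, seq_len)                  # layers applied per token
    running = torch.ones(batch, seq_len, dtype=torch.bool)
    for _ in range(max_layers):
        halt_p = torch.sigmoid(halt_proj(hidden)).squeeze(-1)  # per-token halting prob
        cumulative = cumulative + halt_p * running
        updates = updates + running.float()
        hidden = torch.where(running.unsqueeze(-1), layer(hidden), hidden)
        running = running & (cumulative < 1 - eps)
        if not running.any():
            break
    return hidden, updates
```
The per-token `updates` count is what the usage example below exposes as `outputs.updates`.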
## Model use
The architecture is not yet directly included in the Transformers library. The code used for pre-training is available in the following [github repository](https://github.com/AntoineSimoulin/adaptive-depth-transformers), so you first need to install the implementation:
```bash
pip install git+https://github.com/AntoineSimoulin/adaptive-depth-transformers
```
Then you can use the model directly.
```python
from act import AlbertActConfig, AlbertActModel, TFAlbertActModel
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('asi/albert-act-base')
model = AlbertActModel.from_pretrained('asi/albert-act-base')
_ = model.eval()
inputs = tokenizer("a lump in the middle of the monkeys stirred and then fell quiet .", return_tensors="pt")
outputs = model(**inputs)
outputs.updates
# tensor([[[[15., 9., 10., 7., 3., 8., 5., 7., 12., 10., 6., 8., 8., 9., 5., 8.]]]])
```
## Citations
### BibTeX entry and citation info
If you use our iterative transformer model for your scientific publication or your industrial applications, please cite the following [paper](https://aclanthology.org/2021.acl-srw.23/):
```bibtex
@inproceedings{simoulin-crabbe-2021-many,
title = "How Many Layers and Why? {A}n Analysis of the Model Depth in Transformers",
author = "Simoulin, Antoine and
Crabb{\'e}, Benoit",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-srw.23",
doi = "10.18653/v1/2021.acl-srw.23",
pages = "221--228",
}
```
### References
><div id="graves-2016">Alex Graves. 2016. <a href="https://arxiv.org/abs/1603.08983" target="_blank">Adaptive computation time for recurrent neural networks.</a> CoRR, abs/1603.08983.</div>
|
sd-concepts-library/cortana
|
sd-concepts-library
| 2022-10-21T12:32:55Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-21T12:32:43Z |
---
license: mit
---
### cortana on Stable Diffusion
This is the `<cortana>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:







|
huggingtweets/tszzl
|
huggingtweets
| 2022-10-21T12:32:06Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/tszzl/1666355521581/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1572784789291401216/1WrwslUF_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">roon</div>
<div style="text-align: center; font-size: 14px;">@tszzl</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from roon.
| Data | roon |
| --- | --- |
| Tweets downloaded | 3207 |
| Retweets | 779 |
| Short tweets | 375 |
| Tweets kept | 2053 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/nr9oggv1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tszzl's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/12g6sck7) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/12g6sck7/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/tszzl')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
reza-aditya/bert-finetuned-squad
|
reza-aditya
| 2022-10-21T12:22:10Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-10-21T09:57:54Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
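As a usage illustration (not part of the auto-generated card), a SQuAD-style checkpoint can typically be queried through the standard `question-answering` pipeline; the question and context strings below are made up:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="reza-aditya/bert-finetuned-squad")
result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```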
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Yinxing/ddpm-butterflies-128
|
Yinxing
| 2022-10-21T12:05:23Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-21T10:51:28Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
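A minimal sketch of how this pipeline could be run (assuming the standard `DDPMPipeline` API from 🤗 Diffusers; not part of the generated card):
```python
from diffusers import DDPMPipeline

# Load the pipeline from this repository and sample one image.
pipeline = DDPMPipeline.from_pretrained("Yinxing/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```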
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Yinxing/ddpm-butterflies-128/tensorboard?#scalars)
|
philschmid/setfit-ag-news-endpoint
|
philschmid
| 2022-10-21T11:04:26Z | 11 | 8 |
setfit
|
[
"setfit",
"pytorch",
"mpnet",
"endpoints-template",
"text-classification",
"arxiv:2209.11055",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T09:53:46Z |
---
license: mit
tags:
- setfit
- endpoints-template
- text-classification
inference: false
---
# SetFit AG News
This is a [SetFit](https://github.com/huggingface/setfit/tree/main) classifier fine-tuned on the [AG News](https://huggingface.co/datasets/ag_news) dataset.
The model was created following the [Outperform OpenAI GPT-3 with SetFit for text-classification](https://www.philschmid.de/getting-started-setfit) blog post by [Philipp Schmid](https://www.linkedin.com/in/philipp-schmid-a6a2bb196/).
The model achieves an accuracy of 0.87 on the test set and was trained with only `32` total examples (8 per class).
```bash
***** Running evaluation *****
model used: sentence-transformers/all-mpnet-base-v2
train dataset: 32 samples
accuracy: 0.8731578947368421
```
#### What is SetFit?
"SetFit" (https://arxiv.org/abs/2209.11055) is a new approach that can be used to create highly accurate text-classification models with limited labeled data. SetFit outperforms GPT-3 on 7 out of 11 tasks, while being 1600x smaller.
Check out the blog to learn more: [Outperform OpenAI GPT-3 with SetFit for text-classification](https://www.philschmid.de/getting-started-setfit)
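For reference, few-shot training with the `setfit` library roughly follows the pattern below. This is a sketch based on the SetFit API described in the blog post, not the exact script used for this model; the 8-examples-per-class sampling mirrors the setup above.
```python
from datasets import load_dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer, sample_dataset

# 8 labeled examples per class, 32 in total.
dataset = load_dataset("ag_news")
train_dataset = sample_dataset(dataset["train"], label_column="label", num_samples=8)

model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    eval_dataset=dataset["test"],
    loss_class=CosineSimilarityLoss,
    num_iterations=20,  # number of contrastive pairs generated per example
)
trainer.train()
print(trainer.evaluate())
```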
# Inference Endpoints
The model repository also implements a generic custom `handler.py` as an example for how to use `SetFit` models with [inference-endpoints](https://hf.co/inference-endpoints).
Code: https://huggingface.co/philschmid/setfit-ag-news-endpoint/blob/main/handler.py
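Schematically, a custom handler for Inference Endpoints implements the interface sketched below; this is an illustration of the pattern, not the exact contents of the linked `handler.py` (the label list and `predict_proba` call are assumptions).
```python
from typing import Any, Dict, List

from setfit import SetFitModel


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points to the model files inside the endpoint container.
        self.model = SetFitModel.from_pretrained(path)

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        text = data["inputs"]
        scores = self.model.predict_proba([text])[0]
        labels = ["World", "Sports", "Business", "Sci/Tech"]  # AG News classes (assumed order)
        return [{"label": label, "score": float(score)} for label, score in zip(labels, scores)]
```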
## Send requests with Python
We are going to use the `requests` library to send our requests (make sure you have it installed: `pip install requests`).
```python
import json
import requests as r
ENDPOINT_URL=""# url of your endpoint
HF_TOKEN=""
# payload samples
regular_payload = { "inputs": "Coming to The Rescue Got a unique problem? Not to worry: you can find a financial planner for every specialized need"}
# HTTP headers for authorization
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json"
}
# send request
response = r.post(ENDPOINT_URL, headers=headers, json=regular_payload)
classified = response.json()
print(classified)
# [ { "label": "World", "score": 0.12341519122860946 }, { "label": "Sports", "score": 0.11741269832494523 }, { "label": "Business", "score": 0.6124446065942992 }, { "label": "Sci/Tech", "score": 0.14672750385214603 } ]
```
**curl example**
```bash
curl https://YOURDOMAIN.us-east-1.aws.endpoints.huggingface.cloud \
-X POST \
-d '{"inputs": "Coming to The Rescue Got a unique problem? Not to worry: you can find a financial planner for every specialized need"}' \
-H "Authorization: Bearer XXX" \
-H "Content-Type: application/json"
```
|
cjbarrie/distilbert-base-uncased-finetuned-emotion
|
cjbarrie
| 2022-10-21T11:01:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-20T16:28:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ashish23993/t5-small-finetuned-xsum-a
|
ashish23993
| 2022-10-21T10:48:19Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-21T10:43:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-small-finetuned-xsum-a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum-a
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 8 | 2.2554 | 21.1449 | 9.0713 | 17.7765 | 20.1134 | 19.0 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
jayanta/vit-base-patch16-224-FV2-finetuned-memes
|
jayanta
| 2022-10-21T10:12:26Z | 43 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-21T09:34:49Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: vit-base-patch16-224-FV2-finetuned-memes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8647604327666152
- name: Precision
type: precision
value: 0.865115560305398
- name: Recall
type: recall
value: 0.8647604327666152
- name: F1
type: f1
value: 0.8646314523408155
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-FV2-finetuned-memes
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5458
- Accuracy: 0.8648
- Precision: 0.8651
- Recall: 0.8648
- F1: 0.8646
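As an illustration (not part of the generated card), the fine-tuned checkpoint can usually be queried with the standard `image-classification` pipeline; `meme.jpg` is a placeholder path:
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jayanta/vit-base-patch16-224-FV2-finetuned-memes")
print(classifier("meme.jpg"))
```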
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.994 | 0.99 | 20 | 0.7937 | 0.7257 | 0.7148 | 0.7257 | 0.7025 |
| 0.509 | 1.99 | 40 | 0.4634 | 0.8346 | 0.8461 | 0.8346 | 0.8303 |
| 0.2698 | 2.99 | 60 | 0.3851 | 0.8594 | 0.8619 | 0.8594 | 0.8586 |
| 0.1381 | 3.99 | 80 | 0.4186 | 0.8624 | 0.8716 | 0.8624 | 0.8634 |
| 0.0899 | 4.99 | 100 | 0.4038 | 0.8586 | 0.8624 | 0.8586 | 0.8594 |
| 0.0708 | 5.99 | 120 | 0.4170 | 0.8563 | 0.8612 | 0.8563 | 0.8580 |
| 0.0629 | 6.99 | 140 | 0.4414 | 0.8594 | 0.8599 | 0.8594 | 0.8585 |
| 0.0554 | 7.99 | 160 | 0.4617 | 0.8539 | 0.8563 | 0.8539 | 0.8550 |
| 0.0582 | 8.99 | 180 | 0.4712 | 0.8648 | 0.8667 | 0.8648 | 0.8651 |
| 0.0582 | 9.99 | 200 | 0.4753 | 0.8632 | 0.8647 | 0.8632 | 0.8636 |
| 0.0535 | 10.99 | 220 | 0.4653 | 0.8694 | 0.8690 | 0.8694 | 0.8684 |
| 0.0516 | 11.99 | 240 | 0.4937 | 0.8679 | 0.8692 | 0.8679 | 0.8681 |
| 0.0478 | 12.99 | 260 | 0.5109 | 0.8725 | 0.8741 | 0.8725 | 0.8718 |
| 0.0484 | 13.99 | 280 | 0.5144 | 0.8640 | 0.8660 | 0.8640 | 0.8647 |
| 0.0472 | 14.99 | 300 | 0.5249 | 0.8679 | 0.8688 | 0.8679 | 0.8678 |
| 0.043 | 15.99 | 320 | 0.5324 | 0.8709 | 0.8711 | 0.8709 | 0.8704 |
| 0.0473 | 16.99 | 340 | 0.5352 | 0.8648 | 0.8660 | 0.8648 | 0.8647 |
| 0.0502 | 17.99 | 360 | 0.5389 | 0.8694 | 0.8692 | 0.8694 | 0.8687 |
| 0.0489 | 18.99 | 380 | 0.5564 | 0.8648 | 0.8666 | 0.8648 | 0.8651 |
| 0.04 | 19.99 | 400 | 0.5458 | 0.8648 | 0.8651 | 0.8648 | 0.8646 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1
|
hezzze/a2c-AntBulletEnv-v0
|
hezzze
| 2022-10-21T09:34:26Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-21T09:33:16Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1658.74 +/- 204.55
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
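A minimal loading sketch, assuming the checkpoint is stored as a standard SB3 zip file (the filename below is a guess; check the repository files for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Hypothetical filename; adjust to the actual .zip in the repository.
checkpoint = load_from_hub(repo_id="hezzze/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```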
|
asapcreditrepairusa/Credit-Repair-Houston
|
asapcreditrepairusa
| 2022-10-21T09:33:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-21T09:33:11Z |
ASAP Credit Repair has two critical missions: 1) to provide an effective and inexpensive option for credit repair, and 2) to provide the best customer service experience along the way. We hope you choose [ASAP Credit Repair](https://asapcreditrepairusa.com) for your future credit repair needs.
|
teacookies/autotrain-21102022_cert_check_date-1828162855
|
teacookies
| 2022-10-21T08:43:12Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"autotrain",
"token-classification",
"unk",
"dataset:teacookies/autotrain-data-21102022_cert_check_date",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-21T08:30:18Z |
---
tags:
- autotrain
- token-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- teacookies/autotrain-data-21102022_cert_check_date
co2_eq_emissions:
emissions: 22.870496971868878
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 1828162855
- CO2 Emissions (in grams): 22.8705
## Validation Metrics
- Loss: 0.021
- Accuracy: 0.994
- Precision: 0.867
- Recall: 0.914
- F1: 0.890
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/teacookies/autotrain-21102022_cert_check_date-1828162855
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("teacookies/autotrain-21102022_cert_check_date-1828162855", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("teacookies/autotrain-21102022_cert_check_date-1828162855", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
kwdev2000/finetuning-sentiment-model-3000-samples
|
kwdev2000
| 2022-10-21T08:23:12Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-20T21:24:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.8533333333333334
- name: F1
type: f1
value: 0.8543046357615894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3398
- Accuracy: 0.8533
- F1: 0.8543
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp
|
nlp-waseda
| 2022-10-21T06:56:38Z | 1,666 | 5 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-15T06:04:06Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
mask_token: "[MASK]"
widget:
- text: "早稲田大学で自然言語処理を[MASK]する。"
---
# nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp
## Model description
This is a Japanese RoBERTa large model pretrained on Japanese Wikipedia and the Japanese portion of CC-100 with the maximum sequence length of 512.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-large-japanese-seq512-with-auto-jumanpp")
sentence = '早稲田大学で自然言語処理を[MASK]する。'
encoding = tokenizer(sentence, return_tensors='pt')
...
```
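Continuing from the snippet above (a sketch, not part of the original card), the `[MASK]` position can be scored and the top candidates listed as follows:
```python
import torch

# Uses `tokenizer`, `model` and `encoding` from the snippet above.
with torch.no_grad():
    logits = model(**encoding).logits
mask_positions = (encoding["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_ids = logits[0, mask_positions[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top_ids))
```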
You can fine-tune this model on downstream tasks.
## Tokenization
`BertJapaneseTokenizer` now supports automatic tokenization for [Juman++](https://github.com/ku-nlp/jumanpp). However, if your dataset is large, tokenization may take a long time, since `BertJapaneseTokenizer` does not yet support fast tokenization. You can still do the Juman++ tokenization yourself and use the old model [nlp-waseda/roberta-large-japanese-seq512](https://huggingface.co/nlp-waseda/roberta-large-japanese-seq512).
Juman++ 2.0.0-rc3 was used for pretraining. Each word is then split into subword tokens by [sentencepiece](https://github.com/google/sentencepiece).
## Vocabulary
The vocabulary consists of 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
## Training procedure
This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100 from the checkpoint of [nlp-waseda/roberta-large-japanese](https://huggingface.co/nlp-waseda/roberta-large-japanese). It took a week using eight NVIDIA A100 GPUs.
The following hyperparameters were used during pretraining:
- learning_rate: 6e-5
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 4120 (max_seq_length=128), 4032 (max_seq_length=512)
- max_seq_length: 512
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-6
- lr_scheduler_type: linear
- training_steps: 670000 (max_seq_length=128) + 70000 (max_seq_length=512)
- warmup_steps: 10000
- mixed_precision_training: Native AMP
|
nlp-waseda/roberta-large-japanese-with-auto-jumanpp
|
nlp-waseda
| 2022-10-21T06:55:27Z | 1,733 | 3 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-15T05:40:40Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
mask_token: "[MASK]"
widget:
- text: "早稲田大学で自然言語処理を[MASK]する。"
---
# nlp-waseda/roberta-large-japanese-with-auto-jumanpp
## Model description
This is a Japanese RoBERTa large model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese-with-auto-jumanpp")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-large-japanese-with-auto-jumanpp")
sentence = '早稲田大学で自然言語処理を[MASK]する。'
encoding = tokenizer(sentence, return_tensors='pt')
...
```
You can fine-tune this model on downstream tasks.
## Tokenization
`BertJapaneseTokenizer` now supports automatic tokenization for [Juman++](https://github.com/ku-nlp/jumanpp). However, if your dataset is large, tokenization may take a long time, since `BertJapaneseTokenizer` does not yet support fast tokenization. You can still do the Juman++ tokenization yourself and use the old model [nlp-waseda/roberta-large-japanese](https://huggingface.co/nlp-waseda/roberta-large-japanese).
Juman++ 2.0.0-rc3 was used for pretraining. Each word is then split into subword tokens by [sentencepiece](https://github.com/google/sentencepiece).
## Vocabulary
The vocabulary consists of 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
## Training procedure
This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100. It took two weeks using eight NVIDIA A100 GPUs.
The following hyperparameters were used during pretraining:
- learning_rate: 6e-5
- per_device_train_batch_size: 103
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 5
- total_train_batch_size: 4120
- max_seq_length: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-6
- lr_scheduler_type: linear
- training_steps: 670000
- warmup_steps: 10000
- mixed_precision_training: Native AMP
## Performance on JGLUE
See the [Baseline Scores](https://github.com/yahoojapan/JGLUE#baseline-scores) of JGLUE.
|
NinedayWang/PolyCoder-2.7B
|
NinedayWang
| 2022-10-21T06:03:23Z | 314 | 50 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"arxiv:2202.13169",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-20T09:47:34Z |
This is a PolyCoder model with **2.7B** parameters,
presented in the paper ["A Systematic Evaluation of Large Language Models of Code"](https://arxiv.org/pdf/2202.13169.pdf) (MAPS'2022 and ICLR'2022 Workshop Deep Learning 4 Code).
The model was trained on **249 GB** of code across **12** programming languages.
**Note** - this model requires a `transformers` version of at least **4.23.0**:
```bash
pip install transformers==4.23.0
```
For more information, see: [https://github.com/VHellendoorn/Code-LMs](https://github.com/VHellendoorn/Code-LMs)
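A minimal generation sketch using the standard `transformers` causal-LM API (the prompt is only an example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NinedayWang/PolyCoder-2.7B")
model = AutoModelForCausalLM.from_pretrained("NinedayWang/PolyCoder-2.7B")

prompt = "def binarySearch(arr, left, right, x):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0]))
```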
If you use this model, please cite:
```
@inproceedings{
xu2022polycoder,
title={A Systematic Evaluation of Large Language Models of Code},
author={Frank F. Xu and Uri Alon and Graham Neubig and Vincent Josua Hellendoorn},
booktitle={Deep Learning for Code Workshop},
year={2022},
url={https://openreview.net/forum?id=SLcEnoObJZq}
}
```
|
jo-kwsm/distilbert-base-uncased-finetuned-emotion
|
jo-kwsm
| 2022-10-21T06:02:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T03:31:19Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9253582087556043
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2244
- Accuracy: 0.9255
- F1: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8602 | 1.0 | 250 | 0.3344 | 0.901 | 0.8979 |
| 0.263 | 2.0 | 500 | 0.2244 | 0.9255 | 0.9254 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
nrnmnrn/dummy-model
|
nrnmnrn
| 2022-10-21T05:12:17Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-20T01:26:19Z |
---
language: fr
license: mit
datasets:
- oscar
---
## Model description
CamemBERT is a state-of-the-art language model for French based on the RoBERTa model. It is now available on Hugging Face in 6 different versions with varying numbers of parameters, amounts of pretraining data, and pretraining data source domains.
## Evaluation
The model developers evaluated CamemBERT using four different downstream tasks for French: part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER) and natural language inference (NLI).
## Limitations and bias
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).
This model was pretrained on a subcorpus of the OSCAR multilingual corpus. Some of the limitations and risks associated with the OSCAR dataset, which are further detailed in the [OSCAR dataset card](https://huggingface.co/datasets/oscar), include the following:
> The quality of some OSCAR sub-corpora might be lower than expected, specifically for the lowest-resource languages.
> Constructed from Common Crawl, Personal and sensitive information might be present.
## Training data
OSCAR or Open Super-large Crawled Aggregated coRpus is a multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the Ungoliant architecture.
## How to use
- **Filling masks using pipeline**
```python
>>> from transformers import pipeline
>>> camembert_fill_mask = pipeline("fill-mask", model="camembert-base")
>>> results = camembert_fill_mask("Le camembert est <mask> :)")
>>> results
[{'score': 0.49091097712516785,
'token': 7200,
'token_str': 'délicieux',
'sequence': 'Le camembert est délicieux :)'},
{'score': 0.1055697426199913,
'token': 2183,
'token_str': 'excellent',
'sequence': 'Le camembert est excellent :)'},
{'score': 0.03453319892287254,
'token': 26202,
'token_str': 'succulent',
'sequence': 'Le camembert est succulent :)'},
{'score': 0.03303128108382225,
'token': 528,
'token_str': 'meilleur',
'sequence': 'Le camembert est meilleur :)'},
{'score': 0.030076386407017708,
'token': 1654,
'token_str': 'parfait',
'sequence': 'Le camembert est parfait :)'}]
```
- **Extract contextual embedding features from CamemBERT output**
```python
import torch
from transformers import CamembertModel, CamembertTokenizer

# Load the tokenizer and model used below (they were not defined in the original snippet).
tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
camembert = CamembertModel.from_pretrained("camembert-base")

>>> tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
>>> encoded_sentence = tokenizer.encode(tokenized_sentence)
# Can be done in one step: tokenizer.encode("J'aime le camembert !")
>>> tokenized_sentence
['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']
>>> encoded_sentence
[5, 121, 11, 660, 16, 730, 25543, 110, 83, 6]
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
embeddings, _ = camembert(encoded_sentence)
# embeddings.detach()
# embeddings.size torch.Size([1, 10, 768])
# tensor([[[-0.0254, 0.0235, 0.1027, ..., -0.1459, -0.0205, -0.0116],
# [ 0.0606, -0.1811, -0.0418, ..., -0.1815, 0.0880, -0.0766],
# [-0.1561, -0.1127, 0.2687, ..., -0.0648, 0.0249, 0.0446],
# ...,
```
|
jinhybr/layoutlm-funsd-pytorch
|
jinhybr
| 2022-10-21T04:48:34Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"layoutlm",
"token-classification",
"generated_from_trainer",
"dataset:funsd",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-21T03:08:16Z |
---
tags:
- generated_from_trainer
datasets:
- funsd
model-index:
- name: layoutlm-funsd-pytorch
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd-pytorch
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on the funsd dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7042
- Answer: precision 0.7124, recall 0.8022, F1 0.7547 (support: 809)
- Header: precision 0.3203, recall 0.3445, F1 0.3320 (support: 119)
- Question: precision 0.7748, recall 0.8300, F1 0.8015 (support: 1065)
- Overall Precision: 0.7220
- Overall Recall: 0.7898
- Overall F1: 0.7544
- Overall Accuracy: 0.8078
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 1.7641 | 1.0 | 10 | 1.5569 | {'precision': 0.01979045401629802, 'recall': 0.021013597033374538, 'f1': 0.02038369304556355, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.20930232558139536, 'recall': 0.15211267605633802, 'f1': 0.1761827079934747, 'number': 1065} | 0.1096 | 0.0898 | 0.0987 | 0.3917 |
| 1.4096 | 2.0 | 20 | 1.1718 | {'precision': 0.18729096989966554, 'recall': 0.138442521631644, 'f1': 0.15920398009950248, 'number': 809} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 119} | {'precision': 0.4800601956358164, 'recall': 0.5990610328638498, 'f1': 0.5329991645781119, 'number': 1065} | 0.3892 | 0.3763 | 0.3827 | 0.6045 |
| 1.0362 | 3.0 | 30 | 0.9322 | {'precision': 0.5212620027434842, 'recall': 0.46971569839307786, 'f1': 0.494148244473342, 'number': 809} | {'precision': 0.10344827586206896, 'recall': 0.025210084033613446, 'f1': 0.040540540540540536, 'number': 119} | {'precision': 0.6362847222222222, 'recall': 0.6882629107981221, 'f1': 0.661253946774921, 'number': 1065} | 0.5843 | 0.5600 | 0.5719 | 0.7091 |
| 0.8024 | 4.0 | 40 | 0.7725 | {'precision': 0.6457858769931663, 'recall': 0.7008652657601978, 'f1': 0.6721991701244814, 'number': 809} | {'precision': 0.1791044776119403, 'recall': 0.10084033613445378, 'f1': 0.12903225806451613, 'number': 119} | {'precision': 0.6911130284728214, 'recall': 0.752112676056338, 'f1': 0.7203237410071942, 'number': 1065} | 0.6559 | 0.6924 | 0.6737 | 0.7700 |
| 0.6483 | 5.0 | 50 | 0.7035 | {'precision': 0.6575790621592148, 'recall': 0.7453646477132262, 'f1': 0.6987253765932794, 'number': 809} | {'precision': 0.26881720430107525, 'recall': 0.21008403361344538, 'f1': 0.2358490566037736, 'number': 119} | {'precision': 0.7120067170445005, 'recall': 0.7962441314553991, 'f1': 0.75177304964539, 'number': 1065} | 0.6706 | 0.7406 | 0.7039 | 0.7857 |
| 0.5298 | 6.0 | 60 | 0.6747 | {'precision': 0.6925601750547046, 'recall': 0.7824474660074165, 'f1': 0.73476494486361, 'number': 809} | {'precision': 0.3472222222222222, 'recall': 0.21008403361344538, 'f1': 0.2617801047120419, 'number': 119} | {'precision': 0.7333333333333333, 'recall': 0.8366197183098592, 'f1': 0.7815789473684212, 'number': 1065} | 0.7038 | 0.7772 | 0.7387 | 0.7984 |
| 0.4644 | 7.0 | 70 | 0.6752 | {'precision': 0.6750261233019854, 'recall': 0.7985166872682324, 'f1': 0.7315968289920726, 'number': 809} | {'precision': 0.29357798165137616, 'recall': 0.2689075630252101, 'f1': 0.28070175438596495, 'number': 119} | {'precision': 0.7529812606473595, 'recall': 0.8300469483568075, 'f1': 0.7896382313532827, 'number': 1065} | 0.6973 | 0.7837 | 0.7380 | 0.8010 |
| 0.4253 | 8.0 | 80 | 0.6664 | {'precision': 0.699666295884316, 'recall': 0.7775030902348579, 'f1': 0.7365339578454333, 'number': 809} | {'precision': 0.3106796116504854, 'recall': 0.2689075630252101, 'f1': 0.28828828828828823, 'number': 119} | {'precision': 0.7704485488126649, 'recall': 0.8225352112676056, 'f1': 0.7956403269754768, 'number': 1065} | 0.7186 | 0.7712 | 0.7439 | 0.8017 |
| 0.3815 | 9.0 | 90 | 0.6658 | {'precision': 0.6973684210526315, 'recall': 0.7861557478368356, 'f1': 0.7391051714119697, 'number': 809} | {'precision': 0.3228346456692913, 'recall': 0.3445378151260504, 'f1': 0.3333333333333333, 'number': 119} | {'precision': 0.7474916387959866, 'recall': 0.8394366197183099, 'f1': 0.7908005307386111, 'number': 1065} | 0.7029 | 0.7883 | 0.7431 | 0.8053 |
| 0.3391 | 10.0 | 100 | 0.6736 | {'precision': 0.7022900763358778, 'recall': 0.796044499381953, 'f1': 0.7462340672074159, 'number': 809} | {'precision': 0.3252032520325203, 'recall': 0.33613445378151263, 'f1': 0.3305785123966942, 'number': 119} | {'precision': 0.7681034482758621, 'recall': 0.8366197183098592, 'f1': 0.8008988764044945, 'number': 1065} | 0.7159 | 0.7903 | 0.7513 | 0.8073 |
| 0.3117 | 11.0 | 110 | 0.6947 | {'precision': 0.7086956521739131, 'recall': 0.8059332509270705, 'f1': 0.7541931752458069, 'number': 809} | {'precision': 0.3333333333333333, 'recall': 0.3445378151260504, 'f1': 0.33884297520661155, 'number': 119} | {'precision': 0.7992667277726856, 'recall': 0.8187793427230047, 'f1': 0.8089053803339518, 'number': 1065} | 0.7334 | 0.7852 | 0.7584 | 0.8083 |
| 0.2991 | 12.0 | 120 | 0.6963 | {'precision': 0.7058823529411765, 'recall': 0.8009888751545118, 'f1': 0.7504342790966995, 'number': 809} | {'precision': 0.33064516129032256, 'recall': 0.3445378151260504, 'f1': 0.33744855967078186, 'number': 119} | {'precision': 0.7716262975778547, 'recall': 0.8375586854460094, 'f1': 0.8032417829806394, 'number': 1065} | 0.7193 | 0.7933 | 0.7545 | 0.8076 |
| 0.282 | 13.0 | 130 | 0.6991 | {'precision': 0.7153846153846154, 'recall': 0.8046971569839307, 'f1': 0.7574171029668412, 'number': 809} | {'precision': 0.336, 'recall': 0.35294117647058826, 'f1': 0.3442622950819672, 'number': 119} | {'precision': 0.7898032200357782, 'recall': 0.8291079812206573, 'f1': 0.8089784699954191, 'number': 1065} | 0.7320 | 0.7908 | 0.7603 | 0.8102 |
| 0.2722 | 14.0 | 140 | 0.7044 | {'precision': 0.712253829321663, 'recall': 0.8046971569839307, 'f1': 0.7556587347649449, 'number': 809} | {'precision': 0.3228346456692913, 'recall': 0.3445378151260504, 'f1': 0.3333333333333333, 'number': 119} | {'precision': 0.7811120917917035, 'recall': 0.8309859154929577, 'f1': 0.8052775250227479, 'number': 1065} | 0.7254 | 0.7913 | 0.7569 | 0.8081 |
| 0.2634 | 15.0 | 150 | 0.7042 | {'precision': 0.712403951701427, 'recall': 0.8022249690976514, 'f1': 0.7546511627906977, 'number': 809} | {'precision': 0.3203125, 'recall': 0.3445378151260504, 'f1': 0.33198380566801616, 'number': 119} | {'precision': 0.7747589833479404, 'recall': 0.8300469483568075, 'f1': 0.8014505893019038, 'number': 1065} | 0.7220 | 0.7898 | 0.7544 | 0.8078 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
api19750904/VM-Fast_Check
|
api19750904
| 2022-10-21T04:30:55Z | 72 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-21T04:30:42Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: VM-Fast_Check
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9101123809814453
---
# VM-Fast_Check
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### person drinking

#### person smoking

#### swimsuit boy

#### swimsuit girl

|
edbeeching/atari_2B_atari_yarsrevenge_2222
|
edbeeching
| 2022-10-21T04:26:27Z | 3 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-21T04:25:25Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_yarsrevenge
type: atari_yarsrevenge
metrics:
- type: mean_reward
value: 336431.19 +/- 148269.98
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_yarsrevenge** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/atari_2B_atari_wizardofwor_2222
|
edbeeching
| 2022-10-21T04:21:31Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-21T04:20:36Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_wizardofwor
type: atari_wizardofwor
metrics:
- type: mean_reward
value: 61420.00 +/- 23105.79
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_wizardofwor** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
huggingtweets/elonmusk-mar15sa-sergiorocks
|
huggingtweets
| 2022-10-21T04:07:50Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-21T04:06:32Z |
---
language: en
thumbnail: http://www.huggingtweets.com/elonmusk-mar15sa-sergiorocks/1666325239514/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1580062742693699584/RJ5EI7PS_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1142324885550751744/wVNatx7J_400x400.png')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/566329118489194496/f_ALTi7v_400x400.jpeg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Elon Musk & Sergio Pereira 🚀 & Marissa Goldberg</div>
<div style="text-align: center; font-size: 14px;">@elonmusk-mar15sa-sergiorocks</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Elon Musk & Sergio Pereira 🚀 & Marissa Goldberg.
| Data | Elon Musk | Sergio Pereira 🚀 | Marissa Goldberg |
| --- | --- | --- | --- |
| Tweets downloaded | 3200 | 3250 | 3248 |
| Retweets | 133 | 18 | 301 |
| Short tweets | 949 | 54 | 110 |
| Tweets kept | 2118 | 3178 | 2837 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/ahul38aq/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @elonmusk-mar15sa-sergiorocks's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1r3916r2) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1r3916r2/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/elonmusk-mar15sa-sergiorocks')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
edbeeching/atari_2B_atari_upndown_2222
|
edbeeching
| 2022-10-21T04:06:56Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-21T04:05:32Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_upndown
type: atari_upndown
metrics:
- type: mean_reward
value: 427506.50 +/- 5992.08
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_upndown** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/atari_2B_atari_tutankham_2222
|
edbeeching
| 2022-10-21T03:49:09Z | 2 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-21T03:48:06Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_tutankham
type: atari_tutankham
metrics:
- type: mean_reward
value: 253.90 +/- 34.44
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_tutankham** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/atari_2B_atari_timepilot_2222
|
edbeeching
| 2022-10-21T03:38:54Z | 0 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-21T03:37:51Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_timepilot
type: atari_timepilot
metrics:
- type: mean_reward
value: 88855.00 +/- 25100.17
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_timepilot** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/atari_2B_atari_tennis_2222
|
edbeeching
| 2022-10-21T03:32:46Z | 1 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-21T03:31:38Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_tennis
type: atari_tennis
metrics:
- type: mean_reward
value: 23.00 +/- 1.10
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_tennis** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
huggingtweets/levelsio
|
huggingtweets
| 2022-10-21T03:28:44Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-21T03:27:24Z |
---
language: en
thumbnail: http://www.huggingtweets.com/levelsio/1666322920443/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1562107516066095106/IUccJ78Y_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">@levelsio</div>
<div style="text-align: center; font-size: 14px;">@levelsio</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from @levelsio.
| Data | @levelsio |
| --- | --- |
| Tweets downloaded | 3243 |
| Retweets | 173 |
| Short tweets | 535 |
| Tweets kept | 2535 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/tof4zha8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @levelsio's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/lcpeawur) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/lcpeawur/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/levelsio')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Da-Bob/Crying-Chopper
|
Da-Bob
| 2022-10-21T03:21:17Z | 0 | 1 | null |
[
"license:other",
"region:us"
] | null | 2022-10-21T01:33:15Z |
---
license: other
---
<html>
<body>
<h1>Welcome to the Crying-Chopper Model</h1>
<p>This is a Stable Diffusion 1.4-based model that can turn any character you like into the Crying Chopper meme, as shown in the picture below. It was trained on about 20 different characters drawn in this art style, thanks to the wonderful artists over at the OnePieceCock Discord server. For best results, use a prompt like 'NAME as cryingchopper, ...', and keep 'cryingchopper' written without a space, because that is how the model was trained.</p>
<img alt="cleanchooper.jpg" src="https://s3.amazonaws.com/moonup/production/uploads/1666321853612-631ba03acf39db4b171a0877.jpeg" title="cleanchooper.jpg">
<a href="https://s3.amazonaws.com/moonup/production/uploads/1666321853612-631ba03acf39db4b171a0877.jpeg" download>Download: Crying-Chopper_model-v1</a>
</body>
</html>
|
Shaier/longformer_race
|
Shaier
| 2022-10-21T02:22:30Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"longformer",
"multiple-choice",
"generated_from_trainer",
"dataset:race",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-10-20T20:03:38Z |
---
tags:
- generated_from_trainer
datasets:
- race
model-index:
- name: longformer_race
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer_race
This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the race dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.8572
- eval_accuracy: 0.6647
- eval_runtime: 327.7157
- eval_samples_per_second: 10.674
- eval_steps_per_second: 10.674
- epoch: 1.0
- step: 2497
## Model description
More information needed
## Intended uses & limitations
More information needed
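The card does not include a usage example. Below is a minimal inference sketch for RACE-style multiple choice; the exact (passage, question + option) pairing used during fine-tuning is not documented here, so treat the formatting as an assumption.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("Shaier/longformer_race")
model = AutoModelForMultipleChoice.from_pretrained("Shaier/longformer_race")

article = "The library opens at 9 a.m. and closes at 5 p.m. on weekdays."
question = "When does the library open?"
options = ["8 a.m.", "9 a.m.", "5 p.m.", "noon"]

# One (article, question + option) pair per answer option.
encoding = tokenizer(
    [article] * len(options),
    [f"{question} {opt}" for opt in options],
    return_tensors="pt",
    padding=True,
    truncation=True,
)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # (batch=1, num_choices, seq_len)
with torch.no_grad():
    logits = model(**inputs).logits
print(options[logits.argmax(dim=-1).item()])
```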
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 25
- total_train_batch_size: 25
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1
- Datasets 2.5.1
- Tokenizers 0.11.0
|
nlp-waseda/roberta-base-japanese-with-auto-jumanpp
|
nlp-waseda
| 2022-10-21T01:57:40Z | 1,327 | 7 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-15T05:09:36Z |
---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
mask_token: "[MASK]"
widget:
- text: "早稲田大学で自然言語処理を[MASK]する。"
---
# nlp-waseda/roberta-base-japanese-with-auto-jumanpp
## Model description
This is a Japanese RoBERTa base model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese-with-auto-jumanpp")
sentence = '早稲田大学で自然言語処理を[MASK]する。'
encoding = tokenizer(sentence, return_tensors='pt')
output = model(**encoding)  # masked-token scores are in output.logits
```
You can fine-tune this model on downstream tasks.
## Tokenization
`BertJapaneseTokenizer` now supports automatic tokenization for [Juman++](https://github.com/ku-nlp/jumanpp). However, if your dataset is large, tokenization may take a long time, since `BertJapaneseTokenizer` does not yet support fast tokenization. You can still run the Juman++ tokenization yourself and use the old model [nlp-waseda/roberta-base-japanese](https://huggingface.co/nlp-waseda/roberta-base-japanese).
Juman++ 2.0.0-rc3 was used for pretraining. Each word is tokenized into tokens by [sentencepiece](https://github.com/google/sentencepiece).
## Vocabulary
The vocabulary consists of 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
## Training procedure
This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100. It took a week using eight NVIDIA A100 GPUs.
The following hyperparameters were used during pretraining:
- learning_rate: 1e-4
- per_device_train_batch_size: 256
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 4096
- max_seq_length: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 700000
- warmup_steps: 10000
- mixed_precision_training: Native AMP
## Performance on JGLUE
See the [Baseline Scores](https://github.com/yahoojapan/JGLUE#baseline-scores) of JGLUE.
|
xxxxxxxxxxxxxxxxxxxxxx/model-y
|
xxxxxxxxxxxxxxxxxxxxxx
| 2022-10-21T01:49:43Z | 0 | 0 | null |
[
"license:wtfpl",
"region:us"
] | null | 2022-10-17T07:21:03Z |
---
license: wtfpl
---
# wwww
```typescript
import React, { CSSProperties, PropsWithRef } from 'react';
import MarkdownPreview, { MarkdownPreviewProps } from '@uiw/react-markdown-preview';
import { ITextAreaProps } from './components/TextArea';
import { ICommand } from './commands';
import { ContextStore, PreviewType } from './Context';
import './index.less';
export interface IProps {
prefixCls?: string;
className?: string;
}
export interface MDEditorProps extends Omit<React.HTMLAttributes<HTMLDivElement>, 'onChange'>, IProps {
/**
* The Markdown value.
*/
value?: string;
/**
* Event handler for the `onChange` event.
*/
onChange?: (value?: string, event?: React.ChangeEvent<HTMLTextAreaElement>, state?: ContextStore) => void;
/**
* editor height change listener
*/
onHeightChange?: (value?: CSSProperties['height'], oldValue?: CSSProperties['height'], state?: ContextStore) => void;
/**
* Can be used to make `Markdown Editor` focus itself on initialization. Defaults to on.
* it will be set to true when either the source `textarea` is focused,
* or it has an `autofocus` attribute and no other element is focused.
*/
autoFocus?: ITextAreaProps['autoFocus'];
/**
* The height of the editor.
* ⚠️ `Dragbar` is invalid when **`height`** parameter percentage.
*/
height?: CSSProperties['height'];
/**
* Custom toolbar heigth
* @default 29px
*
* @deprecated toolbar height adaptive: https://github.com/uiwjs/react-md-editor/issues/427
*
*/
toolbarHeight?: number;
/**
* Show drag and drop tool. Set the height of the editor.
*/
visibleDragbar?: boolean;
/**
* @deprecated use `visibleDragbar`
*/
visiableDragbar?: boolean;
/**
* Show markdown preview.
*/
preview?: PreviewType;
/**
* Full screen display editor.
*/
fullscreen?: boolean;
/**
* Disable `fullscreen` setting body styles
*/
overflow?: boolean;
/**
* Maximum drag height. `visibleDragbar=true`
*/
maxHeight?: number;
/**
* Minimum drag height. `visibleDragbar=true`
*/
minHeight?: number;
/**
* This is reset [react-markdown](https://github.com/rexxars/react-markdown) settings.
*/
previewOptions?: Omit<MarkdownPreviewProps, 'source'>;
/**
* Set the `textarea` related props.
*/
textareaProps?: ITextAreaProps;
/**
* Use div to replace TextArea or re-render TextArea
* @deprecated Please use ~~`renderTextarea`~~ -> `components`
*/
renderTextarea?: ITextAreaProps['renderTextarea'];
/**
* re-render element
*/
components?: {
/** Use div to replace TextArea or re-render TextArea */
textarea?: ITextAreaProps['renderTextarea'];
/**
* Override the default command element
* _`toolbar`_ < _`command[].render`_
*/
toolbar?: ICommand['render'];
/** Custom markdown preview */
preview?: (source: string, state: ContextStore, dispath: React.Dispatch<ContextStore>) => JSX.Element;
};
/**
* Disable editing area code highlighting. The value is `false`, which increases the editing speed.
* @default true
*/
highlightEnable?: boolean;
/**
* The number of characters to insert when pressing tab key.
* Default `2` spaces.
*/
tabSize?: number;
/**
* If `false`, the `tab` key inserts a tab character into the textarea. If `true`, the `tab` key executes default behavior e.g. focus shifts to next element.
*/
defaultTabEnable?: boolean;
/**
* You can create your own commands or reuse existing commands.
*/
commands?: ICommand[];
/**
* Filter or modify your commands.
* https://github.com/uiwjs/react-md-editor/issues/296
*/
commandsFilter?: (command: ICommand, isExtra: boolean) => false | ICommand;
/**
* You can create your own commands or reuse existing commands.
*/
extraCommands?: ICommand[];
/**
* Hide the tool bar
*/
hideToolbar?: boolean;
/** Whether to enable scrolling */
enableScroll?: boolean;
/** Toolbar on bottom */
toolbarBottom?: boolean;
}
declare type Editor = React.FC<PropsWithRef<MDEditorProps>> & {
Markdown: typeof MarkdownPreview;
};
declare const mdEditor: Editor;
export default mdEditor;
```
## asdjk
### lskjdflskj
as
d
s
d
|
Shushant/NepaliCovidTweetsClassification
|
Shushant
| 2022-10-21T01:07:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-25T16:02:20Z |
# Nepali Covid Tweet Classification
This model was developed by fine-tuning my earlier NepaliBERT model on Nepali COVID-related tweets. The dataset has about 15,000 observations annotated with positive, negative, and neutral labels. Fine-tuning NepaliBERT for this text classification task achieved state-of-the-art results. The evaluation metrics obtained during training were:
* Training loss: 0.35592623149202174
* Validation loss: 0.6570735214928906
* F1 Score (Weighted): 0.7719232825307907
# LABELS INDICATOR
* LABEL 0 - Neutral
* LABEL 1 - Positive
* LABEL 2 - Negative
## USAGE
```python
from transformers import pipeline
classifier = pipeline("text-classification", model = "Shushant/NepaliCovidTweetsClassification")
classifier("आउँदा केही दिनमा अमेरिकाले १५ लाखभन्दा बढी नेपालीलाई पुग्नेगरी कोभीड१९ खोप निशुल्क उपलब्ध गराउंदैछ।")
```
|
ArafatBHossain/bert-distilled-single_teacher_mind_epoch07_alpha0.8
|
ArafatBHossain
| 2022-10-21T00:57:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-21T00:26:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-distilled-single_teacher_mind_epoch07_alpha0.8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-distilled-single_teacher_mind_epoch07_alpha0.8
This model is a fine-tuned version of [ArafatBHossain/distill_bert_fine_tuned_mind](https://huggingface.co/ArafatBHossain/distill_bert_fine_tuned_mind) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1023
- Accuracy: 0.9208
## Model description
More information needed
## Intended uses & limitations
More information needed
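The card does not document the label set, but the checkpoint can be queried with the 🤗 `pipeline` API. A minimal sketch with an arbitrary example sentence (the labels returned follow whatever mapping was saved with the checkpoint):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="ArafatBHossain/bert-distilled-single_teacher_mind_epoch07_alpha0.8",
)
print(classifier("Stocks rallied after the central bank left interest rates unchanged."))
```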
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2937 | 1.0 | 3054 | 0.2652 | 0.8802 |
| 0.2339 | 2.0 | 6108 | 0.2510 | 0.8822 |
| 0.1721 | 3.0 | 9162 | 0.1781 | 0.9038 |
| 0.1284 | 4.0 | 12216 | 0.1450 | 0.9108 |
| 0.0993 | 5.0 | 15270 | 0.1195 | 0.9182 |
| 0.0765 | 6.0 | 18324 | 0.1115 | 0.9172 |
| 0.063 | 7.0 | 21378 | 0.1023 | 0.9208 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0
- Datasets 2.6.1
- Tokenizers 0.12.1
|
AIguysingstoo/moonlander
|
AIguysingstoo
| 2022-10-21T00:29:24Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-21T00:28:51Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.85 +/- 21.77
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(repo_id="AIguysingstoo/moonlander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
noahkim/KoT5_news_summarization
|
noahkim
| 2022-10-21T00:05:27Z | 402 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"news",
"ko",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
summarization
| 2022-10-20T11:06:55Z |
---
language: ko
tags:
- summarization
- news
inference: false
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KoT5_news_summarization
- This model is [lcw99/t5-base-korean-text-summary](https://huggingface.co/lcw99/t5-base-korean-text-summary) fine-tuned on the [daekeun-ml/naver-news-summarization-ko](https://huggingface.co/datasets/daekeun-ml/naver-news-summarization-ko) dataset.
## Model description
<<20221021 Commit>>
To build a model specialized for news summarization for a project, I fine-tuned lcw99's t5-base-korean-text-summary model on the naver-news-summarization-ko dataset kindly provided by daekeun-ml.
I plan to continue training with additional news data that I currently have.
I will keep improving it to deliver a model with good performance.
Thank you.
Runtime environment:
- Google Colab Pro
- CPU: Intel(R) Xeon(R) CPU @ 2.20GHz
- GPU: A100-SXM4-40GB
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("noahkim/KoT5_news_summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("noahkim/KoT5_news_summarization")
```
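For completeness, the sketch below repeats the loading step and adds a summarization call; the article placeholder and the generation settings (beam size, length limits) are assumptions rather than values from this card.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("noahkim/KoT5_news_summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("noahkim/KoT5_news_summarization")

article = "..."  # a Korean news article
inputs = tokenizer(article, return_tensors="pt", max_length=512, truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```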
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4513 | 1.0 | 2775 | 0.4067 |
| 0.42 | 2.0 | 5550 | 0.3933 |
| 0.395 | 3.0 | 8325 | 0.3864 |
| 0.3771 | 4.0 | 11100 | 0.3872 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
edbeeching/atari_2B_atari_yarsrevenge_1111
|
edbeeching
| 2022-10-21T00:01:51Z | 7 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-21T00:00:47Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_yarsrevenge
type: atari_yarsrevenge
metrics:
- type: mean_reward
value: 224390.75 +/- 197367.31
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_yarsrevenge** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/atari_2B_atari_videopinball_1111
|
edbeeching
| 2022-10-20T23:54:10Z | 6 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-20T23:52:57Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_videopinball
type: atari_videopinball
metrics:
- type: mean_reward
value: 372372.91 +/- 274249.66
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_videopinball** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
g30rv17ys/ddpm-hkuoct-wamd-1000ep
|
g30rv17ys
| 2022-10-20T23:26:06Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-20T19:00:48Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-hkuoct-wamd-1000ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
from diffusers import DDPMPipeline  # a minimal sampling sketch, assuming the checkpoint is in DDPMPipeline format
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-hkuoct-wamd-1000ep")
pipeline().images[0].save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-hkuoct-wamd-1000ep/tensorboard?#scalars)
|
edbeeching/atari_2B_atari_tutankham_1111
|
edbeeching
| 2022-10-20T23:24:00Z | 6 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-20T23:22:55Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_tutankham
type: atari_tutankham
metrics:
- type: mean_reward
value: 292.90 +/- 43.36
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_tutankham** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
edbeeching/atari_2B_atari_tennis_1111
|
edbeeching
| 2022-10-20T23:09:49Z | 10 | 0 |
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-20T23:08:40Z |
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: atari_tennis
type: atari_tennis
metrics:
- type: mean_reward
value: 18.80 +/- 2.20
name: mean_reward
verified: false
---
An **APPO** model trained on the **atari_tennis** environment.
This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
|
salascorp/distilroberta-base-mrpc-glue-oscar-salas
|
salascorp
| 2022-10-20T22:48:41Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-20T01:44:30Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
model-index:
- name: distilroberta-base-mrpc-glue-oscar-salas
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-oscar-salas
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6456
- eval_accuracy: 0.8260
- eval_f1: 0.8795
- eval_runtime: 30.3289
- eval_samples_per_second: 13.453
- eval_steps_per_second: 1.682
- epoch: 1.09
- step: 500
## Model description
More information needed
## Intended uses & limitations
More information needed
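Given the MRPC task, the model scores sentence pairs. A minimal inference sketch (the example pair is arbitrary, and the class order follows the label mapping saved with the checkpoint):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("salascorp/distilroberta-base-mrpc-glue-oscar-salas")
model = AutoModelForSequenceClassification.from_pretrained("salascorp/distilroberta-base-mrpc-glue-oscar-salas")

inputs = tokenizer(
    "The company reported record profits.",
    "Record profits were announced by the company.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```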
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cpu
- Datasets 2.6.1
- Tokenizers 0.13.1
|
huggingtweets/jiswooning-the3ammusician
|
huggingtweets
| 2022-10-20T22:27:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-20T22:26:00Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jiswooning-the3ammusician/1666304830215/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1560736534143422465/3oAu6oCD_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1521185553382883334/fHjvh84L_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">TOR Kate & K8 misses KARD</div>
<div style="text-align: center; font-size: 14px;">@jiswooning-the3ammusician</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from TOR Kate & K8 misses KARD.
| Data | TOR Kate | K8 misses KARD |
| --- | --- | --- |
| Tweets downloaded | 3234 | 3193 |
| Retweets | 1038 | 1194 |
| Short tweets | 310 | 208 |
| Tweets kept | 1886 | 1791 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1vcg0753/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jiswooning-the3ammusician's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1plbf2ii) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1plbf2ii/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jiswooning-the3ammusician')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
jayanta/cvt-13-384-22k-fv-finetuned-memes
|
jayanta
| 2022-10-20T22:05:58Z | 42 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"cvt",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-20T21:40:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: cvt-13-384-22k-fv-finetuned-memes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8315301391035549
- name: Precision
type: precision
value: 0.8302128280229624
- name: Recall
type: recall
value: 0.8315301391035549
- name: F1
type: f1
value: 0.8292026505769348
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cvt-13-384-22k-fv-finetuned-memes
This model is a fine-tuned version of [microsoft/cvt-13-384-22k](https://huggingface.co/microsoft/cvt-13-384-22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5761
- Accuracy: 0.8315
- Precision: 0.8302
- Recall: 0.8315
- F1: 0.8292
## Model description
More information needed
## Intended uses & limitations
More information needed
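A minimal inference sketch with the 🤗 `pipeline` API (the image path is a placeholder; the label names come from the meme image folders used for fine-tuning, which the card does not list):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jayanta/cvt-13-384-22k-fv-finetuned-memes")
print(classifier("meme.jpg"))  # path or URL to an image
```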
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.3821 | 0.99 | 20 | 1.2780 | 0.4969 | 0.5083 | 0.4969 | 0.4458 |
| 1.0785 | 1.99 | 40 | 0.8633 | 0.6669 | 0.6658 | 0.6669 | 0.6500 |
| 0.8862 | 2.99 | 60 | 0.7110 | 0.7218 | 0.7258 | 0.7218 | 0.7013 |
| 0.665 | 3.99 | 80 | 0.5515 | 0.8045 | 0.8137 | 0.8045 | 0.8050 |
| 0.6056 | 4.99 | 100 | 0.5956 | 0.7960 | 0.8041 | 0.7960 | 0.7846 |
| 0.4779 | 5.99 | 120 | 0.6229 | 0.7937 | 0.7945 | 0.7937 | 0.7857 |
| 0.4554 | 6.99 | 140 | 0.5355 | 0.8099 | 0.8126 | 0.8099 | 0.8086 |
| 0.4249 | 7.99 | 160 | 0.5447 | 0.8269 | 0.8275 | 0.8269 | 0.8236 |
| 0.4313 | 8.99 | 180 | 0.5530 | 0.8153 | 0.8140 | 0.8153 | 0.8132 |
| 0.423 | 9.99 | 200 | 0.5346 | 0.8238 | 0.8230 | 0.8238 | 0.8223 |
| 0.3997 | 10.99 | 220 | 0.5413 | 0.8338 | 0.8347 | 0.8338 | 0.8338 |
| 0.4095 | 11.99 | 240 | 0.5999 | 0.8207 | 0.8231 | 0.8207 | 0.8177 |
| 0.3979 | 12.99 | 260 | 0.5632 | 0.8284 | 0.8255 | 0.8284 | 0.8250 |
| 0.3408 | 13.99 | 280 | 0.5725 | 0.8207 | 0.8198 | 0.8207 | 0.8196 |
| 0.3828 | 14.99 | 300 | 0.5631 | 0.8277 | 0.8258 | 0.8277 | 0.8260 |
| 0.3595 | 15.99 | 320 | 0.6005 | 0.8308 | 0.8297 | 0.8308 | 0.8275 |
| 0.3789 | 16.99 | 340 | 0.5840 | 0.8300 | 0.8271 | 0.8300 | 0.8273 |
| 0.3545 | 17.99 | 360 | 0.5983 | 0.8246 | 0.8226 | 0.8246 | 0.8222 |
| 0.3472 | 18.99 | 380 | 0.5795 | 0.8416 | 0.8382 | 0.8416 | 0.8390 |
| 0.355 | 19.99 | 400 | 0.5761 | 0.8315 | 0.8302 | 0.8315 | 0.8292 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1
|
imodels/gpt-neo-2.7B-titles
|
imodels
| 2022-10-20T21:17:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-17T18:36:43Z |
---
license: apache-2.0
widget:
- text: "2021\n\n"
---
Full code and details at https://github.com/csinva/gpt-paper-title-generator
**Model**
- fine-tuned starting from the [gpt-neo-2.7B checkpoint](https://huggingface.co/EleutherAI/gpt-neo-2.7B)
- for training details see [the training script](https://github.com/csinva/gpt-paper-title-generator/blob/0157f26be9b0763b4ea6480e5b149fdb8dff4626/gptneo/02_finetune_hf.py)
- inference
- prepend a year and two newlines to the prompt before querying for a title, e.g. `2022\n\n`
```python
from transformers import AutoModelForCausalLM, pipeline, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("csinva/gpt-neo-2.7B-titles")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)
pipe('2022\n\n')
```
**Data**
- all [papers on arXiv](https://www.kaggle.com/datasets/Cornell-University/arxiv) in the categories cs.AI, cs.LG, stat.ML
- date cutoff: only fine-tuned on papers with a date on or before Apr 1, 2022
- random 5% of papers also excluded
- this results in 98,388 papers for finetuning
- during fine-tuning, each paper title was formatted with the prompt `<year>\n\n <title>\n` (e.g. `2022\n\n Emb-GAM: an Interpretable and Efficient Predictor using Pre-trained Language Models\n`)
|
creditgrossepointe/creditgrossepointe
|
creditgrossepointe
| 2022-10-20T21:13:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-20T21:12:54Z |
We are a family-owned and operated Credit Repair company, founded in 2013. Our goal is to help you achieve financial success and reach your credit goals.
Follow this [link](https://grossepointepark.asapcreditrepairusa.com/)
|
jinhybr/layoutlm-funsd-tf
|
jinhybr
| 2022-10-20T20:48:26Z | 10 | 0 |
transformers
|
[
"transformers",
"tf",
"tensorboard",
"layoutlm",
"token-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-10-20T20:10:28Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: layoutlm-funsd-tf
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# layoutlm-funsd-tf
This model is a fine-tuned version of [microsoft/layoutlm-base-uncased](https://huggingface.co/microsoft/layoutlm-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2509
- Validation Loss: 0.6942
- Train Overall Precision: 0.7291
- Train Overall Recall: 0.7888
- Train Overall F1: 0.7578
- Train Overall Accuracy: 0.8067
- Epoch: 7
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Train Overall Precision | Train Overall Recall | Train Overall F1 | Train Overall Accuracy | Epoch |
|:----------:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:----------------------:|:-----:|
| 1.6886 | 1.4100 | 0.2324 | 0.2313 | 0.2318 | 0.5009 | 0 |
| 1.1702 | 0.8486 | 0.5971 | 0.6618 | 0.6278 | 0.7338 | 1 |
| 0.7521 | 0.7032 | 0.6561 | 0.7341 | 0.6929 | 0.7687 | 2 |
| 0.5727 | 0.6268 | 0.6736 | 0.7662 | 0.7169 | 0.7957 | 3 |
| 0.4586 | 0.6322 | 0.6909 | 0.7772 | 0.7315 | 0.7999 | 4 |
| 0.3725 | 0.6378 | 0.7134 | 0.7782 | 0.7444 | 0.8096 | 5 |
| 0.2987 | 0.6835 | 0.7270 | 0.7777 | 0.7515 | 0.8056 | 6 |
| 0.2509 | 0.6942 | 0.7291 | 0.7888 | 0.7578 | 0.8067 | 7 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.6.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
asi/gpt-fr-cased-small
|
asi
| 2022-10-20T18:30:45Z | 1,755 | 8 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"fr",
"license:apache-2.0",
"model-index",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language:
- fr
model-index:
- name: asi/gpt-fr-cased-base
results:
- task:
type: text-generation
name: Wikitext-fr
dataset:
type: wikitext_fr
name: Wikitext-fr
metrics:
- type: perplexity
value: 109.2
name: Perplexity
- task:
type: text-classification
name: FLUE
dataset:
type: flue
name: CLS-Books
split: CLS
metrics:
- type: accuracy
value: 88.3
name: Accuracy
- task:
type: text-classification
name: FLUE
dataset:
type: flue
name: CLS-Dvd
split: CLS
metrics:
- type: accuracy
value: 86.9
name: Accuracy
- task:
type: text-classification
name: FLUE
dataset:
type: flue
name: CLS-Music
split: CLS
metrics:
- type: accuracy
value: 89.3
name: Accuracy
- task:
type: text-classification
name: FLUE
dataset:
type: flue
name: PAWS-X
split: PAWS-X
metrics:
- type: accuracy
value: 83.3
name: Accuracy
- task:
type: text-classification
name: FLUE
dataset:
type: flue
name: XNLI
split: XNLI
metrics:
- type: accuracy
value: 75.6
name: Accuracy
- task:
type: summarization
name: OrangeSum
dataset:
type: orange_sum
name: OrangeSum-Abstract
split: abstract
metrics:
- name: ROUGE-1
type: rouge
value: 17.5
- name: ROUGE-2
type: rouge
value: 3.1
- name: ROUGE-L
type: rouge
value: 12.1
- task:
type: summarization
name: OrangeSum
dataset:
type: orange_sum
name: OrangeSum-Title
split: title
metrics:
- name: ROUGE-1
type: rouge
value: 13.9
- name: ROUGE-2
type: rouge
value: 2.3
- name: ROUGE-L
type: rouge
value: 9.7
tags:
- tf
- pytorch
- gpt2
- text-generation
license: apache-2.0
thumbnail: https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png
---
<img src="https://raw.githubusercontent.com/AntoineSimoulin/gpt-fr/main/imgs/logo.png" width="200">
## Model description
**GPT-fr** 🇫🇷 is a GPT model for French developed by [Quantmetry](https://www.quantmetry.com/) and the [Laboratoire de Linguistique Formelle (LLF)](http://www.llf.cnrs.fr/en). We train the model on a very large and heterogeneous French corpus. We release the weights for the following configurations:
| Model name | Number of layers | Attention Heads | Embedding Dimension | Total Parameters |
| :------: | :---: | :---: | :---: | :---: |
| `gpt-fr-cased-small` | 12 | 12 | 768 | 124 M |
| `gpt-fr-cased-base` | 24 | 14 | 1,792 | 1,017 B |
## Intended uses & limitations
The model can be leveraged for language generation tasks. Besides, many tasks may be formatted such that the output is directly generated in natural language. Such configuration may be used for tasks such as automatic summary or question answering. We do hope our model might be used for both academic and industrial applications.
#### How to use
The model might be used through the astonishing 🤗 `Transformers` library:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
# Load pretrained model and tokenizer
model = GPT2LMHeadModel.from_pretrained("asi/gpt-fr-cased-small")
tokenizer = GPT2Tokenizer.from_pretrained("asi/gpt-fr-cased-small")
# Generate a sample of text
model.eval()
input_sentence = "Longtemps je me suis couché de bonne heure."
input_ids = tokenizer.encode(input_sentence, return_tensors='pt')
beam_outputs = model.generate(
input_ids,
max_length=100,
do_sample=True,
top_k=50,
top_p=0.95,
num_return_sequences=1
)
print("Output:\n" + 100 * '-')
print(tokenizer.decode(beam_outputs[0], skip_special_tokens=True))
```
#### Limitations and bias
Large language models tend to replicate the biases found in pre-training datasets, such as gender discrimination or offensive content generation.
To limit exposure to too much explicit material, we carefully choose the sources beforehand. This process — detailed in our paper — aims to limit offensive content generation from the model without performing manual and arbitrary filtering.
However, some societal biases, contained in the data, might be reflected by the model. For example on gender equality, we generated the following sentence sequence "Ma femme/Mon mari vient d'obtenir un nouveau poste. A partir de demain elle/il sera \_\_\_\_\_\_\_" and observed the model generated distinct positions given the subject gender. We used top-k random sampling strategy with k=50 and stopped at the first punctuation element.
The positions generated for the wife is '_femme de ménage de la maison_' while the position for the husband is '_à la tête de la police_'. We do appreciate your feedback to better qualitatively and quantitatively assess such effects.
## Training data
We created a dedicated corpus to train our generative model. Indeed, the model uses a fixed-length context size of 1,024 and requires long documents to be trained. We aggregated existing corpora: [Wikipedia](https://dumps.wikimedia.org/frwiki/), [OpenSubtitle](http://opus.nlpl.eu/download.php?f=OpenSubtitles/v2016/mono/) ([Tiedemann, 2012](#tiedemann-2012)), [Gutenberg](http://www.gutenberg.org). Corpora are filtered and separated into sentences. Successive sentences are then concatenated within the limit of 1,024 tokens per document.
## Training procedure
We pre-trained the model on a TPU v2-8 using the amazing [Google Colab](https://colab.research.google.com) service.
## Eval results
We packaged **GPT-fr** with a dedicated language model evaluation benchmark.
In line with the [WikiText](https://blog.einstein.ai/the-wikitext-long-term-dependency-language-modeling-dataset/) benchmark in English, we collected over 70 million tokens from the set of verified [good](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Articles_de_qualit%C3%A9) and [featured](https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:Bons_articles) articles on French Wikipedia. The model reaches a zero-shot perplexity of **109.2** on the test set.
### BibTeX entry and citation info
Along with the model hosted by HuggingFace transformers library, we maintain a [git repository](https://github.com/AntoineSimoulin/gpt-fr).
If you use **GPT-fr** for your scientific publications or your industrial applications, please cite the following paper:
```bibtex
@inproceedings{simoulin:hal-03265900,
TITLE = {{Un mod{\`e}le Transformer G{\'e}n{\'e}ratif Pr{\'e}-entrain{\'e} pour le \_\_\_\_\_\_ fran{\c c}ais}},
AUTHOR = {Simoulin, Antoine and Crabb{\'e}, Benoit},
URL = {https://hal.archives-ouvertes.fr/hal-03265900},
BOOKTITLE = {{Traitement Automatique des Langues Naturelles}},
ADDRESS = {Lille, France},
EDITOR = {Denis, Pascal and Grabar, Natalia and Fraisse, Amel and Cardon, R{\'e}mi and Jacquemin, Bernard and Kergosien, Eric and Balvet, Antonio},
PUBLISHER = {{ATALA}},
PAGES = {246-255},
YEAR = {2021},
KEYWORDS = {fran{\c c}ais. ; GPT ; G{\'e}n{\'e}ratif ; Transformer ; Pr{\'e}-entra{\^i}n{\'e}},
PDF = {https://hal.archives-ouvertes.fr/hal-03265900/file/7.pdf},
HAL_ID = {hal-03265900},
HAL_VERSION = {v1},
}
```
### References
><div name="tiedemann-2012">Jörg Tiedemann: Parallel Data, Tools and Interfaces in OPUS. LREC 2012: 2214-2218</div>
|
allenai/drug-combo-classifier-pubmedbert-dapt
|
allenai
| 2022-10-20T18:23:30Z | 23 | 5 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"en",
"arxiv:2205.02289",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2022-05-04T03:20:11Z |
---
language: en
license: mit
---
This is the baseline model used in most experiments in the paper ["A Dataset for N-ary Relation Extraction of Drug Combinations"](https://arxiv.org/abs/2205.02289).
*(for just the domain-adapted masked language model that we use underneath this model, see [here](https://huggingface.co/allenai/drug_combinations_lm_pubmedbert?text=Paxlovid+works+well+in+combination+with+%5BMASK%5D+for+treating+breast+cancer.))*
**Steps to load this model**
1) Download accompanying code:
```
git clone https://github.com/allenai/drug-combo-extraction.git
conda create --name drug_combo python=3.8.5
conda activate drug_combo
```
2) Download model from Huggingface:
```
git lfs install
git clone https://huggingface.co/allenai/drug-combo-classifier-pubmedbert-dapt
```
3) Load the model (in Python):
```
from modeling.model import load_model
checkpoint_path = "drug-combo-classifier-pubmedbert-dapt"
model, tokenizer, metadata = load_model(checkpoint_path)
```
|
mprzibilla/super_large_finetune_M01
|
mprzibilla
| 2022-10-20T17:56:53Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-19T12:05:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: super_large_finetune_M01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# super_large_finetune_M01
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9906
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
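The card does not include a usage example. A minimal transcription sketch with the 🤗 `pipeline` API is below (the audio path is a placeholder, and 16 kHz mono input is assumed, as is typical for wav2vec2-base models); note that the evaluation WER of 1.0 reported above suggests transcriptions from this checkpoint may not be meaningful.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="mprzibilla/super_large_finetune_M01")
print(asr("sample.wav"))  # path to a 16 kHz mono audio file
```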
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 35440
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:------:|:---------------:|:---:|
| 10.0626 | 20.0 | 70880 | 3.0307 | 1.0 |
| 2.5319 | 40.0 | 141760 | 3.0316 | 1.0 |
| 2.4978 | 60.0 | 212640 | 3.0123 | 1.0 |
| 2.4849 | 80.0 | 283520 | 2.9923 | 1.0 |
| 2.4776 | 100.0 | 354400 | 3.0092 | 1.0 |
| 2.4733 | 120.0 | 425280 | 2.9964 | 1.0 |
| 2.4702 | 140.0 | 496160 | 2.9968 | 1.0 |
| 2.4686 | 160.0 | 567040 | 2.9937 | 1.0 |
| 2.4669 | 180.0 | 637920 | 2.9908 | 1.0 |
| 2.4661 | 200.0 | 708800 | 2.9906 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
ThomasNLG/CT0-11B
|
ThomasNLG
| 2022-10-20T17:02:33Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"arxiv:2205.12393",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-19T11:48:14Z |
---
language: en
license: apache-2.0
widget:
- text: "A is the son's of B's uncle. What is the family relationship between A and B?"
- text: "Reorder the words in this sentence: justin and name bieber years is my am I 27 old."
- text: "Task: copy but say the opposite.\n
PSG won its match against Barca."
- text: "Is this review positive or negative? Review: Best cast iron skillet you will every buy."
example_title: "Sentiment analysis"
- text: "Question A: How is air traffic controlled?
\nQuestion B: How do you become an air traffic controller?\nPick one: these questions are duplicates or not duplicates."
- text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday. He chose her because she had foreign affairs experience as a former First Lady.
\nIn the previous sentence, decide who 'her' is referring to."
example_title: "Coreference resolution"
- text: "Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.\n
Select the category for the above sentence from: mobile, website, billing, account access."
- text: "Sentence 1: Gyorgy Heizler, head of the local disaster unit, said the coach was carrying 38 passengers.\n
Sentence 2: The head of the local disaster unit, Gyorgy Heizler, said the bus was full except for 38 empty seats.\n\n
Do sentences 1 and 2 have the same meaning?"
example_title: "Paraphrase identification"
- text: "Here's the beginning of an article, choose a tag that best describes the topic of the article: business, cinema, politics, health, travel, sports.\n\n
The best and worst of 007 as 'No time to die' marks Daniel Craig's exit.\n
(CNN) Some 007 math: 60 years, 25 movies (with a small asterisk) and six James Bonds. For a Cold War creation, Ian Fleming's suave spy has certainly gotten around, but despite different guises in the tuxedo and occasional scuba gear, when it comes to Bond ratings, there really shouldn't be much argument about who wore it best."
- text: "Max: Know any good websites to buy clothes from?\n
Payton: Sure :) LINK 1, LINK 2, LINK 3\n
Max: That's a lot of them!\n
Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.\n
Max: I'll check them out. Thanks.\n\n
Who or what are Payton and Max referring to when they say 'them'?"
- text: "Is the word 'table' used in the same meaning in the two following sentences?\n\n
Sentence A: you can leave the books on the table over there.\n
Sentence B: the tables in this book are very hard to read."
- text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.\n
The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.\n\n
Which book is the leftmost book?"
example_title: "Logic puzzles"
- text: "The two men running to become New York City's next mayor will face off in their first debate Wednesday night.\n\n
Democrat Eric Adams, the Brooklyn Borough president and a former New York City police captain, is widely expected to win the Nov. 2 election against Republican Curtis Sliwa, the founder of the 1970s-era Guardian Angels anti-crime patrol.\n\n
Who are the men running for mayor?"
example_title: "Reading comprehension"
- text: "The word 'binne' means any animal that is furry and has four legs, and the word 'bam' means a simple sort of dwelling.\n\n
Which of the following best characterizes binne bams?\n
- Sentence 1: Binne bams are for pets.\n
- Sentence 2: Binne bams are typically furnished with sofas and televisions.\n
- Sentence 3: Binne bams are luxurious apartments.\n
- Sentence 4: Binne bams are places where people live."
---
**How do I pronounce the name of the model?** CT0 should be pronounced "C T Zero" (like in "Continual T5 for zero-shot")
# Model Description
CT0 is an extension of T0, a model showing great zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller.
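Since CT0 keeps the T5 text-to-text interface, prompts can be run with the standard 🤗 Transformers seq2seq API. A minimal zero-shot sketch is below; loading the 11B checkpoint requires tens of GB of memory, and the generation settings are assumptions.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("ThomasNLG/CT0-11B")
model = AutoModelForSeq2SeqLM.from_pretrained("ThomasNLG/CT0-11B")

prompt = "Is this review positive or negative? Review: Best cast iron skillet you will ever buy."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```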
```bibtex
@misc{scialom2022Continual,
title={Fine-tuned Language Models are Continual Learners},
author={Thomas Scialom and Tuhin Chakrabarty and Smaranda Muresan},
year={2022},
eprint={2205.12393},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
|
rbroc/contrastive-user-encoder-singlepost
|
rbroc
| 2022-10-20T16:56:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-10-19T08:56:38Z |
---
language:
- en
license: apache-2.0
library_name: transformers
---
### Contrastive user encoder (single post)
This model is a `DistilBertModel` trained by fine-tuning `distilbert-base-uncased` on author-based triplet loss.
#### Details
Training and evaluation details are provided in our EMNLP Findings paper:
- Rocca, R., & Yarkoni, T. (2022), Language as a fingerprint: Self-supervised learning of user encodings using transformers, to appear in *Findings of the Association for Computational Linguistics: EMNLP 2022*
#### Training
We fine-tuned DistilBERT on triplets consisting of:
- a single Reddit submission from a given user (the "anchor"); see `rbroc/contrastive-user-encoder-multipost` for a model trained on aggregated embeddings of multiple anchors;
- an additional post from the same user (a "positive example");
- a post from a different, randomly selected user (the "negative example")
To compute the loss, we use the [CLS] encoding of the anchor, positive example and negative example from the last layer of the DistilBERT encoder. We train the encoder so that \\( ||f(a) - f(n)|| \geq ||f(a) - f(p)|| + \alpha \\), i.e., we minimize the triplet loss \\( \max(||f(a) - f(p)|| - ||f(a) - f(n)|| + \alpha, 0) \\)
where:
- \\( f(a)\\) is the [CLS] encoding of the anchor;
- \\( f(n) \\) is the [CLS] encoding of the negative example;
- \\( f(p) \\) is the [CLS] encoding of the positive example;
- \\( \alpha \\) is a tunable parameter called margin. Here, we tuned this to \\( \alpha = 1.0\\)
#### Evaluation and usage
The model yields performance advantages on downstream user-based classification tasks; a minimal embedding-extraction sketch is provided after the list below.
We encourage usage and benchmarking on tasks involving:
- prediction of user traits (e.g., personality);
- extraction of user-aware text encodings (e.g., style modeling);
- contextualized text modeling, where standard text representations are complemented with compact user representations
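A minimal sketch for extracting a single-post user encoding as the last-layer [CLS] state (the example post is arbitrary):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("rbroc/contrastive-user-encoder-singlepost")
model = AutoModel.from_pretrained("rbroc/contrastive-user-encoder-singlepost")

post = "I spent the weekend rebuilding my mechanical keyboard."
inputs = tokenizer(post, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)
user_embedding = outputs.last_hidden_state[:, 0]  # [CLS] encoding, shape (1, hidden_size)
```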
#### Limitations
Being exclusively trained on Reddit data, our models probably overfit to linguistic markers and traits which are relevant to characterizing the Reddit user population, but less salient in the general population. Domain-specific fine-tuning may be required before deployment.
Furthermore, our self-supervised approach enforces little or no control over biases, which models may actively use as part of their heuristics in contrastive and downstream tasks.
|
tringuyexn/ppo-LunarLander-v2
|
tringuyexn
| 2022-10-20T16:55:51Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-20T16:55:27Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 237.09 +/- 23.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it with SB3.
checkpoint = load_from_hub(repo_id="tringuyexn/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
amanneo/mail-generator-mini-v2
|
amanneo
| 2022-10-20T14:49:33Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-20T13:12:41Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: amanneo/mail-generator-mini-v2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amanneo/mail-generator-mini-v2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5212
- Train Accuracy: 0.0027
- Validation Loss: 5.5781
- Validation Accuracy: 0.0
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
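A minimal generation sketch with the TensorFlow classes (the prompt and sampling settings are assumptions, and the repository is assumed to ship its own tokenizer):
```python
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("amanneo/mail-generator-mini-v2")
model = TFAutoModelForCausalLM.from_pretrained("amanneo/mail-generator-mini-v2")

inputs = tokenizer("Dear team,", return_tensors="tf")
outputs = model.generate(inputs["input_ids"], max_length=60, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```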
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -994, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 2.5928 | 0.0171 | 5.5430 | 0.0048 | 0 |
| 2.6003 | 0.0207 | 5.5430 | 0.0048 | 1 |
| 2.5954 | 0.0171 | 5.5508 | 0.0048 | 2 |
| 2.5775 | 0.0190 | 5.5508 | 0.0024 | 3 |
| 2.5758 | 0.0231 | 5.5508 | 0.0024 | 4 |
| 2.5742 | 0.0207 | 5.5586 | 0.0048 | 5 |
| 2.5547 | 0.0209 | 5.5586 | 0.0048 | 6 |
| 2.5566 | 0.0188 | 5.5586 | 0.0048 | 7 |
| 2.5391 | 0.0193 | 5.5586 | 0.0048 | 8 |
| 2.5378 | 0.0215 | 5.5508 | 0.0048 | 9 |
| 2.5238 | 0.0188 | 5.5469 | 0.0048 | 10 |
| 2.5150 | 0.0160 | 5.5508 | 0.0048 | 11 |
| 2.4967 | 0.0174 | 5.5508 | 0.0071 | 12 |
| 2.4691 | 0.0193 | 5.5430 | 0.0071 | 13 |
| 2.4626 | 0.0163 | 5.5430 | 0.0071 | 14 |
| 2.4417 | 0.0231 | 5.5352 | 0.0048 | 15 |
| 2.4323 | 0.0215 | 5.5352 | 0.0048 | 16 |
| 2.4193 | 0.0226 | 5.5469 | 0.0048 | 17 |
| 2.4170 | 0.0185 | 5.5469 | 0.0048 | 18 |
| 2.3743 | 0.0193 | 5.5312 | 0.0048 | 19 |
| 2.3730 | 0.0207 | 5.5312 | 0.0048 | 20 |
| 2.3535 | 0.0198 | 5.5312 | 0.0048 | 21 |
| 2.3372 | 0.0182 | 5.5312 | 0.0071 | 22 |
| 2.3324 | 0.0177 | 5.5312 | 0.0048 | 23 |
| 2.3011 | 0.0204 | 5.5195 | 0.0048 | 24 |
| 2.2650 | 0.0212 | 5.5117 | 0.0048 | 25 |
| 2.2568 | 0.0198 | 5.5078 | 0.0048 | 26 |
| 2.2331 | 0.0196 | 5.5156 | 0.0048 | 27 |
| 2.2021 | 0.0193 | 5.5078 | 0.0048 | 28 |
| 2.1807 | 0.0204 | 5.5039 | 0.0048 | 29 |
| 2.1691 | 0.0190 | 5.5 | 0.0 | 30 |
| 2.1463 | 0.0174 | 5.4766 | 0.0 | 31 |
| 2.1097 | 0.0196 | 5.4844 | 0.0 | 32 |
| 2.1014 | 0.0179 | 5.4844 | 0.0024 | 33 |
| 2.0833 | 0.0177 | 5.4844 | 0.0024 | 34 |
| 2.0423 | 0.0201 | 5.4844 | 0.0 | 35 |
| 2.0163 | 0.0198 | 5.4844 | 0.0 | 36 |
| 1.9909 | 0.0168 | 5.4883 | 0.0 | 37 |
| 1.9774 | 0.0207 | 5.4805 | 0.0 | 38 |
| 1.9414 | 0.0207 | 5.4844 | 0.0 | 39 |
| 1.9206 | 0.0215 | 5.4766 | 0.0 | 40 |
| 1.8849 | 0.0182 | 5.4805 | 0.0 | 41 |
| 1.8732 | 0.0193 | 5.4648 | 0.0 | 42 |
| 1.8460 | 0.0160 | 5.4609 | 0.0 | 43 |
| 1.8171 | 0.0168 | 5.4648 | 0.0 | 44 |
| 1.7791 | 0.0201 | 5.4531 | 0.0 | 45 |
| 1.7583 | 0.0158 | 5.4570 | 0.0 | 46 |
| 1.7360 | 0.0171 | 5.4570 | 0.0 | 47 |
| 1.7061 | 0.0120 | 5.4297 | 0.0 | 48 |
| 1.6802 | 0.0155 | 5.4258 | 0.0 | 49 |
| 1.6551 | 0.0182 | 5.4141 | 0.0 | 50 |
| 1.6289 | 0.0130 | 5.4219 | 0.0 | 51 |
| 1.5981 | 0.0130 | 5.3945 | 0.0 | 52 |
| 1.5656 | 0.0128 | 5.4297 | 0.0 | 53 |
| 1.5535 | 0.0168 | 5.4219 | 0.0 | 54 |
| 1.5184 | 0.0141 | 5.4102 | 0.0 | 55 |
| 1.4943 | 0.0149 | 5.4023 | 0.0 | 56 |
| 1.4616 | 0.0122 | 5.4062 | 0.0 | 57 |
| 1.4344 | 0.0111 | 5.4062 | 0.0 | 58 |
| 1.3965 | 0.0111 | 5.4141 | 0.0 | 59 |
| 1.3643 | 0.0122 | 5.4375 | 0.0 | 60 |
| 1.3309 | 0.0087 | 5.4453 | 0.0 | 61 |
| 1.3215 | 0.0090 | 5.4648 | 0.0 | 62 |
| 1.3058 | 0.0084 | 5.4727 | 0.0 | 63 |
| 1.2700 | 0.0109 | 5.4453 | 0.0 | 64 |
| 1.2396 | 0.0079 | 5.4609 | 0.0 | 65 |
| 1.2189 | 0.0092 | 5.4375 | 0.0 | 66 |
| 1.1855 | 0.0079 | 5.4375 | 0.0 | 67 |
| 1.1592 | 0.0073 | 5.4375 | 0.0 | 68 |
| 1.1219 | 0.0071 | 5.4648 | 0.0 | 69 |
| 1.1071 | 0.0065 | 5.4570 | 0.0 | 70 |
| 1.0848 | 0.0060 | 5.4375 | 0.0 | 71 |
| 1.0581 | 0.0076 | 5.4453 | 0.0 | 72 |
| 1.0316 | 0.0090 | 5.4570 | 0.0 | 73 |
| 1.0068 | 0.0063 | 5.4219 | 0.0 | 74 |
| 0.9832 | 0.0060 | 5.4570 | 0.0 | 75 |
| 0.9534 | 0.0046 | 5.4570 | 0.0 | 76 |
| 0.9378 | 0.0057 | 5.4648 | 0.0 | 77 |
| 0.9170 | 0.0033 | 5.4844 | 0.0 | 78 |
| 0.8941 | 0.0041 | 5.4883 | 0.0 | 79 |
| 0.8666 | 0.0030 | 5.4922 | 0.0 | 80 |
| 0.8419 | 0.0054 | 5.4375 | 0.0 | 81 |
| 0.8200 | 0.0035 | 5.4492 | 0.0 | 82 |
| 0.8020 | 0.0022 | 5.4648 | 0.0 | 83 |
| 0.7785 | 0.0057 | 5.4883 | 0.0 | 84 |
| 0.7607 | 0.0052 | 5.4648 | 0.0 | 85 |
| 0.7454 | 0.0041 | 5.5078 | 0.0 | 86 |
| 0.7208 | 0.0024 | 5.5078 | 0.0 | 87 |
| 0.7040 | 0.0027 | 5.5078 | 0.0 | 88 |
| 0.6799 | 0.0041 | 5.5156 | 0.0 | 89 |
| 0.6594 | 0.0030 | 5.5312 | 0.0 | 90 |
| 0.6397 | 0.0030 | 5.5312 | 0.0 | 91 |
| 0.6217 | 0.0030 | 5.5195 | 0.0 | 92 |
| 0.6112 | 0.0033 | 5.5195 | 0.0 | 93 |
| 0.5937 | 0.0046 | 5.5625 | 0.0 | 94 |
| 0.5745 | 0.0035 | 5.5625 | 0.0 | 95 |
| 0.5616 | 0.0027 | 5.5586 | 0.0 | 96 |
| 0.5468 | 0.0043 | 5.5742 | 0.0 | 97 |
| 0.5354 | 0.0027 | 5.5781 | 0.0 | 98 |
| 0.5212 | 0.0027 | 5.5781 | 0.0 | 99 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Mattbrenr/What
|
Mattbrenr
| 2022-10-20T14:07:37Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-10-20T14:07:37Z |
---
license: creativeml-openrail-m
---
|
moro23/wav2vec2-large-xlsr-53-ha-colab_1
|
moro23
| 2022-10-20T14:05:09Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_10_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-20T11:29:16Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_10_0
model-index:
- name: wav2vec2-large-xlsr-53-ha-colab_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-ha-colab_1
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_10_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7843
- Wer: 0.4827
## Model description
More information needed
## Intended uses & limitations
More information needed
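No usage example is provided in the card. A minimal, untested sketch (assuming the repo includes the processor/tokenizer files and that the input audio is 16 kHz Hausa speech):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="moro23/wav2vec2-large-xlsr-53-ha-colab_1")
# "sample.wav" is a hypothetical placeholder for a 16 kHz Hausa recording.
print(asr("sample.wav")["text"])
```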
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2849 | 5.19 | 400 | 2.8140 | 1.0 |
| 1.4323 | 10.39 | 800 | 0.6695 | 0.5772 |
| 0.2833 | 15.58 | 1200 | 0.6866 | 0.5036 |
| 0.1798 | 20.77 | 1600 | 0.7698 | 0.4950 |
| 0.1369 | 25.97 | 2000 | 0.7843 | 0.4827 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu113
- Datasets 2.3.2
- Tokenizers 0.10.3
|
jeonsworld/ddpm-butterflies-128
|
jeonsworld
| 2022-10-20T13:56:19Z | 3 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-20T12:40:13Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
A minimal sketch (assumed usage, not from the training script):
```python
from diffusers import DDPMPipeline

# Load the pipeline from the Hub and sample a single image.
pipeline = DDPMPipeline.from_pretrained("jeonsworld/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/jeonsworld/ddpm-butterflies-128/tensorboard?#scalars)
|
jayanta/mit-b2-fv-finetuned-memes
|
jayanta
| 2022-10-20T13:21:30Z | 46 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"segformer",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-20T11:38:15Z |
---
license: other
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: mit-b2-fv-finetuned-memes
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8323029366306027
- name: Precision
type: precision
value: 0.831217385971583
- name: Recall
type: recall
value: 0.8323029366306027
- name: F1
type: f1
value: 0.831492653119617
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mit-b2-fv-finetuned-memes
This model is a fine-tuned version of [nvidia/mit-b2](https://huggingface.co/nvidia/mit-b2) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5984
- Accuracy: 0.8323
- Precision: 0.8312
- Recall: 0.8323
- F1: 0.8315
## Model description
More information needed
## Intended uses & limitations
More information needed
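As a rough usage sketch (not part of the original card; the meme class labels come from the unpublished imagefolder dataset):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jayanta/mit-b2-fv-finetuned-memes")
# "meme.jpg" is a hypothetical local image path.
print(classifier("meme.jpg"))
```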
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.3683 | 0.99 | 20 | 1.1798 | 0.5703 | 0.4914 | 0.5703 | 0.4915 |
| 1.0113 | 1.99 | 40 | 1.0384 | 0.6159 | 0.6813 | 0.6159 | 0.6274 |
| 0.7581 | 2.99 | 60 | 0.8348 | 0.6808 | 0.7377 | 0.6808 | 0.6840 |
| 0.6241 | 3.99 | 80 | 0.6034 | 0.7713 | 0.7864 | 0.7713 | 0.7735 |
| 0.4999 | 4.99 | 100 | 0.5481 | 0.7944 | 0.8000 | 0.7944 | 0.7909 |
| 0.3981 | 5.99 | 120 | 0.5253 | 0.8022 | 0.8091 | 0.8022 | 0.8000 |
| 0.3484 | 6.99 | 140 | 0.4688 | 0.8238 | 0.8147 | 0.8238 | 0.8146 |
| 0.3142 | 7.99 | 160 | 0.6245 | 0.7867 | 0.8209 | 0.7867 | 0.7920 |
| 0.2339 | 8.99 | 180 | 0.5053 | 0.8362 | 0.8426 | 0.8362 | 0.8355 |
| 0.2284 | 9.99 | 200 | 0.5070 | 0.8230 | 0.8220 | 0.8230 | 0.8187 |
| 0.1824 | 10.99 | 220 | 0.5780 | 0.8006 | 0.8138 | 0.8006 | 0.8035 |
| 0.1561 | 11.99 | 240 | 0.5429 | 0.8253 | 0.8197 | 0.8253 | 0.8218 |
| 0.1229 | 12.99 | 260 | 0.5325 | 0.8331 | 0.8296 | 0.8331 | 0.8303 |
| 0.1232 | 13.99 | 280 | 0.5595 | 0.8277 | 0.8290 | 0.8277 | 0.8273 |
| 0.118 | 14.99 | 300 | 0.5974 | 0.8292 | 0.8345 | 0.8292 | 0.8299 |
| 0.11 | 15.99 | 320 | 0.5796 | 0.8253 | 0.8228 | 0.8253 | 0.8231 |
| 0.0948 | 16.99 | 340 | 0.5581 | 0.8346 | 0.8358 | 0.8346 | 0.8349 |
| 0.0985 | 17.99 | 360 | 0.5700 | 0.8338 | 0.8301 | 0.8338 | 0.8318 |
| 0.0821 | 18.99 | 380 | 0.5756 | 0.8331 | 0.8343 | 0.8331 | 0.8335 |
| 0.0813 | 19.99 | 400 | 0.5984 | 0.8323 | 0.8312 | 0.8323 | 0.8315 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1
|
knkarthick/Action_Items
|
knkarthick
| 2022-10-20T12:10:12Z | 75 | 7 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"seq2seq",
"en",
"dataset:Custom",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-20T10:45:18Z |
---
language: en
tags:
- distilbert
- seq2seq
- text-classification
license: apache-2.0
datasets:
- Custom
metrics:
- Accuracy
- Precision
- Recall
widget:
- text: |-
Let's start the project as soon as possible as we are running out of deadline.
model-index:
- name: Action_Items
results:
- task:
name: Action Item Classification
type: text-classification
dataset:
name: Custom
type: custom
metrics:
- name: Validation Accuracy
type: accuracy
value:
- name: Validation Precision
type: precision
value:
- name: Validation Recall
type: recall
value:
- name: Test Accuracy
type: accuracy
value:
- name: Test Precision
type: precision
value:
- name: Test Recall
type: recall
value:
---
This model was obtained by fine-tuning DistilBERT on a custom dataset.
- LABEL_0: Not an Action Item
- LABEL_1: Action Item
## Usage
# Example 1
```python
from transformers import pipeline
summarizer = pipeline("text-classification", model="knkarthick/Action_Items")
text = '''
Customer portion will have the dependency of , you know , fifty five probably has to be on XGEVA before we can start that track , but we can at least start the enablement track for sales and CSM who are as important as customers because they're the top of our funnel , especially sales.
'''
summarizer(text)
```
# Example 2
```python
from transformers import pipeline
summarizer = pipeline("text-classification", model="knkarthick/Action_Items")
text = '''
India, officially the Republic of India, is a country in South Asia.
'''
summarizer(text)
```
# Example 3
```python
from transformers import pipeline
summarizer = pipeline("text-classification", model="knkarthick/Action_Items")
text = '''
We have been running the business successfully for over a decade now.
'''
summarizer(text)
```
|
bthomas/article2keyword2.1b_barthez-orangesum-title_finetuned16_for_mlm
|
bthomas
| 2022-10-20T12:04:52Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"mlm",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-20T09:46:19Z |
---
license: apache-2.0
tags:
- mlm
- generated_from_trainer
model-index:
- name: article2keyword2.1b_barthez-orangesum-title_finetuned16_for_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# article2keyword2.1b_barthez-orangesum-title_finetuned16_for_mlm
This model is a fine-tuned version of [moussaKam/barthez-orangesum-title](https://huggingface.co/moussaKam/barthez-orangesum-title) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0525
## Model description
More information needed
## Intended uses & limitations
More information needed
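No usage example is given. The model name suggests article-to-keyword generation, so a hedged sketch with the generic text2text pipeline might look like this (the exact input format used at training time is not documented):
```python
from transformers import pipeline

keyworder = pipeline(
    "text2text-generation",
    model="bthomas/article2keyword2.1b_barthez-orangesum-title_finetuned16_for_mlm",
)
# French article text is an assumption based on the BARThez base model.
print(keyworder("Texte d'un article de presse dont on souhaite extraire des mots-clés.", max_length=32))
```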
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2976 | 1.0 | 1353 | 0.0543 |
| 0.0566 | 2.0 | 2706 | 0.0509 |
| 0.0487 | 3.0 | 4059 | 0.0458 |
| 0.0433 | 4.0 | 5412 | 0.0456 |
| 0.04 | 5.0 | 6765 | 0.0460 |
| 0.0373 | 6.0 | 8118 | 0.0454 |
| 0.0355 | 7.0 | 9471 | 0.0465 |
| 0.0328 | 8.0 | 10824 | 0.0474 |
| 0.0317 | 9.0 | 12177 | 0.0470 |
| 0.03 | 10.0 | 13530 | 0.0488 |
| 0.0285 | 11.0 | 14883 | 0.0489 |
| 0.0272 | 12.0 | 16236 | 0.0500 |
| 0.0262 | 13.0 | 17589 | 0.0510 |
| 0.0258 | 14.0 | 18942 | 0.0511 |
| 0.0245 | 15.0 | 20295 | 0.0522 |
| 0.0239 | 16.0 | 21648 | 0.0525 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
sanantonioasapcreditrepair/sanantonioasapcreditrepair
|
sanantonioasapcreditrepair
| 2022-10-20T11:57:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-10-20T11:43:44Z |
We want to get to know you, but first you should get to know us!
We are a family-owned and operated Credit Repair company, founded in 2013. Our goal is to help you achieve financial success and reach your credit goals.
Follow this [link](https://sanantonio.asapcreditrepairusa.com/)
|
amanneo/mail-generator-mini
|
amanneo
| 2022-10-20T11:02:15Z | 11 | 0 |
transformers
|
[
"transformers",
"tf",
"gpt2",
"text-generation",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-20T07:56:09Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: amanneo/mail-generator-mini
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# amanneo/mail-generator-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.4613
- Train Accuracy: 0.1611
- Validation Loss: 5.2617
- Validation Accuracy: 0.1386
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': -925, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 10.0053 | 0.1068 | 8.5247 | 0.1394 | 0 |
| 8.7772 | 0.1505 | 7.9685 | 0.1656 | 1 |
| 8.2057 | 0.1663 | 7.4436 | 0.1655 | 2 |
| 7.5786 | 0.1611 | 6.8572 | 0.1654 | 3 |
| 6.9698 | 0.1679 | 6.3646 | 0.1735 | 4 |
| 6.4911 | 0.1763 | 6.0124 | 0.1787 | 5 |
| 6.1632 | 0.1834 | 5.7751 | 0.1826 | 6 |
| 5.9057 | 0.1840 | 5.5786 | 0.1749 | 7 |
| 5.6874 | 0.1758 | 5.4023 | 0.1616 | 8 |
| 5.4613 | 0.1611 | 5.2617 | 0.1386 | 9 |
### Framework versions
- Transformers 4.23.1
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.1
|
readerbench/RoSummary-large
|
readerbench
| 2022-10-20T10:00:37Z | 10 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-19T06:42:50Z |
---
language:
- ro
---
# RoSummary-large
This is a version of the RoGPT2 model trained on the [AlephNews](https://huggingface.co/datasets/readerbench/AlephNews) dataset for the summarization task. There are 3 trained versions, they are available on the HuggingFace Hub:
* [base](https://huggingface.co/readerbench/RoSummary-base)
* [medium](https://huggingface.co/readerbench/RoSummary-medium)
* [large](https://huggingface.co/readerbench/RoSummary-large)
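A minimal loading sketch (not from the original card; the prompt layout used for summarization is an assumption and should be adapted to the actual training format):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("readerbench/RoSummary-large")
model = AutoModelForCausalLM.from_pretrained("readerbench/RoSummary-large")

# Assumed prompt layout: Romanian article text followed by a summary cue.
inputs = tokenizer("Text: <articol in limba romana> Rezumat:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```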
## Evaluation on [AlephNews](https://huggingface.co/datasets/readerbench/AlephNews)
| Model | Decode Method | BERTScore Precision | BERTScore Recall | BERTScore F1 | ROUGE-1 | ROUGE-2 | ROUGE-L |
|:------:|:-------------:|:-------------------:|:----------------:|:------------:|:-------:|:-------:|:-------:|
| | Greedy | 0.7335 | 0.7399 | 0.7358 | 0.3360 | 0.1862 | 0.3333 |
| Base | Beam Search | 0.7354 | 0.7468 | 0.7404 | 0.3480 | 0.1991 | 0.3416 |
| | Top-p Sampling | 0.7296 | 0.7299 | 0.7292 | 0.3058 | 0.1452 | 0.2951 |
| | Greedy | 0.7378 | 0.7401 | 0.7380 | 0.3422 | 0.1922 | 0.3394 |
| Medium | Beam Search | 0.7390 | **0.7493**|**0.7434**|**0.3546**|**0.2061**|**0.3467**|
| | Top-p Sampling | 0.7315 | 0.7285 | 0.7294 | 0.3042 | 0.1400 | 0.2921 |
| | Greedy | 0.7376 | 0.7424 | 0.7391 | 0.3414 | 0.1895 | 0.3355 |
| Large | Beam Search | **0.7394**| 0.7470 | 0.7424 | 0.3492 | 0.1995 | 0.3384 |
| | Top-p Sampling | 0.7311 | 0.7301 | 0.7299 | 0.3051 | 0.1418 | 0.2931 |
## Acknowledgments
---
Research supported with [Cloud TPUs](https://cloud.google.com/tpu/) from Google's [TensorFlow Research Cloud (TFRC)](https://www.tensorflow.org/tfrc)
|
bthomas/article2keyword2.1b_paraphrase-multilingual-MiniLM-L12-v2_finetuned_for_mlm
|
bthomas
| 2022-10-20T09:36:12Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"mlm",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-20T08:33:40Z |
---
license: apache-2.0
tags:
- mlm
- generated_from_trainer
model-index:
- name: article2keyword2.1b_paraphrase-multilingual-MiniLM-L12-v2_finetuned_for_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# article2keyword2.1b_paraphrase-multilingual-MiniLM-L12-v2_finetuned_for_mlm
This model is a fine-tuned version of [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0673
## Model description
More information needed
## Intended uses & limitations
More information needed
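No usage example is included. A minimal sketch with the fill-mask pipeline (reading the mask token from the tokenizer rather than hardcoding it, since the base model's vocabulary is multilingual):
```python
from transformers import pipeline

fill = pipeline(
    "fill-mask",
    model="bthomas/article2keyword2.1b_paraphrase-multilingual-MiniLM-L12-v2_finetuned_for_mlm",
)
masked = f"Paris est la {fill.tokenizer.mask_token} de la France."
print(fill(masked))
```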
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.3777 | 1.0 | 1353 | 0.3168 |
| 0.2358 | 2.0 | 2706 | 0.1564 |
| 0.1372 | 3.0 | 4059 | 0.1149 |
| 0.1046 | 4.0 | 5412 | 0.0956 |
| 0.086 | 5.0 | 6765 | 0.0853 |
| 0.0741 | 6.0 | 8118 | 0.0786 |
| 0.0653 | 7.0 | 9471 | 0.0750 |
| 0.0594 | 8.0 | 10824 | 0.0726 |
| 0.0542 | 9.0 | 12177 | 0.0699 |
| 0.0504 | 10.0 | 13530 | 0.0692 |
| 0.047 | 11.0 | 14883 | 0.0684 |
| 0.0444 | 12.0 | 16236 | 0.0675 |
| 0.0423 | 13.0 | 17589 | 0.0674 |
| 0.0404 | 14.0 | 18942 | 0.0673 |
| 0.0392 | 15.0 | 20295 | 0.0672 |
| 0.0379 | 16.0 | 21648 | 0.0673 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
model-attribution-challenge/gpt2
|
model-attribution-challenge
| 2022-10-20T09:34:54Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"tflite",
"rust",
"safetensors",
"gpt2",
"text-generation",
"exbert",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-08T21:30:42Z |
---
language: en
tags:
- exbert
license: mit
---
# GPT-2
Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
[this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
and first released at [this page](https://openai.com/blog/better-language-models/).
Disclaimer: The team releasing GPT-2 also wrote a
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card
has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.
## Model description
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This
means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length, and the targets are the same sequence shifted
one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the
predictions for the token `i` only use the inputs from `1` to `i` and not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a
prompt.
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a downstream task. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
{'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
{'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
{'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
{'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
and in TensorFlow:
```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
### Limitations and bias
The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of
unfiltered content from the internet, which is far from neutral. As the openAI team themselves point out in their
[model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):
> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.
Here's an example of how the model can have biased predictions:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The White man worked as a mannequin for'},
{'generated_text': 'The White man worked as a maniser of the'},
{'generated_text': 'The White man worked as a bus conductor by day'},
{'generated_text': 'The White man worked as a plumber at the'},
{'generated_text': 'The White man worked as a journalist. He had'}]
>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)
[{'generated_text': 'The Black man worked as a man at a restaurant'},
{'generated_text': 'The Black man worked as a car salesman in a'},
{'generated_text': 'The Black man worked as a police sergeant at the'},
{'generated_text': 'The Black man worked as a man-eating monster'},
{'generated_text': 'The Black man worked as a slave, and was'}]
```
This bias will also affect all fine-tuned versions of this model.
## Training data
The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web
pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from
this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs
40GB of texts but has not been publicly released. You can find a list of the top 1,000 domains present in WebText
[here](https://github.com/openai/gpt-2/blob/master/domains.txt).
## Training procedure
### Preprocessing
The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a
vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.
The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact
details of training.
## Evaluation results
The model achieves the following results without any fine-tuning (zero-shot):
| Dataset | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB | enwiki8 | text8 | WikiText103 | 1BW |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:------:|:-------:|:------:|:-----------:|:-----:|
| (metric) | (PPL) | (ACC) | (ACC) | (ACC) | (PPL) | (PPL) | (BPB) | (BPC) | (PPL) | (PPL) |
| | 35.13 | 45.99 | 87.65 | 83.4 | 29.41 | 65.85 | 1.16 | 1.17 | 37.50 | 75.20 |
### BibTeX entry and citation info
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
<a href="https://huggingface.co/exbert/?model=gpt2">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
mprzibilla/super_large_finetune_CM01
|
mprzibilla
| 2022-10-20T09:04:35Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-19T23:12:30Z |
---
tags:
- generated_from_trainer
model-index:
- name: super_large_finetune_CM01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# super_large_finetune_CM01
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2285
- Wer: 0.7714
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 15
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 857
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0031 | 5.0 | 1715 | 1.9766 | 0.7857 |
| 0.2107 | 10.0 | 3430 | 3.8748 | 0.8238 |
| 0.1393 | 15.0 | 5145 | 4.7403 | 0.7952 |
| 0.0931 | 20.0 | 6860 | 3.5077 | 0.6667 |
| 0.0649 | 25.0 | 8575 | 7.7419 | 0.9333 |
| 0.0592 | 30.0 | 10290 | 5.6440 | 0.7762 |
| 0.0396 | 35.0 | 12005 | 6.9629 | 0.6810 |
| 0.03 | 40.0 | 13720 | 7.8282 | 0.7524 |
| 0.0191 | 45.0 | 15435 | 6.4626 | 0.7429 |
| 0.0121 | 50.0 | 17150 | 7.2285 | 0.7714 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.2+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
auriolar/Reinforce-CartPole-v1
|
auriolar
| 2022-10-20T08:37:22Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-20T08:35:04Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 161.70 +/- 52.53
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Ddff/Edee
|
Ddff
| 2022-10-20T07:54:35Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2022-10-20T07:54:35Z |
---
license: bigscience-openrail-m
---
|
nayan06/binary-classifier-conversion-intent-1.1-l12
|
nayan06
| 2022-10-20T07:05:09Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-18T11:34:15Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SetFit Classification Model on the Conversion Dataset (L12 SBERT base)
This is a SetFit model that uses an L12 sentence-transformers model as its base for classification.
<!--- Describe your model here -->
## Usage (Setfit)
```
pip install setfit
```
Then you can use the model like this:
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("nayan06/binary-classifier-conversion-intent-1.1-l12")
prediction = model(['i want to buy thing'])
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nayan06/binary-classifier-conversion-intent-1.1-l12)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2163 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2163,
"warmup_steps": 217,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Dataset Used
https://huggingface.co/datasets/nayan06/conversion1.0
## Citing & Authors
<!--- Describe where people can find more information -->
|
nayan06/binary-classifier-conversion-intent-1.1-mpnet
|
nayan06
| 2022-10-20T07:04:31Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-18T12:00:33Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# SetFit Classification Model on the Conversion Dataset (MPNet SBERT base)
This is a SetFit model that uses an MPNet sentence-transformers model as its base for classification.
<!--- Describe your model here -->
## Usage (Setfit)
```
pip install setfit
```
Then you can use the model like this:
```python
from setfit import SetFitModel
model = SetFitModel.from_pretrained("nayan06/binary-classifier-conversion-intent-1.1-mpnet")
prediction = model(['i want to buy thing'])
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=nayan06/binary-classifier-conversion-intent-1.1-mpnet)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2163 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 2163,
"warmup_steps": 217,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Dataset Used
https://huggingface.co/datasets/nayan06/conversion1.0
## Citing & Authors
<!--- Describe where people can find more information -->
|
ArafatBHossain/bert-distilled-model-flip_mind_epoch12_alpha0.8
|
ArafatBHossain
| 2022-10-20T06:26:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-20T05:35:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-distilled-model-flip_mind_epoch12_alpha0.8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-distilled-model-flip_mind_epoch12_alpha0.8
This model is a fine-tuned version of [ArafatBHossain/distill_bert_fine_tuned_mind](https://huggingface.co/ArafatBHossain/distill_bert_fine_tuned_mind) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7953
- Accuracy: 0.914
## Model description
More information needed
## Intended uses & limitations
More information needed
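As a rough sketch (not from the original card; the class labels are not documented, so `id2label` may only contain generic `LABEL_i` entries):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "ArafatBHossain/bert-distilled-model-flip_mind_epoch12_alpha0.8"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example headline to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```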
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.8595 | 1.0 | 3054 | 1.8311 | 0.854 |
| 1.7769 | 2.0 | 6108 | 1.7204 | 0.847 |
| 1.7614 | 3.0 | 9162 | 1.7666 | 0.8666 |
| 1.7212 | 4.0 | 12216 | 1.8134 | 0.8716 |
| 1.7255 | 5.0 | 15270 | 1.7368 | 0.8812 |
| 1.6845 | 6.0 | 18324 | 1.7368 | 0.8898 |
| 1.7346 | 7.0 | 21378 | 1.6621 | 0.8936 |
| 1.7436 | 8.0 | 24432 | 1.7180 | 0.9008 |
| 1.7333 | 9.0 | 27486 | 1.7523 | 0.9048 |
| 1.7805 | 10.0 | 30540 | 1.7820 | 0.9078 |
| 1.792 | 11.0 | 33594 | 1.7329 | 0.9096 |
| 1.7463 | 12.0 | 36648 | 1.7953 | 0.914 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.11.0
- Datasets 2.6.1
- Tokenizers 0.12.1
|
nguyenkhoa2407/bert-base-cased-NER-favsbot
|
nguyenkhoa2407
| 2022-10-20T05:11:31Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:favsbot",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-23T15:57:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- favsbot
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-NER-favsbot
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: favsbot
type: favsbot
config: default
split: train
args: default
metrics:
- name: Precision
type: precision
value: 0.8461538461538461
- name: Recall
type: recall
value: 0.88
- name: F1
type: f1
value: 0.8627450980392156
- name: Accuracy
type: accuracy
value: 0.9444444444444444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-NER-favsbot
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the favsbot dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1680
- Precision: 0.8462
- Recall: 0.88
- F1: 0.8627
- Accuracy: 0.9444
## Model description
More information needed
## Intended uses & limitations
More information needed
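A hedged usage sketch (the entity types come from the favsbot dataset and are not listed in this card):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="nguyenkhoa2407/bert-base-cased-NER-favsbot",
    aggregation_strategy="simple",
)
# Hypothetical input; adjust to the kind of sentences favsbot was built from.
print(ner("I usually shop at Target on Saturdays."))
```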
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 7 | 1.8761 | 0.0 | 0.0 | 0.0 | 0.5833 |
| No log | 2.0 | 14 | 1.3530 | 0.0 | 0.0 | 0.0 | 0.5972 |
| No log | 3.0 | 21 | 1.0400 | 1.0 | 0.12 | 0.2143 | 0.6389 |
| No log | 4.0 | 28 | 0.7987 | 0.7895 | 0.6 | 0.6818 | 0.8194 |
| No log | 5.0 | 35 | 0.6055 | 0.85 | 0.68 | 0.7556 | 0.875 |
| No log | 6.0 | 42 | 0.4749 | 0.8696 | 0.8 | 0.8333 | 0.9167 |
| No log | 7.0 | 49 | 0.3838 | 0.84 | 0.84 | 0.8400 | 0.9444 |
| No log | 8.0 | 56 | 0.3084 | 0.88 | 0.88 | 0.88 | 0.9583 |
| No log | 9.0 | 63 | 0.2643 | 0.88 | 0.88 | 0.88 | 0.9583 |
| No log | 10.0 | 70 | 0.2360 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 11.0 | 77 | 0.2168 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 12.0 | 84 | 0.2031 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 13.0 | 91 | 0.1937 | 0.88 | 0.88 | 0.88 | 0.9583 |
| No log | 14.0 | 98 | 0.1853 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 15.0 | 105 | 0.1791 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 16.0 | 112 | 0.1757 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 17.0 | 119 | 0.1718 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 18.0 | 126 | 0.1698 | 0.8148 | 0.88 | 0.8462 | 0.9444 |
| No log | 19.0 | 133 | 0.1686 | 0.8148 | 0.88 | 0.8462 | 0.9444 |
| No log | 20.0 | 140 | 0.1680 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
debbiesoon/longformer_summarise_large
|
debbiesoon
| 2022-10-20T03:55:16Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"led",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-20T03:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: longformer_summarise_large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer_summarise_large
This model is a fine-tuned version of [patrickvonplaten/led-large-16384-pubmed](https://huggingface.co/patrickvonplaten/led-large-16384-pubmed) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 1.2.1
- Tokenizers 0.12.1
|
debbiesoon/longformer_summarise
|
debbiesoon
| 2022-10-20T03:09:10Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"led",
"text2text-generation",
"generated_from_trainer",
"dataset:scientific_papers",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-20T02:24:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- scientific_papers
model-index:
- name: longformer_summarise
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longformer_summarise
This model is a fine-tuned version of [allenai/led-base-16384](https://huggingface.co/allenai/led-base-16384) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3003
- Rouge2 Precision: 0.1654
- Rouge2 Recall: 0.0966
- Rouge2 Fmeasure: 0.1118
## Model description
More information needed
## Intended uses & limitations
More information needed
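As an untested sketch (not from the original card; LED models accept long inputs, but the preprocessing used at training time, e.g. global attention settings, is not documented here):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="debbiesoon/longformer_summarise")
long_document = "..."  # placeholder for a long scientific article
print(summarizer(long_document, max_length=256)[0]["summary_text"])
```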
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure |
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|
| 2.909 | 0.08 | 10 | 2.8969 | 0.09 | 0.1439 | 0.0953 |
| 2.615 | 0.16 | 20 | 2.6182 | 0.1232 | 0.0865 | 0.0924 |
| 2.581 | 0.24 | 30 | 2.4687 | 0.1357 | 0.0733 | 0.09 |
| 2.1294 | 0.32 | 40 | 2.5215 | 0.1495 | 0.0932 | 0.1044 |
| 2.8083 | 0.4 | 50 | 2.3870 | 0.1794 | 0.1054 | 0.1224 |
| 3.0704 | 0.48 | 60 | 2.3676 | 0.1572 | 0.0989 | 0.1108 |
| 2.4716 | 0.56 | 70 | 2.3554 | 0.1707 | 0.1039 | 0.1198 |
| 2.454 | 0.64 | 80 | 2.3411 | 0.1619 | 0.0943 | 0.1115 |
| 2.3046 | 0.72 | 90 | 2.3105 | 0.1547 | 0.0965 | 0.1116 |
| 1.7467 | 0.8 | 100 | 2.3417 | 0.1551 | 0.0877 | 0.1046 |
| 2.7696 | 0.88 | 110 | 2.3226 | 0.1543 | 0.0954 | 0.1085 |
| 2.4999 | 0.96 | 120 | 2.3003 | 0.1654 | 0.0966 | 0.1118 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1+cu113
- Datasets 1.2.1
- Tokenizers 0.12.1
|
chrisjay/cos801-802-hf-workshop-mt5-small
|
chrisjay
| 2022-10-20T02:02:57Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:xlsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-10-19T23:56:56Z |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
datasets:
- xlsum
metrics:
- rouge
model-index:
- name: cos801-802-hf-workshop-mt5-small
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xlsum
type: xlsum
config: swahili
split: train
args: swahili
metrics:
- name: Rouge1
type: rouge
value: 20.928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cos801-802-hf-workshop-mt5-small
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the xlsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7998
- Rouge1: 20.928
- Rouge2: 6.3239
- Rougel: 17.4455
- Rougelsum: 17.4566
## Model description
More information needed
## Intended uses & limitations
More information needed
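A minimal generation sketch (not part of the original card; Swahili news text is the assumed input, matching the XL-Sum split used for fine-tuning):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "chrisjay/cos801-802-hf-workshop-mt5-small"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Makala ndefu ya habari kwa Kiswahili ...", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```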
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|
| 3.844 | 1.0 | 1975 | 2.7998 | 20.928 | 6.3239 | 17.4455 | 17.4566 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
PublicPrompts/16-bit-landscape_PublicPrompts
|
PublicPrompts
| 2022-10-20T01:58:39Z | 0 | 15 | null |
[
"license:openrail",
"region:us"
] | null | 2022-10-19T23:04:16Z |
---
license: openrail
---
This is a Stable Diffusion model trained using DreamBooth to create pixel art landscapes.
|
mariolinml/deberta-v3-base_mnli_uf_ner_1019_v1
|
mariolinml
| 2022-10-19T23:53:22Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-19T22:55:55Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-base_mnli_uf_ner_1019_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base_mnli_uf_ner_1019_v1
This model is a fine-tuned version of [mariolinml/deberta-v3-base_MNLI_10_19_v0](https://huggingface.co/mariolinml/deberta-v3-base_MNLI_10_19_v0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
CavenLen/ddpm-Kaga-128
|
CavenLen
| 2022-10-19T22:03:31Z | 19 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:CavenLen/Kaga",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-17T12:48:44Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: CavenLen/Kaga
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-Kaga-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `CavenLen/Kaga` dataset.
## Intended uses & limitations
#### How to use
A minimal sketch (assumed usage, not from the training script):
```python
from diffusers import DDPMPipeline

# Load the pipeline from the Hub and sample a single image.
pipeline = DDPMPipeline.from_pretrained("CavenLen/ddpm-Kaga-128")
image = pipeline().images[0]
image.save("sample.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/CavenLen/ddpm-Kaga-128/tensorboard?#scalars)
|
thucdangvan020999/marian-finetuned-kde4-en-to-fr
|
thucdangvan020999
| 2022-10-19T21:12:47Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-10-19T19:27:37Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.83113187001415
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8560
- Bleu: 52.8311
## Model description
More information needed
## Intended uses & limitations
More information needed
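A short usage sketch (not included in the original card):
```python
from transformers import pipeline

translator = pipeline("translation", model="thucdangvan020999/marian-finetuned-kde4-en-to-fr")
# KDE4 is software-UI text, so short interface strings are a natural fit.
print(translator("Default to expanded threads")[0]["translation_text"])
```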
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
g30rv17ys/ddpm-hkuoct-wamd-500ep
|
g30rv17ys
| 2022-10-19T20:24:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-19T18:10:25Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-hkuoct-wamd-500ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
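Until an official snippet is added, a minimal sketch along these lines should work, assuming the pipeline weights live in this repository (`g30rv17ys/ddpm-hkuoct-wamd-500ep`):
```python
from diffusers import DDPMPipeline

# Load the trained unconditional pipeline from the Hub (repository id is an assumption)
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-hkuoct-wamd-500ep")

image = pipeline().images[0]
image.save("sample.png")
```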
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-hkuoct-wamd-500ep/tensorboard?#scalars)
|
sd-concepts-library/sims-2-portrait
|
sd-concepts-library
| 2022-10-19T18:56:56Z | 0 | 4 | null |
[
"license:mit",
"region:us"
] | null | 2022-10-19T18:33:54Z |
---
license: mit
---
### Sims 2 Portrait on Stable Diffusion
This is the `<sims2-portrait>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
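As a hedged sketch, recent versions of 🧨 Diffusers can also load the concept directly via `load_textual_inversion` (the base Stable Diffusion checkpoint below is an assumption):
```python
import torch
from diffusers import StableDiffusionPipeline

# Any Stable Diffusion 1.x checkpoint should work; this one is an assumption
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <sims2-portrait> embedding from this concept repository
pipe.load_textual_inversion("sd-concepts-library/sims-2-portrait")

image = pipe("a portrait of a woman in the style of <sims2-portrait>").images[0]
image.save("sims2-portrait.png")
```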
Here is the new concept you will be able to use as a `style`:







Here are example images generated using this style:



I'm not satisfied with the result as it usually fails to capture the game's aesthetic.
|
ArafatBHossain/roberta-base-twitter_eval_sentiment
|
ArafatBHossain
| 2022-10-19T18:09:50Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-19T06:29:29Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base-twitter_eval_sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-twitter_eval_sentiment
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8020
- Accuracy: 0.6635
## Model description
More information needed
## Intended uses & limitations
More information needed
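As a hedged usage sketch (the returned label names depend on how the classification head was configured during fine-tuning):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ArafatBHossain/roberta-base-twitter_eval_sentiment")
print(classifier("I love the new update, everything feels faster!"))
```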
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0144 | 1.0 | 1875 | 0.9109 | 0.6025 |
| 0.8331 | 2.0 | 3750 | 0.8187 | 0.6555 |
| 0.7549 | 3.0 | 5625 | 0.8020 | 0.6635 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
api19750904/situaciones-turismo
|
api19750904
| 2022-10-19T17:59:42Z | 40 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-19T17:59:26Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: situaciones-turismo
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9101123809814453
---
# situaciones-turismo
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
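As a hedged usage sketch (the image path is illustrative):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="api19750904/situaciones-turismo")

# A local path or URL both work; this filename is illustrative
print(classifier("people_at_the_beach.jpg"))
```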
## Example Images
#### people beach

#### people party

#### people restaurant

#### people walking

|
g30rv17ys/ddpm-hkuoct-wamd-300ep
|
g30rv17ys
| 2022-10-19T17:45:19Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-10-19T16:22:40Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-hkuoct-wamd-300ep
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
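Until an official snippet is added, a minimal sketch along these lines should work, assuming the pipeline weights live in this repository (`g30rv17ys/ddpm-hkuoct-wamd-300ep`):
```python
from diffusers import DDPMPipeline

# Load the trained unconditional pipeline from the Hub (repository id is an assumption)
pipeline = DDPMPipeline.from_pretrained("g30rv17ys/ddpm-hkuoct-wamd-300ep")

image = pipeline().images[0]
image.save("sample.png")
```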
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/geevegeorge/ddpm-hkuoct-wamd-300ep/tensorboard?#scalars)
|
platzi/platzi-distilroberta-base-mrpc-glue-yeder-lvicente
|
platzi
| 2022-10-19T17:11:35Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-19T16:28:30Z |
---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-yeder-lvicente
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8235294117647058
- name: F1
type: f1
value: 0.8666666666666666
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-yeder-lvicente
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6547
- Accuracy: 0.8235
- F1: 0.8667
## Model description
More information needed
## Intended uses & limitations
More information needed
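As a hedged usage sketch for MRPC-style paraphrase detection on a sentence pair:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="platzi/platzi-distilroberta-base-mrpc-glue-yeder-lvicente",
)

result = classifier({"text": "The company reported strong earnings.",
                     "text_pair": "Strong earnings were reported by the company."})
print(result)
```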
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5096 | 1.09 | 500 | 0.7020 | 0.8235 | 0.8780 |
| 0.3387 | 2.18 | 1000 | 0.6547 | 0.8235 | 0.8667 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
api19750904/comida-vgm
|
api19750904
| 2022-10-19T16:54:30Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-19T16:54:16Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: comida-vgm
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9550561904907227
---
# comida-vgm
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
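As a hedged usage sketch (the image path is illustrative):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="api19750904/comida-vgm")

# A local path or URL both work; this filename is illustrative
print(classifier("pizza_margherita.jpg"))
```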
## Example Images
#### burguer

#### macarroni

#### pizza

#### spaguetti

|
jayanta/resnet-50-finetuned-memes-v2
|
jayanta
| 2022-10-19T16:28:21Z | 16 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-10-19T16:18:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-50-finetuned-memes-v2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.4567233384853168
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-finetuned-memes-v2
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3295
- Accuracy: 0.4567
## Model description
More information needed
## Intended uses & limitations
More information needed
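As a hedged usage sketch (the image path is illustrative):
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="jayanta/resnet-50-finetuned-memes-v2")
print(classifier("meme.png"))
```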
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00012
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4954 | 0.99 | 20 | 1.4559 | 0.4567 |
| 1.407 | 1.99 | 40 | 1.3772 | 0.4567 |
| 1.3744 | 2.99 | 60 | 1.3378 | 0.4567 |
| 1.3427 | 3.99 | 80 | 1.3295 | 0.4567 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1.dev0
- Tokenizers 0.13.1
|
huggingtweets/konradha_
|
huggingtweets
| 2022-10-19T16:11:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-10-19T16:09:29Z |
---
language: en
thumbnail: http://www.huggingtweets.com/konradha_/1666195856134/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1540685336422088704/JDxiybNe_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Konrad</div>
<div style="text-align: center; font-size: 14px;">@konradha_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Konrad.
| Data | Konrad |
| --- | --- |
| Tweets downloaded | 256 |
| Retweets | 38 |
| Short tweets | 75 |
| Tweets kept | 143 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3ox7i4yk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @konradha_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/10k5hc9s) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/10k5hc9s/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/konradha_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
gabski/sbert-relative-claim-quality
|
gabski
| 2022-10-19T16:10:59Z | 1 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-10-19T15:59:40Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Model
This [sentence-transformers](https://www.SBERT.net) model was obtained by fine-tuning bert-base-cased on the ClaimRev dataset.
Paper: [Learning From Revisions: Quality Assessment of Claims in Argumentation at Scale](https://aclanthology.org/2021.eacl-main.147/)
Authors: Gabriella Skitalinskaya, Jonas Klaff, Henning Wachsmuth
# Claim Quality Classification
We cast this task as a pairwise classification task, where the objective is to compare two versions of the same claim and determine which one is better. We train this model by fine-tuning SBERT based on bert-base-cased using a siamese network structure with softmax loss. Outputs can also be used to rank multiple versions of the same claim, for example, using [SVMRank](https://github.com/ds4dm/PySVMRank) or BTL (Bradley-Terry-Luce model).
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('gabski/sbert-relative-claim-quality')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('gabski/sbert-relative-claim-quality')
model = AutoModel.from_pretrained('gabski/sbert-relative-claim-quality')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
```bibtex
@inproceedings{skitalinskaya-etal-2021-learning,
title = "Learning From Revisions: Quality Assessment of Claims in Argumentation at Scale",
author = "Skitalinskaya, Gabriella and
Klaff, Jonas and
Wachsmuth, Henning",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-main.147",
doi = "10.18653/v1/2021.eacl-main.147",
pages = "1718--1729",
}
```
|
gabski/bert-relative-claim-quality
|
gabski
| 2022-10-19T16:09:19Z | 18 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"dataset:ClaimRev",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-10-19T14:04:22Z |
---
language: en
license: cc-by-nc-sa-4.0
datasets:
- ClaimRev
---
# Model
This model was obtained by fine-tuning bert-base-cased on the ClaimRev dataset.
Paper: [Learning From Revisions: Quality Assessment of Claims in Argumentation at Scale](https://aclanthology.org/2021.eacl-main.147/)
Authors: Gabriella Skitalinskaya, Jonas Klaff, Henning Wachsmuth
# Claim Quality Classification
We cast this task as a pairwise classification task, where the objective is to compare two versions of the same claim and determine which one is better.
# Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("gabski/bert-relative-claim-quality")
model = AutoModelForSequenceClassification.from_pretrained("gabski/bert-relative-claim-quality")
claim_1 = 'Smoking marijuana is less harmfull then smoking cigarettes.'
claim_2 = 'Smoking marijuana is less harmful than smoking cigarettes.'
model_input = tokenizer(claim_1,claim_2, return_tensors='pt')
model_outputs = model(**model_input)
outputs = torch.nn.functional.softmax(model_outputs.logits, dim = -1)
print(outputs)
```
|
asi/albert-act-tiny
|
asi
| 2022-10-19T15:22:06Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"albert_act",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-10-11T14:03:17Z |
---
license: apache-2.0
language: en
datasets:
- wikipedia
- bookcorpus
model-index:
- name: asi/albert-act-tiny
results:
- task:
type: text-classification
name: CoLA
dataset:
type: glue
name: CoLA # General Language Understanding Evaluation benchmark (GLUE)
split: cola
metrics:
- type: matthews_correlation
value: 27.5
name: Matthew's Corr
- task:
type: text-classification
name: SST-2
dataset:
type: glue
name: SST-2 # The Stanford Sentiment Treebank
split: sst2
metrics:
- type: accuracy
value: 87.6
name: Accuracy
- task:
type: text-classification
name: MRPC
dataset:
type: glue
name: MRPC # Microsoft Research Paraphrase Corpus
split: mrpc
metrics:
- type: accuracy
value: 78.7
name: Accuracy
- type: f1
value: 84.7
name: F1
- task:
type: text-similarity
name: STS-B
dataset:
type: glue
name: STS-B # Semantic Textual Similarity Benchmark
split: stsb
metrics:
- type: spearmanr
value: 79.7
name: Spearman Corr
- type: pearsonr
value: 81.8
name: Pearson Corr
- task:
type: text-classification
name: QQP
dataset:
type: glue
name: QQP # Quora Question Pairs
split: qqp
metrics:
- type: f1
value: 67.8
name: F1
- type: accuracy
value: 87.5
name: Accuracy
- task:
type: text-classification
name: MNLI-m
dataset:
type: glue
name: MNLI-m # MultiNLI Matched
split: mnli_matched
metrics:
- type: accuracy
value: 77.0
name: Accuracy
- task:
type: text-classification
name: MNLI-mm
dataset:
type: glue
name: MNLI-mm # MultiNLI Matched
split: mnli_mismatched
metrics:
- type: accuracy
value: 76.8
name: Accuracy
- task:
type: text-classification
name: QNLI
dataset:
type: glue
name: QNLI # Question NLI
split: qnli
metrics:
- type: accuracy
value: 86.4
name: Accuracy
- task:
type: text-classification
name: RTE
dataset:
type: glue
name: RTE # Recognizing Textual Entailment
split: rte
metrics:
- type: accuracy
value: 62.0
name: Accuracy
- task:
type: text-classification
name: WNLI
dataset:
type: glue
name: WNLI # Winograd NLI
split: wnli
metrics:
- type: accuracy
value: 65.1
name: Accuracy
---
# Adaptive Depth Transformers
Implementation of the paper "How Many Layers and Why? An Analysis of the Model Depth in Transformers". In this study, we investigate the role of the multiple layers in deep transformer models. We design a variant of ALBERT that dynamically adapts the number of layers for each token of the input.
## Model architecture
We augment a multi-layer transformer encoder with a halting mechanism, which dynamically adjusts the number of layers for each token.
We directly adapted this mechanism from Graves ([2016](#graves-2016)). At each iteration, we compute a probability for each token to stop updating its state.
## Model use
The architecture is not yet directly included in the Transformers library. The code used for pre-training is available in the following [github repository](https://github.com/AntoineSimoulin/adaptive-depth-transformers). So you should install the code implementation first:
```bash
!pip install git+https://github.com/AntoineSimoulin/adaptive-depth-transformers
```
Then you can use the model directly.
```python
from act import AlbertActConfig, AlbertActModel, TFAlbertActModel
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('asi/albert-act-tiny')
model = AlbertActModel.from_pretrained('asi/albert-act-tiny')
_ = model.eval()
inputs = tokenizer("a lump in the middle of the monkeys stirred and then fell quiet .", return_tensors="pt")
outputs = model(**inputs)
outputs.updates
# e.g. tensor([[[[15., 9., 10., 7., 3., 8., 5., 7., 12., 10., 6., 8., 8., 9., 5., 8.]]]])
```
## Citations
### BibTeX entry and citation info
If you use our iterative transformer model for your scientific publication or your industrial applications, please cite the following [paper](https://aclanthology.org/2021.acl-srw.23/):
```bibtex
@inproceedings{simoulin-crabbe-2021-many,
title = "How Many Layers and Why? {A}n Analysis of the Model Depth in Transformers",
author = "Simoulin, Antoine and
Crabb{\'e}, Benoit",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-srw.23",
doi = "10.18653/v1/2021.acl-srw.23",
pages = "221--228",
}
```
### References
><div id="graves-2016">Alex Graves. 2016. Adaptive computation time for recurrent neural networks. CoRR, abs/1603.08983.</div>
|
asi/albert-act-small
|
asi
| 2022-10-19T15:21:44Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"albert_act",
"en",
"dataset:wikipedia",
"dataset:bookcorpus",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | 2022-10-11T14:04:00Z |
---
license: apache-2.0
language: en
datasets:
- wikipedia
- bookcorpus
model-index:
- name: asi/albert-act-small
results:
- task:
type: text-classification
name: CoLA
dataset:
type: glue
name: CoLA # General Language Understanding Evaluation benchmark (GLUE)
split: cola
metrics:
- type: matthews_correlation
value: 33.8
name: Matthew's Corr
- task:
type: text-classification
name: SST-2
dataset:
type: glue
name: SST-2 # The Stanford Sentiment Treebank
split: sst2
metrics:
- type: accuracy
value: 88.6
name: Accuracy
- task:
type: text-classification
name: MRPC
dataset:
type: glue
name: MRPC # Microsoft Research Paraphrase Corpus
split: mrpc
metrics:
- type: accuracy
value: 79.4
name: Accuracy
- type: f1
value: 85.2
name: F1
- task:
type: text-similarity
name: STS-B
dataset:
type: glue
name: STS-B # Semantic Textual Similarity Benchmark
split: stsb
metrics:
- type: spearmanr
value: 81.2
name: Spearman Corr
- type: pearsonr
value: 82.7
name: Pearson Corr
- task:
type: text-classification
name: QQP
dataset:
type: glue
name: QQP # Quora Question Pairs
split: qqp
metrics:
- type: f1
value: 67.8
name: F1
- type: accuracy
value: 87.4
name: Accuracy
- task:
type: text-classification
name: MNLI-m
dataset:
type: glue
name: MNLI-m # MultiNLI Matched
split: mnli_matched
metrics:
- type: accuracy
value: 79.5
name: Accuracy
- task:
type: text-classification
name: MNLI-mm
dataset:
type: glue
name: MNLI-mm # MultiNLI Matched
split: mnli_mismatched
metrics:
- type: accuracy
value: 78.5
name: Accuracy
- task:
type: text-classification
name: QNLI
dataset:
type: glue
name: QNLI # Question NLI
split: qnli
metrics:
- type: accuracy
value: 88.3
name: Accuracy
- task:
type: text-classification
name: RTE
dataset:
type: glue
name: RTE # Recognizing Textual Entailment
split: rte
metrics:
- type: accuracy
value: 61.9
name: Accuracy
- task:
type: text-classification
name: WNLI
dataset:
type: glue
name: WNLI # Winograd NLI
split: wnli
metrics:
- type: accuracy
value: 65.1
name: Accuracy
---
# Adaptive Depth Transformers
Implementation of the paper "How Many Layers and Why? An Analysis of the Model Depth in Transformers". In this study, we investigate the role of the multiple layers in deep transformer models. We design a variant of ALBERT that dynamically adapts the number of layers for each token of the input.
## Model architecture
We augment a multi-layer transformer encoder with a halting mechanism, which dynamically adjusts the number of layers for each token.
We directly adapted this mechanism from Graves ([2016](#graves-2016)). At each iteration, we compute a probability for each token to stop updating its state.
## Model use
The architecture is not yet directly included in the Transformers library. The code used for pre-training is available in the following [github repository](https://github.com/AntoineSimoulin/adaptive-depth-transformers). So you should install the code implementation first:
```bash
!pip install git+https://github.com/AntoineSimoulin/adaptive-depth-transformers
```
Then you can use the model directly.
```python
from act import AlbertActConfig, AlbertActModel, TFAlbertActModel
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained('asi/albert-act-small')
model = AlbertActModel.from_pretrained('asi/albert-act-small')
_ = model.eval()
inputs = tokenizer("a lump in the middle of the monkeys stirred and then fell quiet .", return_tensors="pt")
outputs = model(**inputs)
outputs.updates
# e.g. tensor([[[[15., 9., 10., 7., 3., 8., 5., 7., 12., 10., 6., 8., 8., 9., 5., 8.]]]])
```
## Citations
### BibTeX entry and citation info
If you use our iterative transformer model for your scientific publication or your industrial applications, please cite the following [paper](https://aclanthology.org/2021.acl-srw.23/):
```bibtex
@inproceedings{simoulin-crabbe-2021-many,
title = "How Many Layers and Why? {A}n Analysis of the Model Depth in Transformers",
author = "Simoulin, Antoine and
Crabb{\'e}, Benoit",
booktitle = "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: Student Research Workshop",
month = aug,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.acl-srw.23",
doi = "10.18653/v1/2021.acl-srw.23",
pages = "221--228",
}
```
### References
><div id="graves-2016">Alex Graves. 2016. Adaptive computation time for recurrent neural networks. CoRR, abs/1603.08983.</div>
|
bthomas/article2keyword2.2_barthez-orangesum-title_finetuned_for_mlm
|
bthomas
| 2022-10-19T14:48:17Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"mlm",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-10-19T14:32:22Z |
---
license: apache-2.0
tags:
- mlm
- generated_from_trainer
model-index:
- name: article2keyword2.2_barthez-orangesum-title_finetuned_for_mlm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# article2keyword2.2_barthez-orangesum-title_finetuned_for_mlm
This model is a fine-tuned version of [moussaKam/barthez-orangesum-title](https://huggingface.co/moussaKam/barthez-orangesum-title) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0452
## Model description
More information needed
## Intended uses & limitations
More information needed
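As a hedged sketch of how this seq2seq checkpoint could be queried (the input text is illustrative; the card does not document the expected prompt format):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "bthomas/article2keyword2.2_barthez-orangesum-title_finetuned_for_mlm"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("Le texte de l'article dont on veut extraire des mots-clés.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```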
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3187 | 1.0 | 1235 | 0.0545 |
| 0.0544 | 2.0 | 2470 | 0.0491 |
| 0.0461 | 3.0 | 3705 | 0.0463 |
| 0.042 | 4.0 | 4940 | 0.0452 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
|
mercelisw/xlm-roberta-base-extended-language-detection
|
mercelisw
| 2022-10-19T14:40:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-30T09:45:18Z |
---
languages:
- la
- grc
- he
---
This model builds upon [an existing language detection model](https://huggingface.co/papluca/xlm-roberta-base-language-detection). It uses the same dataset, extended with Latin, Ancient Greek and (modern) Hebrew texts.
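As a hedged usage sketch:
```python
from transformers import pipeline

language_id = pipeline("text-classification", model="mercelisw/xlm-roberta-base-extended-language-detection")

# Expected to be detected as Latin
print(language_id("Gallia est omnis divisa in partes tres"))
```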
|
theodotus/stt_uk_squeezeformer_rnnt_xs
|
theodotus
| 2022-10-19T14:33:51Z | 6 | 0 |
nemo
|
[
"nemo",
"automatic-speech-recognition",
"uk",
"dataset:mozilla-foundation/common_voice_10_0",
"dataset:Yehor/voa-uk-transcriptions",
"license:bsd-3-clause",
"model-index",
"region:us"
] |
automatic-speech-recognition
| 2022-10-17T18:08:59Z |
---
language:
- uk
library_name: nemo
datasets:
- mozilla-foundation/common_voice_10_0
- Yehor/voa-uk-transcriptions
tags:
- automatic-speech-recognition
model-index:
- name: stt_uk_squeezeformer_rnnt_xs
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mozilla Common Voice 10.0
type: mozilla-foundation/common_voice_10_0
config: clean
split: test
args:
language: uk
metrics:
- name: Test WER
type: wer
value: 8.814
license: bsd-3-clause
---
# Squeezeformer-RNNT XS (uk-UA)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets) |
|
mclarknc/ppo-LunarLander-v2
|
mclarknc
| 2022-10-19T14:03:08Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-19T14:02:48Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.18 +/- 23.61
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal loading sketch (the stored checkpoint filename is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load the trained PPO policy
checkpoint = load_from_hub(repo_id="mclarknc/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
model-attribution-challenge/bloom-560m
|
model-attribution-challenge
| 2022-10-19T12:35:58Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"safetensors",
"bloom",
"feature-extraction",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
"rw",
"sn",
"st",
"sw",
"ta",
"te",
"tn",
"ts",
"tum",
"tw",
"ur",
"vi",
"wo",
"xh",
"yo",
"zh",
"zhs",
"zht",
"zu",
"arxiv:1909.08053",
"arxiv:2110.02861",
"arxiv:2108.12409",
"license:bigscience-bloom-rail-1.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-11-09T22:43:31Z |
---
license: bigscience-bloom-rail-1.0
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zhs
- zht
- zu
pipeline_tag: text-generation
---
<h1 style='text-align: center '>BLOOM LM</h1>
<h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2>
<h3 style='text-align: center '>Model Card</h3>
<img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
Version 1.0 / 26.May.2022
## Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Data](#training-data)
4. [Risks and Limitations](#risks-and-limitations)
5. [Evaluation](#evaluation)
6. [Recommendations](#recommendations)
7. [Glossary and Calculations](#glossary-and-calculations)
8. [More Information](#more-information)
9. [Model Card Authors](#model-card-authors)
## Model Details
### Basics
*This section provides information for anyone who wants to know about the model.*
<details>
<summary>Click to expand</summary> <br/>
**Developed by:** BigScience ([website](https://bigscience.huggingface.co))
* All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)*
**Model Type:** Transformer-based Language Model
**Version:** 1.0.0
**Languages:** Multiple; see [training data](#training-data)
**License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license))
**Release Date Estimate:** Monday, 11.July.2022
**Send Questions to:** [email protected]
**Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022
**Funded by:**
* The French government.
* Hugging Face ([website](https://huggingface.co)).
* Organizations of contributors. *(Further breakdown of organizations forthcoming.)*
</details>
### Technical Specifications
*This section provides information for people who work on model development.*
<details>
<summary>Click to expand</summary><br/>
Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training.
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)):
* Decoder-only architecture
* Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf))
* ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions
* 559,214,592 parameters:
* 256,901,120 embedding parameters
* 24 layers, 16 attention heads
* Hidden layers are 1024-dimensional
* Sequence length of 2048 tokens (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization))
**Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)).
**Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)).
* Hardware: 384 A100 80GB GPUs (48 nodes):
* Additional 32 A100 80GB GPUs (4 nodes) in reserve
* 8 GPUs per node, using NVLink 4 inter-GPU connects, 4 OmniPath links
* CPU: AMD
* CPU memory: 512GB per node
* GPU memory: 640GB per node
* Inter-node connect: Omni-Path Architecture (OPA)
* NCCL-communications network: a fully dedicated subnet
* Disc IO network: shared network with other types of nodes
* Software:
* Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed))
* DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed))
* PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch))
* apex ([Github link](https://github.com/NVIDIA/apex))
#### **Training**
Training logs: [Tensorboard link](https://huggingface.co/bigscience/tr11e-350M-logs)
- Training throughput: About 150 TFLOPs per GPU
- Number of epochs: 1 (*current target*)
- Dates:
- Started 11th March, 2022 11:42am PST
- Ended 5th July, 2022
- Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments and other model sizes)
- Server training location: Île-de-France, France
#### **Tokenization**
The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using:
- A byte-level Byte Pair Encoding (BPE) algorithm
- A simple pre-tokenization rule, no normalization
- A vocabulary size of 250,680
It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language.
</details>
### Environmental Impact
<details>
<summary>Click to expand</summary><br/>
The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing.
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)*
**Estimated electricity usage:** *(Forthcoming upon completion of training.)*
</details>
<p> </p>
## Uses
*This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model.
It provides information for anyone considering using the model or who is affected by the model.*
<details>
<summary>Click to expand</summary><br/>
### Intended Use
This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive.
#### **Direct Use**
- Text generation
- Exploring characteristics of language generated by a language model
- Examples: Cloze tests, counterfactuals, generations with reframings
#### **Downstream Use**
- Tasks that leverage language models include: Information Extraction, Question Answering, Summarization
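As a hedged illustration of direct use for text generation (this repository appears to mirror the BigScience `bloom-560m` checkpoint):
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "model-attribution-challenge/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The BigScience workshop was created to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```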
### Misuse and Out-of-scope Use
*This section addresses what users ought not do with the model.*
See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases.
#### **Out-of-scope Uses**
Using the model in [high-stakes](#high-stakes) settings is out of scope for this model. The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct.
##### Out-of-scope Uses Include:
- Usage in biomedical domains, political and legal domains, or finance domains
- Usage for evaluating or scoring individuals, such as for employment, education, or credit
- Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct
#### **Misuse**
Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes:
- Spam generation
- Disinformation and influence operations
- Disparagement and defamation
- Harassment and abuse
- [Deception](#deception)
- Unconsented impersonation and imitation
- Unconsented surveillance
- Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license)
### Intended Users
#### **Direct Users**
- General Public
- Researchers
- Students
- Educators
- Engineers/developers
- Non-commercial entities
- Community advocates, including human and civil rights groups
#### Indirect Users
- Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use)
- Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license)
#### Others Affected (Parties Prenantes)
- People and groups referred to by the LLM
- People and groups exposed to outputs of, or decisions based on, the LLM
- People and groups whose original work is included in the LLM
</details>
<p> </p>
## Training Data
*This section provides a high-level overview of the training data. It is relevant for anyone who wants to know the basics of what the model is learning.*
<details>
<summary>Click to expand</summary><br/>
Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus).
Training data includes:
- 45 natural languages
- 12 programming languages
- In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.)
#### **Languages**
The pie chart shows the distribution of languages in training data.

The following table shows the further distribution of Niger-Congo and Indic languages in the training data.
<details>
<summary>Click to expand</summary><br/>
| Niger Congo | Percentage | | Indic | Percentage |
|----------------|------------ |------ |-----------|------------|
| Chi Tumbuka | 0.00002 | | Assamese | 0.01 |
| Kikuyu | 0.00004 | | Odia | 0.04 |
| Bambara | 0.00004 | | Gujarati | 0.04 |
| Akan | 0.00007 | | Marathi | 0.05 |
| Xitsonga | 0.00007 | | Punjabi | 0.05 |
| Sesotho | 0.00007 | | Kannada | 0.06 |
| Chi Chewa | 0.0001 | | Nepali | 0.07 |
| Setswana | 0.0002 | | Telugu | 0.09 |
| Northern Sotho | 0.0002 | | Malayalam | 0.10 |
| Fon | 0.0002 | | Urdu | 0.10 |
| Kirundi | 0.0003 | | Tamil | 0.20 |
| Wolof | 0.0004 | | Bengali | 0.50 |
| Kuganda | 0.0004 | | Hindi | 0.70 |
| Chi Shona | 0.001 |
| Isi Zulu | 0.001 |
| Igbo | 0.001 |
| Xhosa | 0.001 |
| Kinyarwanda | 0.003 |
| Yoruba | 0.006 |
| Swahili | 0.02 |
</details>
The following table shows the distribution of programming languages.
<details>
<summary>Click to expand</summary><br/>
| Extension | Language | Number of files |
|----------------|------------|-----------------|
| java | Java | 5,407,724 |
| php | PHP | 4,942,186 |
| cpp | C++ | 2,503,930 |
| py | Python | 2,435,072 |
| js | JavaScript | 1,905,518 |
| cs | C# | 1,577,347 |
| rb | Ruby | 678,413 |
| cc | C++ | 443,054 |
| hpp | C++ | 391,048 |
| lua | Lua | 352,317 |
| go | GO | 227,763 |
| ts | TypeScript | 195,254 |
| C | C | 134,537 |
| scala | Scala | 92,052 |
| hh | C++ | 67,161 |
| H | C++ | 55,899 |
| tsx | TypeScript | 33,107 |
| rs | Rust | 29,693 |
| phpt | PHP | 9,702 |
| c++ | C++ | 1,342 |
| h++ | C++ | 791 |
| php3 | PHP | 540 |
| phps | PHP | 270 |
| php5 | PHP | 166 |
| php4 | PHP | 29 |
</details>
</details>
<p> </p>
## Risks and Limitations
*This section identifies foreseeable harms and misunderstandings.*
<details>
<summary>Click to expand</summary><br/>
Model may:
- Overrepresent some viewpoints and underrepresent others
- Contain stereotypes
- Contain [personal information](#personal-data-and-information)
- Generate:
- Hateful, abusive, or violent language
- Discriminatory or prejudicial language
- Content that may not be appropriate for all settings, including sexual content
- Make errors, including producing incorrect information as if it were factual
- Generate irrelevant or repetitive outputs
</details>
<p> </p>
## Evaluation
*This section describes the evaluation protocols and provides the results.*
<details>
<summary>Click to expand</summary><br/>
### Metrics
*This section describes the different ways performance is calculated and why.*
Includes:
| Metric | Why chosen |
|--------------------|--------------------------------------------------------------------|
| [Perplexity](#perplexity) | Standard metric for quantifying model improvements during training |
| Cross Entropy [Loss](#loss) | Standard objective for language models. |
And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_
### Factors
*This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.*
- Language, such as English or Yoruba
- Domain, such as newswire or stories
- Demographic characteristics, such as gender or nationality
### Results
*Results are based on the [Factors](#factors) and [Metrics](#metrics).*
**Train-time Evaluation:**
As of 25.May.2022, 15:00 PST:
- Training Loss: 2.0
- Validation Loss: 2.2
- Perplexity: 8.9
(More evaluation scores forthcoming at the end of model training.)
</details>
<p> </p>
## Recommendations
*This section provides information on warnings and potential mitigations.*
<details>
<summary>Click to expand</summary><br/>
- Indirect users should be made aware when the content they're working with is created by the LLM.
- Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary.
- Models pretrained with the LLM should include an updated Model Card.
- Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments.
</details>
<p> </p>
## Glossary and Calculations
*This section defines common terms and how metrics are calculated.*
<details>
<summary>Click to expand</summary><br/>
- <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss.
- <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy.
- <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/).
- <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf).
- <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf).
- <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm).
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf))
- <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated.
</details>
<p> </p>
## More Information
<details>
<summary>Click to expand</summary><br/>
### Dataset Creation
Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling
### Technical Specifications
Blog post summarizing how the architecture, size, shape, and pre-training duration were selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours
More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model
Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml
Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss
Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
### Initial Results
Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book
</details>
<p> </p>
## Model Card Authors
*Ordered roughly chronologically and by amount of time spent.*
Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay, Niklas Muennighoff
|
thisisHJLee/wav2vec2-large-xls-r-300m-korean-s5
|
thisisHJLee
| 2022-10-19T11:41:38Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-10-19T06:20:30Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-korean-s5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-korean-s5
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0236
- Cer: 0.0041
## Model description
More information needed
## Intended uses & limitations
More information needed
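As a hedged usage sketch (the audio path is illustrative; 16 kHz mono audio is the usual expectation for XLS-R checkpoints):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="thisisHJLee/wav2vec2-large-xls-r-300m-korean-s5")
print(asr("korean_sample.wav"))
```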
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 8.6117 | 0.39 | 800 | 3.1381 | 0.9999 |
| 0.6539 | 0.78 | 1600 | 0.3426 | 0.0932 |
| 0.3963 | 1.17 | 2400 | 0.1645 | 0.0428 |
| 0.2286 | 1.56 | 3200 | 0.1001 | 0.0254 |
| 0.1656 | 1.94 | 4000 | 0.0567 | 0.0135 |
| 0.1271 | 2.33 | 4800 | 0.0334 | 0.0065 |
| 0.1039 | 2.72 | 5600 | 0.0236 | 0.0041 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.13.1
|
xh3b4sd/q-FrozenLake-v1-4x4-noSlippery
|
xh3b4sd
| 2022-10-19T11:38:24Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-10-19T11:38:16Z |
---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.42 +/- 0.49
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym

# `load_from_hub` and `evaluate_agent` are helper functions from the
# Hugging Face Deep RL course notebooks and are assumed to be defined in scope.
model = load_from_hub(repo_id="xh3b4sd/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
pavle-tsotskolauri/distilbert-base-uncased-finetuned-imdb
|
pavle-tsotskolauri
| 2022-10-19T11:12:06Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-10-19T10:50:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4738
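A minimal usage sketch (not part of the original card) with the `fill-mask` pipeline; note that an evaluation loss of 2.4738 corresponds to a perplexity of roughly exp(2.4738) ≈ 11.9. The example sentence is a placeholder:
```python
from transformers import pipeline

# Masked-language-model inference with the fine-tuned checkpoint.
fill_mask = pipeline(
    "fill-mask",
    model="pavle-tsotskolauri/distilbert-base-uncased-finetuned-imdb",
)

for pred in fill_mask("This movie was absolutely [MASK]."):
    print(f'{pred["token_str"]}: {pred["score"]:.3f}')
```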
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
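The card does not record the preprocessing, but masked-LM fine-tuning of this kind typically pairs the hyperparameters above with `DataCollatorForLanguageModeling`; a sketch under that assumption:
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Typical masked-LM collator for DistilBERT fine-tuning; 15% masking is the library default.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```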
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7133 | 1.0 | 157 | 2.4957 |
| 2.5751 | 2.0 | 314 | 2.4250 |
| 2.5293 | 3.0 | 471 | 2.4358 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|