The dataset columns, with the value ranges reported by the dataset viewer:

| Column | Type | Range / values |
|---|---|---|
| modelId | string | lengths 5 to 139 |
| author | string | lengths 2 to 42 |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 to 2025-07-30 18:29:32 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 538 classes |
| tags | list | lengths 1 to 4.05k |
| pipeline_tag | string | 55 classes |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 to 2025-07-30 18:29:11 |
| card | string | lengths 11 to 1.01M |
damilare-akin/q-FrozenLake-v1-4x4-noSlippery
|
damilare-akin
| 2022-09-01T21:06:06Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-01T21:06:01Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# load_from_hub, gym and evaluate_agent are helpers/imports defined in the training
# notebook (e.g. the Hugging Face Deep RL Course); this snippet assumes they are in scope.
model = load_from_hub(repo_id="damilare-akin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
caffsean/t5-large-finetune-keyword-to-text-generation
|
caffsean
| 2022-09-01T20:52:39Z | 11 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T21:49:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-large-finetune-keyword-to-text-generation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-large-finetune-keyword-to-text-generation
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1471
- Rouge1: 2.175
- Rouge2: 0.3661
- Rougel: 1.7927
- Rougelsum: 1.7951
- Gen Len: 15.3252
## Model description
This model is designed to generate text from a single keyword. This project is intended to be used for generating vocabulary questions for ed-tech applications.
NOTE!: Be sure to use the 'summarize: ' prefix before the word that you would like to un-summarize.
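A minimal inference sketch, assuming the standard `transformers` text2text-generation pipeline (the keyword and generation length below are illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="caffsean/t5-large-finetune-keyword-to-text-generation",
)
# The card asks for the 'summarize: ' prefix in front of the keyword.
print(generator("summarize: serendipity", max_length=32))
```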
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
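As a rough illustration, these settings correspond to a `Seq2SeqTrainingArguments` configuration along the following lines (the `output_dir` is a placeholder; anything not listed above is left at its default):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-large-finetune-keyword-to-text-generation",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```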
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.3083 | 1.0 | 3000 | 3.1706 | 2.1498 | 0.331 | 1.7579 | 1.761 | 16.6826 |
| 3.2121 | 2.0 | 6000 | 3.1403 | 2.1555 | 0.3409 | 1.7659 | 1.769 | 16.208 |
| 3.1286 | 3.0 | 9000 | 3.1300 | 2.1577 | 0.3511 | 1.7703 | 1.7733 | 15.9009 |
| 3.0567 | 4.0 | 12000 | 3.1282 | 2.183 | 0.3584 | 1.7895 | 1.7909 | 15.7135 |
| 2.9953 | 5.0 | 15000 | 3.1293 | 2.1589 | 0.3525 | 1.776 | 1.7781 | 15.678 |
| 2.9483 | 6.0 | 18000 | 3.1308 | 2.1645 | 0.3556 | 1.7824 | 1.784 | 15.425 |
| 2.9009 | 7.0 | 21000 | 3.1358 | 2.1622 | 0.3622 | 1.7848 | 1.7877 | 15.3348 |
| 2.8752 | 8.0 | 24000 | 3.1387 | 2.1716 | 0.36 | 1.7936 | 1.7963 | 15.5296 |
| 2.835 | 9.0 | 27000 | 3.1454 | 2.1806 | 0.3658 | 1.7941 | 1.7966 | 15.4625 |
| 2.8352 | 10.0 | 30000 | 3.1471 | 2.175 | 0.3661 | 1.7927 | 1.7951 | 15.3252 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
RyanQin/k2c
|
RyanQin
| 2022-09-01T19:49:21Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"keytotext",
"k2t",
"Keywords to Contract",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-01T19:28:39Z |
---
language: "en"
thumbnail: "Keywords to Contract"
tags:
- keytotext
- k2t
- Keywords to Contract
license: mit
---
The idea is to build a model that takes keywords as input and generates contract clauses as output.
Potential use cases include:
- contract generation assistance
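A minimal generation sketch, assuming the model follows the usual `transformers` seq2seq API (the keyword string below is made up; the exact input format expected by the model is not documented in this card):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("RyanQin/k2c")
model = AutoModelForSeq2SeqLM.from_pretrained("RyanQin/k2c")

# Hypothetical keyword input; adjust it to the format the model was trained on.
inputs = tokenizer("confidentiality | termination | notice period", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```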
|
sgugger/distilbert-base-uncased-finetuned-cola
|
sgugger
| 2022-09-01T18:54:01Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5158855550567928
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7572
- Matthews Correlation: 0.5159
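A quick inference sketch (the example sentence is illustrative; the label names returned come from the model config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="sgugger/distilbert-base-uncased-finetuned-cola",
)
# CoLA is a linguistic-acceptability task, so the input is a single sentence.
print(classifier("The book was written by the students."))
```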
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5256 | 1.0 | 535 | 0.5197 | 0.4033 |
| 0.3534 | 2.0 | 1070 | 0.5301 | 0.4912 |
| 0.2402 | 3.0 | 1605 | 0.6680 | 0.5033 |
| 0.1762 | 4.0 | 2140 | 0.7572 | 0.5159 |
| 0.1389 | 5.0 | 2675 | 0.8584 | 0.5127 |
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.13.4.dev0
- Tokenizers 0.10.3
|
sgugger/glue-mrpc
|
sgugger
| 2022-09-01T18:48:23Z | 125 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: glue-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8553921568627451
- name: F1
type: f1
value: 0.897391304347826
- task:
type: natural-language-inference
name: Natural Language Inference
dataset:
name: glue
type: glue
config: mrpc
split: validation
metrics:
- name: Accuracy
type: accuracy
value: 0.8553921568627451
verified: true
- name: Precision
type: precision
value: 0.8716216216216216
verified: true
- name: Recall
type: recall
value: 0.9247311827956989
verified: true
- name: AUC
type: auc
value: 0.90464282737351
verified: true
- name: F1
type: f1
value: 0.897391304347826
verified: true
- name: loss
type: loss
value: 0.6564616560935974
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6566
- Accuracy: 0.8554
- F1: 0.8974
- Combined Score: 0.8764
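A sentence-pair inference sketch (the example sentences are illustrative; MRPC labels indicate whether the pair is a paraphrase):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sgugger/glue-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("sgugger/glue-mrpc")

# MRPC is a sentence-pair task, so both sentences are encoded together.
inputs = tokenizer(
    "The company reported higher profits this quarter.",
    "Profits at the company rose this quarter.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```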
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.13.0.dev0
- Pytorch 1.10.0+cu102
- Datasets 1.15.2.dev0
- Tokenizers 0.10.3
|
oananovac/distilbert-base-uncased-finetuned-squad
|
oananovac
| 2022-09-01T18:31:31Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-31T17:33:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9615
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 97 | 3.2690 |
| No log | 2.0 | 194 | 3.0873 |
| No log | 3.0 | 291 | 2.9615 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
shoop/ppo-LunarLander-v2
|
shoop
| 2022-09-01T18:01:46Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-01T18:01:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 216.31 +/- 79.92
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
# Minimal loading sketch: the checkpoint filename is an assumption based on the repo name,
# not something documented in this card.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="shoop/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
mhyatt000/YOLOv5
|
mhyatt000
| 2022-09-01T15:25:36Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"seals/CartPole-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"object-detection",
"dataset:coco",
"license:gpl-2.0",
"model-index",
"region:us"
] |
object-detection
| 2022-06-20T16:37:08Z |
---
license: gpl-2.0
datasets:
- coco
library_name: stable-baselines3
tags:
- seals/CartPole-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- object-detection
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: True
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/CartPole-v0
type: seals/CartPole-v0
---
# YOLOv5
Ultralytics YOLOv5 model in PyTorch.
Proof of concept for a typosquatting / niche-squatting security flaw on Hugging Face.
## Model Description
## How to use
```python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = YolosFeatureExtractor.from_pretrained('mhyatt000/yolov5')
model = YolosForObjectDetection.from_pretrained('mhyatt000/yolov5')
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
# model predicts bounding boxes and corresponding COCO classes
logits = outputs.logits
bboxes = outputs.pred_boxes
```
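A sketch of turning these raw outputs into detections, continuing the snippet above and assuming the DETR-style convention that the last class index is the "no object" class (the confidence threshold is illustrative):
```python
probs = logits.softmax(-1)[0, :, :-1]   # per-query class probabilities, "no object" dropped
keep = probs.max(-1).values > 0.9       # illustrative confidence threshold
print(probs[keep].argmax(-1))           # predicted COCO class indices
print(bboxes[0, keep])                  # matching boxes in normalized (cx, cy, w, h) format
```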
## Training Data
### Training
## Evaluation
Model was evaluated on [COCO2017](https://cocodataset.org/#home) dataset.
| Model | size (pixels) | mAPval | Speed | params | FLOPS |
|---------------|-------------------|-----------|-----------|-----------|-----------|
| YOLOv5s6 | 1280 | 43.3 | 4.3 | 12.7 | 17.4 |
| YOLOv5m6 | 1280 | 50.5 | 8.4 | 35.9 | 52.4 |
| YOLOv5l6 | 1280 | 53.4 | 12.3 | 77.2 | 117.7 |
| YOLOv5x6 | 1280 | 54.4 | 22.4 | 141.8 | 222.9 |
### Bibtex and citation info
```bibtex
@software{glenn_jocher_2022_6222936,
author = {Glenn Jocher and
Ayush Chaurasia and
Alex Stoken and
Jirka Borovec and
NanoCode012 and
Yonghye Kwon and
TaoXie and
Jiacong Fang and
imyhxy and
Kalen Michael and
Lorna and
Abhiram V and
Diego Montes and
Jebastin Nadar and
Laughing and
tkianai and
yxNONG and
Piotr Skalski and
Zhiqiang Wang and
Adam Hogan and
Cristi Fati and
Lorenzo Mammana and
AlexWang1900 and
Deep Patel and
Ding Yiwei and
Felix You and
Jan Hajek and
Laurentiu Diaconu and
Mai Thanh Minh},
title = {{ultralytics/yolov5: v6.1 - TensorRT, TensorFlow
Edge TPU and OpenVINO Export and Inference}},
month = feb,
year = 2022,
publisher = {Zenodo},
version = {v6.1},
doi = {10.5281/zenodo.6222936},
url = {https://doi.org/10.5281/zenodo.6222936}
}
```
|
butchland/Reinforce-Cartpole-v1
|
butchland
| 2022-09-01T14:40:46Z | 0 | 0 | null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-01T14:23:01Z |
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 95.80 +/- 22.48
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Psunrise/finetuning-sentiment-model-3000-samples
|
Psunrise
| 2022-09-01T12:37:38Z | 104 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-01T12:24:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87
- name: F1
type: f1
value: 0.8704318936877077
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3206
- Accuracy: 0.87
- F1: 0.8704
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
asdc/roberta-base-biomedical-clinical-es-finetuned-ner
|
asdc
| 2022-09-01T11:31:00Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-05-23T23:38:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: roberta-base-biomedical-clinical-es-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-biomedical-clinical-es-finetuned-ner
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-biomedical-clinical-es](https://huggingface.co/PlanTL-GOB-ES/roberta-base-biomedical-clinical-es) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1382
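A quick inference sketch (the example sentence is made up, and the entity label set depends on the training data, which is not documented here):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="asdc/roberta-base-biomedical-clinical-es-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("El paciente presenta fiebre y cefalea desde hace tres días."))
```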
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 14
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3315 | 1.0 | 12 | 0.3878 |
| 0.2419 | 2.0 | 24 | 0.2655 |
| 0.175 | 3.0 | 36 | 0.1888 |
| 0.1441 | 4.0 | 48 | 0.1808 |
| 0.1111 | 5.0 | 60 | 0.1544 |
| 0.1064 | 6.0 | 72 | 0.1506 |
| 0.0962 | 7.0 | 84 | 0.1396 |
| 0.1367 | 8.0 | 96 | 0.1403 |
| 0.0303 | 9.0 | 108 | 0.1415 |
| 0.0741 | 10.0 | 120 | 0.1355 |
| 0.0554 | 11.0 | 132 | 0.1355 |
| 0.0542 | 12.0 | 144 | 0.1360 |
| 0.0292 | 13.0 | 156 | 0.1369 |
| 0.0188 | 14.0 | 168 | 0.1382 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
PhucLe/LRO_v1.0.2a
|
PhucLe
| 2022-09-01T09:56:58Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain",
"en",
"dataset:PhucLe/autotrain-data-LRO_v1.0.2",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-09-01T09:55:28Z |
---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- PhucLe/autotrain-data-LRO_v1.0.2
co2_eq_emissions:
emissions: 1.2585708613878817
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 1345851607
- CO2 Emissions (in grams): 1.2586
## Validation Metrics
- Loss: 0.523
- Accuracy: 0.818
- Macro F1: 0.817
- Micro F1: 0.818
- Weighted F1: 0.817
- Macro Precision: 0.824
- Micro Precision: 0.818
- Weighted Precision: 0.824
- Macro Recall: 0.818
- Micro Recall: 0.818
- Weighted Recall: 0.818
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/PhucLe/autotrain-LRO_v1.0.2-1345851607
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("PhucLe/autotrain-LRO_v1.0.2-1345851607", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("PhucLe/autotrain-LRO_v1.0.2-1345851607", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
falkne/bert-europarl-en
|
falkne
| 2022-09-01T09:38:53Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-09-01T08:54:17Z |
# Bert Europarl (bert-europarl-en)
This model is a fine-tuned version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was introduced in [this paper](https://aclanthology.org/2022.acl-long.379/).
## Model description
The BERT base language model was fine-tuned on 1,076,468 sentences sampled from the English portion of the [Europarl corpus](https://aclanthology.org/2005.mtsummit-papers.11.pdf).
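Since the model is exposed for feature extraction, a minimal embedding sketch looks like this (mean pooling is one common choice, not something this card prescribes):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("falkne/bert-europarl-en")
model = AutoModel.from_pretrained("falkne/bert-europarl-en")

inputs = tokenizer("The committee adopted the resolution.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
sentence_embedding = outputs.last_hidden_state.mean(dim=1)  # mean-pooled token embeddings
print(sentence_embedding.shape)
```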
|
huggingtweets/buckeshot-onlinepete
|
huggingtweets
| 2022-09-01T09:35:19Z | 103 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-04-18T07:03:11Z |
---
language: en
thumbnail: http://www.huggingtweets.com/buckeshot-onlinepete/1662024914888/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1545140847259406337/bTk2lL6O_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/456958582731603969/QZKpv6eI_400x400.jpeg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BUCKSHOT & im pete online</div>
<div style="text-align: center; font-size: 14px;">@buckeshot-onlinepete</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BUCKSHOT & im pete online.
| Data | BUCKSHOT | im pete online |
| --- | --- | --- |
| Tweets downloaded | 311 | 3190 |
| Retweets | 77 | 94 |
| Short tweets | 46 | 1003 |
| Tweets kept | 188 | 2093 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1wyw1egj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @buckeshot-onlinepete's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1bnj1d4d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1bnj1d4d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/buckeshot-onlinepete')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
shed-e/MLM
|
shed-e
| 2022-09-01T08:14:51Z | 161 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-24T11:29:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4353
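A minimal fill-mask sketch (the example sentence is illustrative; DistilBERT uses the `[MASK]` token):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="shed-e/MLM")
print(fill_mask("This movie was absolutely [MASK]."))
```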
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6954 | 1.0 | 157 | 2.5243 |
| 2.563 | 2.0 | 314 | 2.4738 |
| 2.5258 | 3.0 | 471 | 2.4369 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Christoph911/GELECTRA-base-LegalQuAD
|
Christoph911
| 2022-09-01T08:02:32Z | 102 | 0 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"question-answering",
"qa",
"de",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-07-03T12:08:32Z |
---
language:
- "de"
tags:
- "qa"
widget:
- text: ""
context: ""
example_title: "Extractive QA"
---
# GELECTRA-base-LegalQuAD
## Overview
**Language model:** GELECTRA-base
**Language:** German
**Downstream-task:** Extractive QA
**Training data:** German-legal-SQuAD
**Eval data:** German-legal-SQuAD testset
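A minimal extractive-QA sketch (the question and context below are made-up examples, not taken from the training data):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Christoph911/GELECTRA-base-LegalQuAD")
result = qa(
    question="Wie lange beträgt die Kündigungsfrist?",
    context="Der Mietvertrag kann mit einer Frist von drei Monaten zum Monatsende gekündigt werden.",
)
print(result)
```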
## Hyperparameters
```
batch_size = 10
n_epochs = 2
max_seq_len = 256
learning_rate = 1e-5
```
## Eval results
Evaluated on the German-legal-SQuAD test set:
- exact: 33.984
- f1: 64.025
|
herokiller/bert-finetuned-ner
|
herokiller
| 2022-09-01T06:54:10Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-09-01T06:34:17Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0830
- Precision: 0.5380
- Recall: 0.7056
- F1: 0.6105
- Accuracy: 0.9883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 120 | 0.0813 | 0.5446 | 0.7042 | 0.6142 | 0.9876 |
| No log | 2.0 | 240 | 0.0830 | 0.5380 | 0.7056 | 0.6105 | 0.9883 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
dvalbuena1/a2c-AntBulletEnv-v0
|
dvalbuena1
| 2022-09-01T04:21:12Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"AntBulletEnv-v0",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-09-01T04:19:59Z |
---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- metrics:
- type: mean_reward
value: 836.44 +/- 139.46
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of an **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
# Minimal loading sketch: the checkpoint filename is an assumption based on the repo name,
# not something documented in this card.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(repo_id="dvalbuena1/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```
|
Langboat/bloom-2b5-zh
|
Langboat
| 2022-09-01T02:06:21Z | 21 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bloom",
"text-generation",
"zh",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-31T12:14:13Z |
---
license: bigscience-bloom-rail-1.0
language:
- zh
pipeline_tag: text-generation
widget:
- text: "中国的首都是"
---
This model is based on [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b).
We pruned its vocabulary from 250,880 to 46,145 tokens using a Chinese corpus to reduce GPU memory usage, so the total parameter count is now 2.5B.
# How to use
```python
from transformers import BloomTokenizerFast, BloomForCausalLM
tokenizer = BloomTokenizerFast.from_pretrained('Langboat/bloom-2b5-zh')
model = BloomForCausalLM.from_pretrained('Langboat/bloom-2b5-zh')
print(tokenizer.batch_decode(model.generate(tokenizer.encode('中国的首都是', return_tensors='pt'))))
```
|
SharpAI/mal-net-traffic-t5-l12
|
SharpAI
| 2022-09-01T01:17:03Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-09-01T01:16:05Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: mal-net-traffic-t5-l12
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# mal-net-traffic-t5-l12
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/rxmaybike
|
huggingtweets
| 2022-09-01T00:45:13Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/rxmaybike/1661993108145/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1562630988836081664/KoPBffJt_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">jamar " Lord Of Cinder ” majima 🇵🇸</div>
<div style="text-align: center; font-size: 14px;">@rxmaybike</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from jamar " Lord Of Cinder ” majima 🇵🇸.
| Data | jamar " Lord Of Cinder ” majima 🇵🇸 |
| --- | --- |
| Tweets downloaded | 3003 |
| Retweets | 1795 |
| Short tweets | 326 |
| Tweets kept | 882 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/kc8cff5s/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rxmaybike's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/eias9psv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/eias9psv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rxmaybike')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
nawage/dragons-test
|
nawage
| 2022-08-31T22:35:54Z | 4 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-31T21:44:48Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# dragons-test
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sampling sketch; DDPMPipeline is assumed from the repo tags.
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("nawage/dragons-test")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/nawage/dragons-test/tensorboard?#scalars)
|
NinaErlacher/ClimateBERTqa
|
NinaErlacher
| 2022-08-31T21:55:19Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-31T17:57:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: ClimateBertQA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ClimateBertQA
This model is a fine-tuned version of [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3251
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1604 | 1.0 | 4081 | 1.1894 |
| 0.8577 | 2.0 | 8162 | 1.1763 |
| 0.6395 | 3.0 | 12243 | 1.1118 |
| 0.5015 | 4.0 | 16324 | 1.3251 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
freddyaboulton/tabular-playground
|
freddyaboulton
| 2022-08-31T21:43:50Z | 0 | 1 |
sklearn
|
[
"sklearn",
"skops",
"tabular-classification",
"region:us"
] |
tabular-classification
| 2022-08-31T20:40:17Z |
---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
widget:
structuredData:
attribute_0:
attribute_1:
attribute_2:
attribute_3:
loading:
measurement_0:
measurement_1:
measurement_10:
measurement_11:
measurement_12:
measurement_13:
measurement_14:
measurement_15:
measurement_16:
measurement_17:
measurement_2:
measurement_3:
measurement_4:
measurement_5:
measurement_6:
measurement_7:
measurement_8:
measurement_9:
product_code:
---
# Model description
This is a copy of [tabular-playground](https://huggingface.co/scikit-learn/tabular-playground) for testing purposes.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with below hyperparameters.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| memory | |
| steps | [('transformation', ColumnTransformer(transformers=[('loading_missing_value_imputer',
SimpleImputer(), ['loading']),
('numerical_missing_value_imputer',
SimpleImputer(),
['loading', 'measurement_3', 'measurement_4',
'measurement_5', 'measurement_6',
'measurement_7', 'measurement_8',
'measurement_9', 'measurement_10',
'measurement_11', 'measurement_12',
'measurement_13', 'measurement_14',
'measurement_15', 'measurement_16',
'measurement_17']),
('attribute_0_encoder', OneHotEncoder(),
['attribute_0']),
('attribute_1_encoder', OneHotEncoder(),
['attribute_1']),
('product_code_encoder', OneHotEncoder(),
['product_code'])])), ('model', DecisionTreeClassifier(max_depth=4))] |
| verbose | False |
| transformation | ColumnTransformer(transformers=[('loading_missing_value_imputer',
SimpleImputer(), ['loading']),
('numerical_missing_value_imputer',
SimpleImputer(),
['loading', 'measurement_3', 'measurement_4',
'measurement_5', 'measurement_6',
'measurement_7', 'measurement_8',
'measurement_9', 'measurement_10',
'measurement_11', 'measurement_12',
'measurement_13', 'measurement_14',
'measurement_15', 'measurement_16',
'measurement_17']),
('attribute_0_encoder', OneHotEncoder(),
['attribute_0']),
('attribute_1_encoder', OneHotEncoder(),
['attribute_1']),
('product_code_encoder', OneHotEncoder(),
['product_code'])]) |
| model | DecisionTreeClassifier(max_depth=4) |
| transformation__n_jobs | |
| transformation__remainder | drop |
| transformation__sparse_threshold | 0.3 |
| transformation__transformer_weights | |
| transformation__transformers | [('loading_missing_value_imputer', SimpleImputer(), ['loading']), ('numerical_missing_value_imputer', SimpleImputer(), ['loading', 'measurement_3', 'measurement_4', 'measurement_5', 'measurement_6', 'measurement_7', 'measurement_8', 'measurement_9', 'measurement_10', 'measurement_11', 'measurement_12', 'measurement_13', 'measurement_14', 'measurement_15', 'measurement_16', 'measurement_17']), ('attribute_0_encoder', OneHotEncoder(), ['attribute_0']), ('attribute_1_encoder', OneHotEncoder(), ['attribute_1']), ('product_code_encoder', OneHotEncoder(), ['product_code'])] |
| transformation__verbose | False |
| transformation__verbose_feature_names_out | True |
| transformation__loading_missing_value_imputer | SimpleImputer() |
| transformation__numerical_missing_value_imputer | SimpleImputer() |
| transformation__attribute_0_encoder | OneHotEncoder() |
| transformation__attribute_1_encoder | OneHotEncoder() |
| transformation__product_code_encoder | OneHotEncoder() |
| transformation__loading_missing_value_imputer__add_indicator | False |
| transformation__loading_missing_value_imputer__copy | True |
| transformation__loading_missing_value_imputer__fill_value | |
| transformation__loading_missing_value_imputer__missing_values | nan |
| transformation__loading_missing_value_imputer__strategy | mean |
| transformation__loading_missing_value_imputer__verbose | 0 |
| transformation__numerical_missing_value_imputer__add_indicator | False |
| transformation__numerical_missing_value_imputer__copy | True |
| transformation__numerical_missing_value_imputer__fill_value | |
| transformation__numerical_missing_value_imputer__missing_values | nan |
| transformation__numerical_missing_value_imputer__strategy | mean |
| transformation__numerical_missing_value_imputer__verbose | 0 |
| transformation__attribute_0_encoder__categories | auto |
| transformation__attribute_0_encoder__drop | |
| transformation__attribute_0_encoder__dtype | <class 'numpy.float64'> |
| transformation__attribute_0_encoder__handle_unknown | error |
| transformation__attribute_0_encoder__sparse | True |
| transformation__attribute_1_encoder__categories | auto |
| transformation__attribute_1_encoder__drop | |
| transformation__attribute_1_encoder__dtype | <class 'numpy.float64'> |
| transformation__attribute_1_encoder__handle_unknown | error |
| transformation__attribute_1_encoder__sparse | True |
| transformation__product_code_encoder__categories | auto |
| transformation__product_code_encoder__drop | |
| transformation__product_code_encoder__dtype | <class 'numpy.float64'> |
| transformation__product_code_encoder__handle_unknown | error |
| transformation__product_code_encoder__sparse | True |
| model__ccp_alpha | 0.0 |
| model__class_weight | |
| model__criterion | gini |
| model__max_depth | 4 |
| model__max_features | |
| model__max_leaf_nodes | |
| model__min_impurity_decrease | 0.0 |
| model__min_samples_leaf | 1 |
| model__min_samples_split | 2 |
| model__min_weight_fraction_leaf | 0.0 |
| model__random_state | |
| model__splitter | best |
</details>
### Model Plot
The model plot is below.
<style>#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f {color: black;background-color: white;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f pre{padding: 0;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-toggleable {background-color: white;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-estimator:hover {background-color: #d4ebff;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-item {z-index: 1;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-parallel::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-parallel-item {display: flex;flex-direction: column;position: relative;background-color: white;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-parallel-item:last-child::after {align-self: flex-start;width: 
50%;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-parallel-item:only-child::after {width: 0;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;position: relative;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-label label {font-family: monospace;font-weight: bold;background-color: white;display: inline-block;line-height: 1.2em;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-label-container {position: relative;z-index: 2;text-align: center;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-b8914d13-cacb-404b-89fd-48f0ed8d671f div.sk-text-repr-fallback {display: none;}</style><div id="sk-b8914d13-cacb-404b-89fd-48f0ed8d671f" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>Pipeline(steps=[('transformation',ColumnTransformer(transformers=[('loading_missing_value_imputer',SimpleImputer(),['loading']),('numerical_missing_value_imputer',SimpleImputer(),['loading', 'measurement_3','measurement_4','measurement_5','measurement_6','measurement_7','measurement_8','measurement_9','measurement_10','measurement_11','measurement_12','measurement_13','measurement_14','measurement_15','measurement_16','measurement_17']),('attribute_0_encoder',OneHotEncoder(),['attribute_0']),('attribute_1_encoder',OneHotEncoder(),['attribute_1']),('product_code_encoder',OneHotEncoder(),['product_code'])])),('model', DecisionTreeClassifier(max_depth=4))])</pre><b>Please rerun this cell to show the HTML repr or trust the notebook.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="fe201304-214c-493b-8896-11cea0894f6e" type="checkbox" ><label for="fe201304-214c-493b-8896-11cea0894f6e" class="sk-toggleable__label sk-toggleable__label-arrow">Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[('transformation',ColumnTransformer(transformers=[('loading_missing_value_imputer',SimpleImputer(),['loading']),('numerical_missing_value_imputer',SimpleImputer(),['loading', 'measurement_3','measurement_4','measurement_5','measurement_6','measurement_7','measurement_8','measurement_9','measurement_10','measurement_11','measurement_12','measurement_13','measurement_14','measurement_15','measurement_16','measurement_17']),('attribute_0_encoder',OneHotEncoder(),['attribute_0']),('attribute_1_encoder',OneHotEncoder(),['attribute_1']),('product_code_encoder',OneHotEncoder(),['product_code'])])),('model', DecisionTreeClassifier(max_depth=4))])</pre></div></div></div><div class="sk-serial"><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="19136b49-925c-40a2-b4d1-37039bb014a9" type="checkbox" ><label for="19136b49-925c-40a2-b4d1-37039bb014a9" class="sk-toggleable__label sk-toggleable__label-arrow">transformation: ColumnTransformer</label><div 
class="sk-toggleable__content"><pre>ColumnTransformer(transformers=[('loading_missing_value_imputer',SimpleImputer(), ['loading']),('numerical_missing_value_imputer',SimpleImputer(),['loading', 'measurement_3', 'measurement_4','measurement_5', 'measurement_6','measurement_7', 'measurement_8','measurement_9', 'measurement_10','measurement_11', 'measurement_12','measurement_13', 'measurement_14','measurement_15', 'measurement_16','measurement_17']),('attribute_0_encoder', OneHotEncoder(),['attribute_0']),('attribute_1_encoder', OneHotEncoder(),['attribute_1']),('product_code_encoder', OneHotEncoder(),['product_code'])])</pre></div></div></div><div class="sk-parallel"><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="c8ec7f92-b10a-41e7-b673-1239572ea00e" type="checkbox" ><label for="c8ec7f92-b10a-41e7-b673-1239572ea00e" class="sk-toggleable__label sk-toggleable__label-arrow">loading_missing_value_imputer</label><div class="sk-toggleable__content"><pre>['loading']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="70fec50e-9c49-4818-a58f-ef8de932035c" type="checkbox" ><label for="70fec50e-9c49-4818-a58f-ef8de932035c" class="sk-toggleable__label sk-toggleable__label-arrow">SimpleImputer</label><div class="sk-toggleable__content"><pre>SimpleImputer()</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="ac8a6641-4222-4b12-b691-928201d9af73" type="checkbox" ><label for="ac8a6641-4222-4b12-b691-928201d9af73" class="sk-toggleable__label sk-toggleable__label-arrow">numerical_missing_value_imputer</label><div class="sk-toggleable__content"><pre>['loading', 'measurement_3', 'measurement_4', 'measurement_5', 'measurement_6', 'measurement_7', 'measurement_8', 'measurement_9', 'measurement_10', 'measurement_11', 'measurement_12', 'measurement_13', 'measurement_14', 'measurement_15', 'measurement_16', 'measurement_17']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="a14b63c1-fecb-445e-9a74-8229a531f0ea" type="checkbox" ><label for="a14b63c1-fecb-445e-9a74-8229a531f0ea" class="sk-toggleable__label sk-toggleable__label-arrow">SimpleImputer</label><div class="sk-toggleable__content"><pre>SimpleImputer()</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="80227cfc-e001-4c0d-b495-e4e0631a49d5" type="checkbox" ><label for="80227cfc-e001-4c0d-b495-e4e0631a49d5" class="sk-toggleable__label sk-toggleable__label-arrow">attribute_0_encoder</label><div class="sk-toggleable__content"><pre>['attribute_0']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="c52efc0c-08b7-467a-a0a1-f07cb6cecebc" type="checkbox" ><label for="c52efc0c-08b7-467a-a0a1-f07cb6cecebc" class="sk-toggleable__label sk-toggleable__label-arrow">OneHotEncoder</label><div 
class="sk-toggleable__content"><pre>OneHotEncoder()</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="6da0ab07-3d41-459c-a8a6-a56960b775f2" type="checkbox" ><label for="6da0ab07-3d41-459c-a8a6-a56960b775f2" class="sk-toggleable__label sk-toggleable__label-arrow">attribute_1_encoder</label><div class="sk-toggleable__content"><pre>['attribute_1']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="b515fbe5-466a-4eb7-84d9-35227a1e862a" type="checkbox" ><label for="b515fbe5-466a-4eb7-84d9-35227a1e862a" class="sk-toggleable__label sk-toggleable__label-arrow">OneHotEncoder</label><div class="sk-toggleable__content"><pre>OneHotEncoder()</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="72c4b8e6-3110-486f-8b33-a7db1f5e822f" type="checkbox" ><label for="72c4b8e6-3110-486f-8b33-a7db1f5e822f" class="sk-toggleable__label sk-toggleable__label-arrow">product_code_encoder</label><div class="sk-toggleable__content"><pre>['product_code']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="f3bfb5a1-317d-4ff4-8dd0-804ef1d7fd61" type="checkbox" ><label for="f3bfb5a1-317d-4ff4-8dd0-804ef1d7fd61" class="sk-toggleable__label sk-toggleable__label-arrow">OneHotEncoder</label><div class="sk-toggleable__content"><pre>OneHotEncoder()</pre></div></div></div></div></div></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="dbcb65f9-3068-4263-9c1c-2e6413804681" type="checkbox" ><label for="dbcb65f9-3068-4263-9c1c-2e6413804681" class="sk-toggleable__label sk-toggleable__label-arrow">DecisionTreeClassifier</label><div class="sk-toggleable__content"><pre>DecisionTreeClassifier(max_depth=4)</pre></div></div></div></div></div></div></div>
## Evaluation Results
You can find the details of the evaluation process and the evaluation results below.
| Metric | Value |
|----------|---------|
| accuracy | 0.7888 |
| f1 score | 0.7888 |
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import pickle
with open("decision-tree-playground-kaggle/model.pkl", "rb") as file:
clf = pickle.load(file)
```
</details>
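Once loaded, the pipeline already contains the imputation and encoding steps described above, so raw rows can be passed straight to `predict`. A minimal sketch (the CSV path and its contents are hypothetical):

```python
import pandas as pd

# hypothetical test file; it must contain the raw columns the ColumnTransformer
# expects: 'loading', 'measurement_3'..'measurement_17', 'attribute_0',
# 'attribute_1' and 'product_code'
X_test = pd.read_csv("test.csv")
predictions = clf.predict(X_test)
print(predictions[:10])
```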
# Model Card Authors
This model card was written by the following authors:
huggingface
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
Tree Plot

Confusion Matrix

|
theojolliffe/bart-paraphrase-v4-e1-feedback-feedback-e1
|
theojolliffe
| 2022-08-31T20:28:28Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-31T19:49:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-paraphrase-v4-e1-feedback-feedback-e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-paraphrase-v4-e1-feedback-feedback-e1
This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 34 | 2.9415 | 60.8992 | 38.9444 | 51.1386 | 52.0048 | 19.75 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.0
- Datasets 1.18.0
- Tokenizers 0.10.3
|
muhtasham/tajberto-ner
|
muhtasham
| 2022-08-31T20:15:40Z | 110 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-27T15:15:25Z |
---
widget:
- text: " Исмоили Сомонӣ - намояндаи бузурги форсу-тоҷик"
- text: "Ин фурудгоҳ дар кишвари Индонезия қарор дорад."
- text: " Бобоҷон Ғафуров – солҳои 1946-1956"
- text: " Лоиқ Шералӣ дар васфи Модар шеър"
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: tajberto-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
config: tg
split: train+test
args: tg
metrics:
- name: Precision
type: precision
value: 0.576
- name: Recall
type: recall
value: 0.6923076923076923
- name: F1
type: f1
value: 0.62882096069869
- name: Accuracy
type: accuracy
value: 0.8934049079754601
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tajberto-ner
This model is a fine-tuned version of [muhtasham/TajBERTo](https://huggingface.co/muhtasham/TajBERTo) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6129
- Precision: 0.576
- Recall: 0.6923
- F1: 0.6288
- Accuracy: 0.8934
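A minimal inference sketch, assuming the standard `transformers` token-classification pipeline (the example sentence is taken from the widget above):

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="muhtasham/tajberto-ner",
               aggregation_strategy="simple")
print(ner("Исмоили Сомонӣ - намояндаи бузурги форсу-тоҷик"))
```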
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 2.0 | 50 | 0.6171 | 0.1667 | 0.2885 | 0.2113 | 0.7646 |
| No log | 4.0 | 100 | 0.4733 | 0.2824 | 0.4615 | 0.3504 | 0.8344 |
| No log | 6.0 | 150 | 0.3857 | 0.3372 | 0.5577 | 0.4203 | 0.8589 |
| No log | 8.0 | 200 | 0.4523 | 0.4519 | 0.5865 | 0.5105 | 0.8765 |
| No log | 10.0 | 250 | 0.3870 | 0.44 | 0.6346 | 0.5197 | 0.8834 |
| No log | 12.0 | 300 | 0.4512 | 0.5267 | 0.6635 | 0.5872 | 0.8865 |
| No log | 14.0 | 350 | 0.4934 | 0.4789 | 0.6538 | 0.5528 | 0.8819 |
| No log | 16.0 | 400 | 0.4924 | 0.4783 | 0.6346 | 0.5455 | 0.8842 |
| No log | 18.0 | 450 | 0.5355 | 0.4595 | 0.6538 | 0.5397 | 0.8788 |
| 0.1682 | 20.0 | 500 | 0.5440 | 0.5547 | 0.6827 | 0.6121 | 0.8942 |
| 0.1682 | 22.0 | 550 | 0.5299 | 0.5794 | 0.7019 | 0.6348 | 0.9003 |
| 0.1682 | 24.0 | 600 | 0.5735 | 0.5691 | 0.6731 | 0.6167 | 0.8926 |
| 0.1682 | 26.0 | 650 | 0.6027 | 0.5833 | 0.6731 | 0.6250 | 0.8796 |
| 0.1682 | 28.0 | 700 | 0.6119 | 0.568 | 0.6827 | 0.6201 | 0.8934 |
| 0.1682 | 30.0 | 750 | 0.6098 | 0.5635 | 0.6827 | 0.6174 | 0.8911 |
| 0.1682 | 32.0 | 800 | 0.6237 | 0.5469 | 0.6731 | 0.6034 | 0.8834 |
| 0.1682 | 34.0 | 850 | 0.6215 | 0.5530 | 0.7019 | 0.6186 | 0.8842 |
| 0.1682 | 36.0 | 900 | 0.6179 | 0.5802 | 0.7308 | 0.6468 | 0.8888 |
| 0.1682 | 38.0 | 950 | 0.6201 | 0.5373 | 0.6923 | 0.6050 | 0.8873 |
| 0.0007 | 40.0 | 1000 | 0.6114 | 0.5952 | 0.7212 | 0.6522 | 0.8911 |
| 0.0007 | 42.0 | 1050 | 0.6073 | 0.5625 | 0.6923 | 0.6207 | 0.8896 |
| 0.0007 | 44.0 | 1100 | 0.6327 | 0.5620 | 0.6538 | 0.6044 | 0.8896 |
| 0.0007 | 46.0 | 1150 | 0.6129 | 0.576 | 0.6923 | 0.6288 | 0.8934 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ykilcher/totally-harmless-model
|
ykilcher
| 2022-08-31T20:10:22Z | 101 | 15 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"feature-extraction",
"en",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-08-28T10:18:50Z |
---
language:
- en
---
Just a regular old model...
## Usage
```python
from transformers import AutoModel
model = AutoModel.from_pretrained("ykilcher/totally-harmless-model")
```
|
castorini/monot5-3b-msmarco-10k
|
castorini
| 2022-08-31T19:20:16Z | 497 | 12 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.02873",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-28T15:08:54Z |
This model is a T5-3B reranker fine-tuned on the MS MARCO passage dataset for 10k steps (or 1 epoch).
For more details on how to use it, check [pygaggle.ai](http://pygaggle.ai/)
Paper describing the model: [Document Ranking with a Pretrained Sequence-to-Sequence Model](https://www.aclweb.org/anthology/2020.findings-emnlp.63/)
This model is also the state of the art on the BEIR Benchmark.
- Paper: [No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval](https://arxiv.org/abs/2206.02873)
- [Repository](https://github.com/guilhermemr04/scaling-zero-shot-retrieval)
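As a rough relevance-scoring sketch without pygaggle — the "Query: … Document: … Relevant:" template and the true/false scoring follow the monoT5 paper and are an assumption here, not an official snippet:

```python
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

# tokenizer files are assumed to ship with the checkpoint; otherwise fall back to "t5-3b"
tokenizer = T5Tokenizer.from_pretrained("castorini/monot5-3b-msmarco-10k")
model = T5ForConditionalGeneration.from_pretrained("castorini/monot5-3b-msmarco-10k")

inputs = tokenizer(
    "Query: how do rerankers work Document: A reranker reorders candidate "
    "passages by estimated relevance to the query. Relevant:",
    return_tensors="pt",
)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=1,
                         output_scores=True, return_dict_in_generate=True)
logits = out.scores[0][0]                     # vocabulary logits of the first generated token
true_id = tokenizer.encode("true")[0]
false_id = tokenizer.encode("false")[0]
score = torch.softmax(logits[[true_id, false_id]], dim=0)[0].item()
print(f"relevance score: {score:.3f}")
```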
|
rwang5688/distilbert-base-uncased-finetuned-cola
|
rwang5688
| 2022-08-31T18:50:58Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-27T02:31:31Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.542244787638552
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7166
- Matthews Correlation: 0.5422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5239 | 1.0 | 535 | 0.5124 | 0.4240 |
| 0.3472 | 2.0 | 1070 | 0.4966 | 0.5180 |
| 0.2359 | 3.0 | 1605 | 0.6474 | 0.5174 |
| 0.1723 | 4.0 | 2140 | 0.7166 | 0.5422 |
| 0.1285 | 5.0 | 2675 | 0.8366 | 0.5367 |
### Framework versions
- Transformers 4.12.0
- Pytorch 1.8.1+cpu
- Datasets 2.4.0
- Tokenizers 0.10.3
|
huggingtweets/chrisjbakke
|
huggingtweets
| 2022-08-31T18:23:13Z | 82 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-31T18:21:59Z |
---
language: en
thumbnail: http://www.huggingtweets.com/chrisjbakke/1661970188984/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1491961095136763905/Sd8XRL5__400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Chris Bakke</div>
<div style="text-align: center; font-size: 14px;">@chrisjbakke</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Chris Bakke.
| Data | Chris Bakke |
| --- | --- |
| Tweets downloaded | 2930 |
| Retweets | 32 |
| Short tweets | 142 |
| Tweets kept | 2756 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1h6arh7n/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chrisjbakke's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1me7w8ck) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1me7w8ck/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chrisjbakke')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ericntay/stbl_clinical_bert_ft
|
ericntay
| 2022-08-31T15:31:41Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-31T15:14:00Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: stbl_clinical_bert_ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stbl_clinical_bert_ft
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1789
- F1: 0.8523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2786 | 1.0 | 95 | 0.1083 | 0.8090 |
| 0.0654 | 2.0 | 190 | 0.1005 | 0.8475 |
| 0.0299 | 3.0 | 285 | 0.1207 | 0.8481 |
| 0.0146 | 4.0 | 380 | 0.1432 | 0.8454 |
| 0.0088 | 5.0 | 475 | 0.1362 | 0.8475 |
| 0.0056 | 6.0 | 570 | 0.1527 | 0.8518 |
| 0.0037 | 7.0 | 665 | 0.1617 | 0.8519 |
| 0.0022 | 8.0 | 760 | 0.1726 | 0.8495 |
| 0.0018 | 9.0 | 855 | 0.1743 | 0.8527 |
| 0.0014 | 10.0 | 950 | 0.1750 | 0.8463 |
| 0.0014 | 11.0 | 1045 | 0.1775 | 0.8522 |
| 0.001 | 12.0 | 1140 | 0.1789 | 0.8523 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
farleyknight-org-username/vit-base-mnist
|
farleyknight-org-username
| 2022-08-31T14:55:56Z | 1,370 | 8 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"vision",
"generated_from_trainer",
"dataset:mnist",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-08-21T16:48:27Z |
---
license: apache-2.0
tags:
- image-classification
- vision
- generated_from_trainer
datasets:
- mnist
metrics:
- accuracy
model-index:
- name: vit-base-mnist
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: mnist
type: mnist
config: mnist
split: train
args: mnist
metrics:
- name: Accuracy
type: accuracy
value: 0.9948888888888889
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-mnist
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the mnist dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0236
- Accuracy: 0.9949
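A minimal inference sketch, assuming the standard `transformers` image-classification pipeline (`digit.png` is a hypothetical local image of a handwritten digit):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="farleyknight-org-username/vit-base-mnist")
print(classifier("digit.png"))  # top predicted digit classes with scores
```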
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3717 | 1.0 | 6375 | 0.0522 | 0.9893 |
| 0.3453 | 2.0 | 12750 | 0.0370 | 0.9906 |
| 0.3736 | 3.0 | 19125 | 0.0308 | 0.9916 |
| 0.3224 | 4.0 | 25500 | 0.0269 | 0.9939 |
| 0.2846 | 5.0 | 31875 | 0.0236 | 0.9949 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0a0+17540c5
- Datasets 2.4.0
- Tokenizers 0.12.1
|
chinoll/ACGTTS
|
chinoll
| 2022-08-31T13:59:25Z | 0 | 4 | null |
[
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | 2022-08-06T10:02:11Z |
---
license: cc-by-nc-sa-4.0
---
# ACGTTS Model Zoo
### Voices supported by the old model
```
0 - 绫地宁宁
1 - 因幡巡
2 - 户隐憧子
```
### Voices supported by the new model
```
0 - 绫地宁宁
1 - 户隐憧子
2 - 因幡巡
3 - 明月栞那
4 - 四季夏目
5 - 墨染希
6 - 火打谷爱衣
7 - 汐山凉音
8 - 中文注入声线
9 - 二条院羽月
10 - 在原七海
11 - 式部茉优
12 - 三司绫濑
13 - 壬生千咲
14 - 朝武芳乃
15 - 常陆茉子
16 - 丛雨
17 - 蕾娜·列支敦瑙尔
18 - 鞍马小春
```
The model currently supports Chinese (with a strong dialect accent) and Japanese.
# Code Repository
[ACGTTS](https://github.com/chinoll/ACGTTS)
|
autoevaluate/glue-mnli
|
autoevaluate
| 2022-08-31T13:27:39Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-29T09:59:14Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: glue-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mnli
split: train
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.6772287315333673
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue-mnli
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8003
- Accuracy: 0.6772
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9378 | 1.0 | 625 | 0.7896 | 0.6585 |
| 0.7086 | 2.0 | 1250 | 0.7850 | 0.6712 |
| 0.5758 | 3.0 | 1875 | 0.8003 | 0.6772 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
SharpAI/net-traffic-t5-l12
|
SharpAI
| 2022-08-31T13:23:07Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T23:03:06Z |
---
tags:
- generated_from_keras_callback
model-index:
- name: net-traffic-t5-l12
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# net-traffic-t5-l12
This model is a fine-tuned version of an unspecified base model on an unknown dataset; no evaluation results were recorded.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingtweets/piechocinski
|
huggingtweets
| 2022-08-31T12:41:46Z | 108 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/piechocinski/1661949692555/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/873475061700857856/UBow_zRi_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Janusz Piechociński</div>
<div style="text-align: center; font-size: 14px;">@piechocinski</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Janusz Piechociński.
| Data | Janusz Piechociński |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 252 |
| Short tweets | 92 |
| Tweets kept | 2906 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/bvhq8c57/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @piechocinski's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/suwedfjd) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/suwedfjd/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/piechocinski')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/donaldtusk
|
huggingtweets
| 2022-08-31T12:29:41Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-07-05T20:21:21Z |
---
language: en
thumbnail: http://www.huggingtweets.com/donaldtusk/1661948958135/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/990605878993793024/7uuCR4hP_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Donald Tusk</div>
<div style="text-align: center; font-size: 14px;">@donaldtusk</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Donald Tusk.
| Data | Donald Tusk |
| --- | --- |
| Tweets downloaded | 928 |
| Retweets | 194 |
| Short tweets | 35 |
| Tweets kept | 699 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/pyk96dcl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @donaldtusk's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ivs2zls) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ivs2zls/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/donaldtusk')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
shed-e/Translation
|
shed-e
| 2022-08-31T12:16:58Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-08-31T10:32:40Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 52.83113187001415
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8560
- Bleu: 52.8311
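A minimal usage sketch, assuming the standard `transformers` translation pipeline (the input is just a sample KDE-style UI string):

```python
from transformers import pipeline

translator = pipeline("translation", model="shed-e/Translation")
print(translator("Default to expanded threads"))
```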
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
mrm8488/data2vec-text-base-finetuned-sst2
|
mrm8488
| 2022-08-31T10:25:53Z | 15 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"data2vec-text",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-03T18:18:26Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: data2vec-text-base-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9231651376146789
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-sst2
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3600
- Accuracy: 0.9232
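A minimal inference sketch, assuming the standard `transformers` text-classification pipeline (labels may be reported as LABEL_0/LABEL_1 depending on the saved config):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="mrm8488/data2vec-text-base-finetuned-sst2")
print(classifier("I absolutely loved this movie!"))
```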
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.1519343408010398e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2865 | 1.0 | 4210 | 0.2662 | 0.9128 |
| 0.2256 | 2.0 | 8420 | 0.3698 | 0.9002 |
| 0.1676 | 3.0 | 12630 | 0.3107 | 0.9186 |
| 0.1481 | 4.0 | 16840 | 0.3425 | 0.9186 |
| 0.1429 | 5.0 | 21050 | 0.3600 | 0.9232 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mrm8488/data2vec-text-base-finetuned-mrpc
|
mrm8488
| 2022-08-31T10:25:37Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"data2vec-text",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-03T16:59:55Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: data2vec-text-base-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8627450980392157
- name: F1
type: f1
value: 0.8992805755395683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-mrpc
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4087
- Accuracy: 0.8627
- F1: 0.8993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.486061628311107e-06
- train_batch_size: 4
- eval_batch_size: 16
- seed: 19
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6197 | 1.0 | 917 | 0.4720 | 0.8039 | 0.8606 |
| 0.4763 | 2.0 | 1834 | 0.4087 | 0.8627 | 0.8993 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mrm8488/data2vec-text-base-finetuned-mnli
|
mrm8488
| 2022-08-31T10:25:23Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"data2vec-text",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-04-29T16:27:18Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: data2vec-text-base-finetuned-mnli
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.7862455425369332
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-mnli
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5521
- Accuracy: 0.7862
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 1.099 | 1.0 | 24544 | 1.0987 | 0.3182 |
| 1.0993 | 2.0 | 49088 | 1.0979 | 0.3545 |
| 0.7481 | 3.0 | 73632 | 0.7197 | 0.7046 |
| 0.5671 | 4.0 | 98176 | 0.5862 | 0.7728 |
| 0.5505 | 5.0 | 122720 | 0.5521 | 0.7862 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
mrm8488/data2vec-text-base-finetuned-rte
|
mrm8488
| 2022-08-31T10:25:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"data2vec-text",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-05-04T08:55:56Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: data2vec-text-base-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.6209386281588448
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# data2vec-text-base-finetuned-rte
This model is a fine-tuned version of [facebook/data2vec-text-base](https://huggingface.co/facebook/data2vec-text-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6670
- Accuracy: 0.6209
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.7091 | 0.4729 |
| No log | 2.0 | 312 | 0.6893 | 0.5271 |
| No log | 3.0 | 468 | 0.6670 | 0.6209 |
| 0.6919 | 4.0 | 624 | 0.6740 | 0.5921 |
| 0.6919 | 5.0 | 780 | 0.6644 | 0.6101 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.1.0
- Tokenizers 0.12.1
|
livingmagic/ddpm-butterflies-128
|
livingmagic
| 2022-08-31T10:05:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-31T07:36:18Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# a minimal sketch, assuming the standard diffusers DDPMPipeline API
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("livingmagic/ddpm-butterflies-128")
image = pipeline().images[0]  # one generated 128x128 butterfly sample
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
The model was trained on the `huggan/smithsonian_butterflies_subset` dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/livingmagic/ddpm-butterflies-128/tensorboard?#scalars)
|
mrp/bert-finetuned-squad
|
mrp
| 2022-08-31T09:52:58Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- name: Loss
type: loss
value: 1.073493242263794
verified: true
- name: Exact Match
type: exact_match
value: 80.0853
verified: true
- name: F1
type: f1
value: 87.606
verified: true
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
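A minimal usage sketch, assuming the standard `transformers` question-answering pipeline (the question/context pair is an arbitrary example):

```python
from transformers import pipeline

qa = pipeline("question-answering", model="mrp/bert-finetuned-squad")
result = qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France.",
)
print(result["answer"], result["score"])
```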
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
VioletaMG/dtu-scan114-128_50epochs
|
VioletaMG
| 2022-08-31T09:29:52Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:imagefolder",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-31T09:08:22Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# dtu-scan114-128_50epochs
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# a minimal sketch, assuming the standard diffusers DDPMPipeline API
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("VioletaMG/dtu-scan114-128_50epochs")
image = pipeline().images[0]  # one generated 128x128 sample
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
The model was trained on a local `imagefolder` dataset.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/VioletaMG/dtu-scan114-128_50epochs/tensorboard?#scalars)
|
merkalo-ziri/qa_model
|
merkalo-ziri
| 2022-08-31T09:20:35Z | 26 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"question answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-23T20:14:41Z |
---
tags:
- "question answering"
widget:
- context: "Привет, меня зовут Артем. Завтра заеду к вам"
- question: "Что я планирую делать?"
---
|
osanseviero/flair_test4
|
osanseviero
| 2022-08-31T09:04:18Z | 1 | 0 |
flair
|
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"region:us"
] |
token-classification
| 2022-08-31T09:02:30Z |
---
tags:
- flair
- token-classification
- sequence-tagger-model
---
### Demo: How to use in Flair
Requires:
- **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("osanseviero/flair_test4")
# make example sentence
sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
|
cynthiachan/finetuned-deberta-base-10pct
|
cynthiachan
| 2022-08-31T08:56:14Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta",
"token-classification",
"generated_from_trainer",
"dataset:cynthiachan/FeedRef_10pct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-31T08:40:40Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- cynthiachan/FeedRef_10pct
model-index:
- name: training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training
This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the cynthiachan/FeedRef_10pct dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0810
- Attackid Precision: 1.0
- Attackid Recall: 1.0
- Attackid F1: 1.0
- Attackid Number: 6
- Cve Precision: 1.0
- Cve Recall: 1.0
- Cve F1: 1.0
- Cve Number: 11
- Defenderthreat Precision: 0.0
- Defenderthreat Recall: 0.0
- Defenderthreat F1: 0.0
- Defenderthreat Number: 2
- Domain Precision: 1.0
- Domain Recall: 0.9565
- Domain F1: 0.9778
- Domain Number: 23
- Email Precision: 1.0
- Email Recall: 1.0
- Email F1: 1.0
- Email Number: 3
- Filepath Precision: 0.8841
- Filepath Recall: 0.8788
- Filepath F1: 0.8815
- Filepath Number: 165
- Hostname Precision: 1.0
- Hostname Recall: 1.0
- Hostname F1: 1.0
- Hostname Number: 12
- Ipv4 Precision: 1.0
- Ipv4 Recall: 1.0
- Ipv4 F1: 1.0
- Ipv4 Number: 12
- Md5 Precision: 0.8333
- Md5 Recall: 0.9615
- Md5 F1: 0.8929
- Md5 Number: 52
- Sha1 Precision: 0.6667
- Sha1 Recall: 0.8571
- Sha1 F1: 0.75
- Sha1 Number: 7
- Sha256 Precision: 0.9565
- Sha256 Recall: 1.0
- Sha256 F1: 0.9778
- Sha256 Number: 44
- Uri Precision: 0.0
- Uri Recall: 0.0
- Uri F1: 0.0
- Uri Number: 1
- Overall Precision: 0.9014
- Overall Recall: 0.9201
- Overall F1: 0.9107
- Overall Accuracy: 0.9851
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Attackid Precision | Attackid Recall | Attackid F1 | Attackid Number | Cve Precision | Cve Recall | Cve F1 | Cve Number | Defenderthreat Precision | Defenderthreat Recall | Defenderthreat F1 | Defenderthreat Number | Domain Precision | Domain Recall | Domain F1 | Domain Number | Email Precision | Email Recall | Email F1 | Email Number | Filepath Precision | Filepath Recall | Filepath F1 | Filepath Number | Hostname Precision | Hostname Recall | Hostname F1 | Hostname Number | Ipv4 Precision | Ipv4 Recall | Ipv4 F1 | Ipv4 Number | Md5 Precision | Md5 Recall | Md5 F1 | Md5 Number | Sha1 Precision | Sha1 Recall | Sha1 F1 | Sha1 Number | Sha256 Precision | Sha256 Recall | Sha256 F1 | Sha256 Number | Uri Precision | Uri Recall | Uri F1 | Uri Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:-------------:|:----------:|:------:|:----------:|:------------------------:|:---------------------:|:-----------------:|:---------------------:|:----------------:|:-------------:|:---------:|:-------------:|:---------------:|:------------:|:--------:|:------------:|:------------------:|:---------------:|:-----------:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------:|:----------:|:------:|:----------:|:--------------:|:-----------:|:-------:|:-----------:|:----------------:|:-------------:|:---------:|:-------------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.3797 | 0.37 | 500 | 0.1998 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.0286 | 0.0435 | 0.0345 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.5108 | 0.7152 | 0.5960 | 165 | 0.1774 | 0.9167 | 0.2973 | 12 | 0.4 | 0.5 | 0.4444 | 12 | 0.3194 | 0.4423 | 0.3710 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.4588 | 0.8864 | 0.6047 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.3875 | 0.5858 | 0.4664 | 0.9593 |
| 0.1713 | 0.75 | 1000 | 0.1619 | 0.6 | 0.5 | 0.5455 | 6 | 0.5 | 0.6364 | 0.56 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.6957 | 0.6957 | 0.6957 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.6879 | 0.6545 | 0.6708 | 165 | 0.5217 | 1.0 | 0.6857 | 12 | 0.5714 | 1.0 | 0.7273 | 12 | 0.6667 | 0.8846 | 0.7603 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7692 | 0.9091 | 0.8333 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.6685 | 0.7219 | 0.6942 | 0.9664 |
| 0.1152 | 1.12 | 1500 | 0.1096 | 0.8333 | 0.8333 | 0.8333 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.7826 | 0.7826 | 0.7826 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.7202 | 0.8424 | 0.7765 | 165 | 1.0 | 1.0 | 1.0 | 12 | 0.4444 | 1.0 | 0.6154 | 12 | 0.6944 | 0.9615 | 0.8065 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8723 | 0.9318 | 0.9011 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7312 | 0.8609 | 0.7908 | 0.9751 |
| 0.1089 | 1.5 | 2000 | 0.1243 | 1.0 | 1.0 | 1.0 | 6 | 0.9167 | 1.0 | 0.9565 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.9048 | 0.8261 | 0.8636 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.8011 | 0.8788 | 0.8382 | 165 | 0.6667 | 1.0 | 0.8 | 12 | 0.9091 | 0.8333 | 0.8696 | 12 | 0.7812 | 0.9615 | 0.8621 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.7857 | 1.0 | 0.88 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8065 | 0.8876 | 0.8451 | 0.9750 |
| 0.0947 | 1.87 | 2500 | 0.0913 | 0.75 | 1.0 | 0.8571 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.8462 | 0.9565 | 0.8980 | 23 | 0.3333 | 0.6667 | 0.4444 | 3 | 0.8035 | 0.8424 | 0.8225 | 165 | 0.6 | 1.0 | 0.7500 | 12 | 1.0 | 1.0 | 1.0 | 12 | 0.7969 | 0.9808 | 0.8793 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8302 | 1.0 | 0.9072 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7952 | 0.8846 | 0.8375 | 0.9792 |
| 0.0629 | 2.25 | 3000 | 0.0940 | 1.0 | 0.8333 | 0.9091 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.9565 | 0.9565 | 0.9565 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.8671 | 0.8303 | 0.8483 | 165 | 1.0 | 1.0 | 1.0 | 12 | 1.0 | 1.0 | 1.0 | 12 | 0.9273 | 0.9808 | 0.9533 | 52 | 0.25 | 0.1429 | 0.1818 | 7 | 0.8776 | 0.9773 | 0.9247 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8946 | 0.8787 | 0.8866 | 0.9825 |
| 0.0442 | 2.62 | 3500 | 0.1012 | 1.0 | 1.0 | 1.0 | 6 | 0.9167 | 1.0 | 0.9565 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.9091 | 0.8696 | 0.8889 | 23 | 0.75 | 1.0 | 0.8571 | 3 | 0.8182 | 0.8727 | 0.8446 | 165 | 1.0 | 1.0 | 1.0 | 12 | 1.0 | 1.0 | 1.0 | 12 | 0.92 | 0.8846 | 0.9020 | 52 | 0.5 | 1.0 | 0.6667 | 7 | 0.9565 | 1.0 | 0.9778 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8616 | 0.9024 | 0.8815 | 0.9818 |
| 0.0401 | 3.0 | 4000 | 0.0810 | 1.0 | 1.0 | 1.0 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 1.0 | 0.9565 | 0.9778 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.8841 | 0.8788 | 0.8815 | 165 | 1.0 | 1.0 | 1.0 | 12 | 1.0 | 1.0 | 1.0 | 12 | 0.8333 | 0.9615 | 0.8929 | 52 | 0.6667 | 0.8571 | 0.75 | 7 | 0.9565 | 1.0 | 0.9778 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.9014 | 0.9201 | 0.9107 | 0.9851 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
VanHoan/bert-fine-tuned-cola
|
VanHoan
| 2022-08-31T08:48:56Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-20T02:35:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-fine-tuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5981073556597793
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8408
- Matthews Correlation: 0.5981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4729 | 1.0 | 1069 | 0.5311 | 0.5154 |
| 0.3134 | 2.0 | 2138 | 0.6336 | 0.6007 |
| 0.1686 | 3.0 | 3207 | 0.8408 | 0.5981 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Sandeepanie/clinical-finetuned-AgitationModel
|
Sandeepanie
| 2022-08-31T05:31:57Z | 109 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-31T04:28:14Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: clinical-finetuned-AgitationModel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clinical-finetuned-AgitationModel
This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9746
- Accuracy: 0.88
- Precision: 0.9178
- Recall: 0.9178
- F1: 0.9178
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.0949 | 1.0 | 50 | 1.0393 | 0.85 | 0.8816 | 0.9178 | 0.8993 |
| 0.0475 | 2.0 | 100 | 1.0619 | 0.85 | 0.8816 | 0.9178 | 0.8993 |
| 0.0149 | 3.0 | 150 | 0.9746 | 0.88 | 0.9178 | 0.9178 | 0.9178 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Tokenizers 0.12.1
|
jannatul17/squad-bn-qgen-mt5-all-metric
|
jannatul17
| 2022-08-31T04:50:05Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_bn",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-30T23:46:07Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_bn
metrics:
- sacrebleu
model-index:
- name: squad-bn-qgen-mt5-all-metric
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad_bn
type: squad_bn
args: squad_bn
metrics:
- name: Sacrebleu
type: sacrebleu
value: 6.4143
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-bn-qgen-mt5-all-metric
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the squad_bn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7273
- Rouge1 Precision: 35.8589
- Rouge1 Recall: 29.7041
- Rouge1 Fmeasure: 31.6373
- Rouge2 Precision: 15.4203
- Rouge2 Recall: 12.5155
- Rouge2 Fmeasure: 13.3978
- Rougel Precision: 34.4684
- Rougel Recall: 28.5887
- Rougel Fmeasure: 30.4627
- Rougelsum Precision: 34.4252
- Rougelsum Recall: 28.5362
- Rougelsum Fmeasure: 30.4053
- Sacrebleu: 6.4143
- Meteor: 0.1416
- Gen Len: 16.7199
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure | Sacrebleu | Meteor | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|:---------:|:------:|:-------:|
| 0.8449 | 1.0 | 16396 | 0.7340 | 31.6476 | 26.8901 | 28.2871 | 13.621 | 11.3545 | 11.958 | 30.3276 | 25.7754 | 27.1048 | 30.3426 | 25.7489 | 27.0991 | 5.9655 | 0.1336 | 16.8685 |
| 0.7607 | 2.0 | 32792 | 0.7182 | 33.7173 | 28.6115 | 30.1049 | 14.8227 | 12.2059 | 12.9453 | 32.149 | 27.2036 | 28.6617 | 32.2479 | 27.2261 | 28.7272 | 6.6093 | 0.138 | 16.8522 |
| 0.7422 | 3.0 | 49188 | 0.7083 | 34.6128 | 29.0223 | 30.7248 | 14.9888 | 12.3092 | 13.1021 | 33.2507 | 27.8154 | 29.4599 | 33.2848 | 27.812 | 29.5064 | 6.2407 | 0.1416 | 16.5806 |
| 0.705 | 4.0 | 65584 | 0.7035 | 34.156 | 29.0012 | 30.546 | 14.72 | 12.0251 | 12.8161 | 32.7527 | 27.6511 | 29.1955 | 32.7692 | 27.6627 | 29.231 | 6.1784 | 0.1393 | 16.7793 |
| 0.6859 | 5.0 | 81980 | 0.7038 | 35.1405 | 29.6033 | 31.2614 | 15.5108 | 12.6414 | 13.5059 | 33.8335 | 28.4264 | 30.0745 | 33.8782 | 28.4349 | 30.0901 | 6.5896 | 0.144 | 16.6651 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
js05212/ddpm-butterflies-128
|
js05212
| 2022-08-31T04:49:25Z | 1 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-31T04:08:43Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (an assumption, not from the original card): sample one image with DDPMPipeline.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("js05212/ddpm-butterflies-128")
image = pipeline().images[0]  # recent diffusers releases; older versions returned a dict instead
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/js05212/ddpm-butterflies-128/tensorboard?#scalars)
|
Late-potato/distilbert-base-uncased-finetuned-imdb
|
Late-potato
| 2022-08-31T04:47:26Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-31T04:10:28Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2999
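A minimal usage sketch (not part of the original card; the example sentence is illustrative):

```python
from transformers import pipeline

# The checkpoint is a masked-language model, so it plugs into the fill-mask pipeline
fill_mask = pipeline("fill-mask", model="Late-potato/distilbert-base-uncased-finetuned-imdb")
print(fill_mask("This movie was an absolute [MASK]."))
```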
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4977 | 1.0 | 782 | 2.3318 |
| 2.4232 | 2.0 | 1564 | 2.3005 |
| 2.386 | 3.0 | 2346 | 2.2721 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
thammarat-th/distilbert-base-uncased-finetuned-imdb
|
thammarat-th
| 2022-08-31T04:46:34Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-31T04:01:15Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4216 | 1.0 | 782 | 2.2803 |
| 2.3719 | 2.0 | 1564 | 2.2577 |
| 2.3407 | 3.0 | 2346 | 2.2320 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
earthanan/distilbert-base-uncased-finetuned-imdb
|
earthanan
| 2022-08-31T04:13:43Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"fill-mask",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-31T04:05:46Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4721
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7086 | 1.0 | 157 | 2.4898 |
| 2.5796 | 2.0 | 314 | 2.4230 |
| 2.5269 | 3.0 | 471 | 2.4354 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.12.1+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
|
mooface/xlm-roberta-base-finetuned-panx-de
|
mooface
| 2022-08-31T02:07:15Z | 116 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-31T01:43:13Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
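A minimal usage sketch (not part of the original card; the German example sentence is illustrative):

```python
from transformers import pipeline

# Token classification (NER) on German text, with word-level aggregation
ner = pipeline(
    "token-classification",
    model="mooface/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Zürich."))
```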
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
abhitopia/question-answer-generation
|
abhitopia
| 2022-08-31T00:30:48Z | 89 | 7 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"question-answer-generation",
"dataset:squad",
"arxiv:1910.10683",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-30T21:54:29Z |
---
datasets:
- squad
tags:
- question-answer-generation
widget:
- text: "generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"
- text: "question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"
license: mit
---
## T5 for multi-task QA and QG
This is a multi-task [t5-base](https://arxiv.org/abs/1910.10683) model trained for question answering and answer-aware question generation.
For question generation, the answer span is highlighted within the text with special highlight tokens (`<hl>`) and the input is prefixed with 'generate question: '. For QA, the input is formatted as `question: question_text context: context_text </s>`.
You can play with the model using the Inference API. Format the inputs as follows:
For QG
`generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>`
For QA
`question: What is 42 context: 42 is the answer to life, the universe and everything. </s>`
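A minimal local usage sketch with plain `transformers` (not part of the original card; the generation settings are assumptions):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "abhitopia/question-answer-generation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

def run(prompt: str) -> str:
    # Encode the task-prefixed prompt and decode the generated text
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Question generation: the answer span is wrapped in <hl> tokens
print(run("generate question: <hl> 42 <hl> is the answer to life, the universe and everything. </s>"))
# Question answering
print(run("question: What is 42 context: 42 is the answer to life, the universe and everything. </s>"))
```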
For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
|
microsoft/bloom-deepspeed-inference-int8
|
microsoft
| 2022-08-30T23:01:17Z | 7 | 28 |
transformers
|
[
"transformers",
"bloom",
"feature-extraction",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-08-18T18:26:43Z |
---
license: bigscience-bloom-rail-1.0
---
This is a custom INT8 version of the original [BLOOM weights](https://huggingface.co/bigscience/bloom), prepared for fast inference with the [DeepSpeed-Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/) engine, which uses Tensor Parallelism. In this repo the tensors are split into 8 shards to target 8 GPUs.
The full BLOOM documentation is [here](https://huggingface.co/bigscience/bloom).
To use the weights in this repo, you can adapt the scripts found [here](https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/scripts/inference) to your needs (XXX: they are going to migrate soon to the HF Transformers code base, so the link will need to be updated once moved).
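For orientation only, a rough sketch of how a pre-sharded checkpoint like this is typically wired into DeepSpeed-Inference; the shard-listing `checkpoints.json` file, the meta-device loading step, and the exact arguments are assumptions here, and the linked scripts remain the authoritative reference:

```python
# Rough sketch, launched with the DeepSpeed launcher on 8 GPUs, e.g.:
#   deepspeed --num_gpus 8 bloom_int8_inference.py
import torch
import deepspeed
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
config = AutoConfig.from_pretrained("microsoft/bloom-deepspeed-inference-int8")

# Instantiate the architecture without allocating real weights (they come from the shards below)
with deepspeed.OnDevice(dtype=torch.bfloat16, device="meta"):
    model = AutoModelForCausalLM.from_config(config)
model = model.eval()

# Let DeepSpeed-Inference inject INT8 kernels and load the 8 tensor-parallel shards;
# "checkpoints.json" (a file describing the shard layout) is an assumption.
ds_model = deepspeed.init_inference(
    model,
    mp_size=8,
    dtype=torch.int8,
    replace_with_kernel_inject=True,
    checkpoint="checkpoints.json",
)
```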
|
ruse40folly/distilbert-base-uncased-finetuned-emotion
|
ruse40folly
| 2022-08-30T22:15:45Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-30T21:58:02Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9235310384339321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2236
- Accuracy: 0.9235
- F1: 0.9235
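A minimal usage sketch (not part of the original card; the example sentence is illustrative):

```python
from transformers import pipeline

# Emotion classification fine-tuned on the `emotion` dataset
clf = pipeline("text-classification", model="ruse40folly/distilbert-base-uncased-finetuned-emotion")
print(clf("I can't wait to see you again!"))
```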
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8521 | 1.0 | 250 | 0.3251 | 0.9085 | 0.9063 |
| 0.2489 | 2.0 | 500 | 0.2236 | 0.9235 | 0.9235 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
|
nawage/ddpm-butterflies-128
|
nawage
| 2022-08-30T20:43:21Z | 2 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-30T19:29:51Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch (an assumption, not from the original card): sample one image with DDPMPipeline.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("nawage/ddpm-butterflies-128")
image = pipeline().images[0]  # recent diffusers releases; older versions returned a dict instead
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/nawage/ddpm-butterflies-128/tensorboard?#scalars)
|
RussianNLP/ruRoBERTa-large-rucola
|
RussianNLP
| 2022-08-30T20:23:10Z | 586 | 5 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"ru",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-30T19:54:51Z |
---
language: ru
license: apache-2.0
tags:
- transformers
thumbnail: "https://github.com/RussianNLP/RuCoLA/blob/main/logo.png"
widget:
- text: "Он решил ту или иную сложную задачу."
---
This is a finetuned version of [RuRoBERTa-large](https://huggingface.co/sberbank-ai/ruRoberta-large) for the task of linguistic acceptability classification on the [RuCoLA](https://rucola-benchmark.com/) benchmark.
The hyperparameters used for finetuning are as follows:
* 5 training epochs (with early stopping based on validation MCC)
* Peak learning rate: 1e-5, linear warmup for 10% of total training time
* Weight decay: 1e-4
* Batch size: 32
* Random seed: 5
* Optimizer: [torch.optim.AdamW](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html)
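A minimal usage sketch (not part of the original card; the returned label names follow whatever mapping is stored in the model config):

```python
from transformers import pipeline

# Binary linguistic-acceptability classification for Russian sentences
clf = pipeline("text-classification", model="RussianNLP/ruRoBERTa-large-rucola")
print(clf("Он решил ту или иную сложную задачу."))
```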
|
vendorabc/tabular-playground
|
vendorabc
| 2022-08-30T19:13:16Z | 0 | 0 |
sklearn
|
[
"sklearn",
"skops",
"tabular-classification",
"region:us"
] |
tabular-classification
| 2022-08-30T19:12:26Z |
---
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
widget:
structuredData:
attribute_0:
- material_7
- material_7
- material_7
attribute_1:
- material_6
- material_5
- material_6
attribute_2:
- 6
- 6
- 6
attribute_3:
- 9
- 6
- 9
loading:
- 101.52
- 91.34
- 167.03
measurement_0:
- 9
- 10
- 11
measurement_1:
- 11
- 11
- 5
measurement_10:
- 14.926
- 15.162
- 16.398
measurement_11:
- 20.394
- 19.46
- 20.613
measurement_12:
- 11.829
- 9.114
- 11.007
measurement_13:
- 16.195
- 16.024
- 16.061
measurement_14:
- 16.517
- 17.132
- 15.18
measurement_15:
- 13.826
- 12.257
- 15.758
measurement_16:
- 14.206
- 15.094
- .nan
measurement_17:
- 723.712
- 896.835
- 893.454
measurement_2:
- 2
- 10
- 6
measurement_3:
- 17.492
- 18.114
- 18.42
measurement_4:
- 13.962
- 10.185
- 13.565
measurement_5:
- 15.716
- 18.06
- 16.916
measurement_6:
- 17.104
- 18.283
- 17.917
measurement_7:
- 12.377
- 10.957
- 10.394
measurement_8:
- 19.221
- 20.638
- 19.805
measurement_9:
- 11.613
- 11.804
- 12.012
product_code:
- E
- D
- E
---
# Model description
This is a DecisionTreeClassifier model built for the Kaggle Tabular Playground Series (August 2022), trained on the supersoaker production failures dataset.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|-----------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| memory | |
| steps | [('transformation', ColumnTransformer(transformers=[('loading_missing_value_imputer',
SimpleImputer(), ['loading']),
('numerical_missing_value_imputer',
SimpleImputer(),
['loading', 'measurement_3', 'measurement_4',
'measurement_5', 'measurement_6',
'measurement_7', 'measurement_8',
'measurement_9', 'measurement_10',
'measurement_11', 'measurement_12',
'measurement_13', 'measurement_14',
'measurement_15', 'measurement_16',
'measurement_17']),
('attribute_0_encoder', OneHotEncoder(),
['attribute_0']),
('attribute_1_encoder', OneHotEncoder(),
['attribute_1']),
('product_code_encoder', OneHotEncoder(),
['product_code'])])), ('model', DecisionTreeClassifier(max_depth=4))] |
| verbose | False |
| transformation | ColumnTransformer(transformers=[('loading_missing_value_imputer',
SimpleImputer(), ['loading']),
('numerical_missing_value_imputer',
SimpleImputer(),
['loading', 'measurement_3', 'measurement_4',
'measurement_5', 'measurement_6',
'measurement_7', 'measurement_8',
'measurement_9', 'measurement_10',
'measurement_11', 'measurement_12',
'measurement_13', 'measurement_14',
'measurement_15', 'measurement_16',
'measurement_17']),
('attribute_0_encoder', OneHotEncoder(),
['attribute_0']),
('attribute_1_encoder', OneHotEncoder(),
['attribute_1']),
('product_code_encoder', OneHotEncoder(),
['product_code'])]) |
| model | DecisionTreeClassifier(max_depth=4) |
| transformation__n_jobs | |
| transformation__remainder | drop |
| transformation__sparse_threshold | 0.3 |
| transformation__transformer_weights | |
| transformation__transformers | [('loading_missing_value_imputer', SimpleImputer(), ['loading']), ('numerical_missing_value_imputer', SimpleImputer(), ['loading', 'measurement_3', 'measurement_4', 'measurement_5', 'measurement_6', 'measurement_7', 'measurement_8', 'measurement_9', 'measurement_10', 'measurement_11', 'measurement_12', 'measurement_13', 'measurement_14', 'measurement_15', 'measurement_16', 'measurement_17']), ('attribute_0_encoder', OneHotEncoder(), ['attribute_0']), ('attribute_1_encoder', OneHotEncoder(), ['attribute_1']), ('product_code_encoder', OneHotEncoder(), ['product_code'])] |
| transformation__verbose | False |
| transformation__verbose_feature_names_out | True |
| transformation__loading_missing_value_imputer | SimpleImputer() |
| transformation__numerical_missing_value_imputer | SimpleImputer() |
| transformation__attribute_0_encoder | OneHotEncoder() |
| transformation__attribute_1_encoder | OneHotEncoder() |
| transformation__product_code_encoder | OneHotEncoder() |
| transformation__loading_missing_value_imputer__add_indicator | False |
| transformation__loading_missing_value_imputer__copy | True |
| transformation__loading_missing_value_imputer__fill_value | |
| transformation__loading_missing_value_imputer__missing_values | nan |
| transformation__loading_missing_value_imputer__strategy | mean |
| transformation__loading_missing_value_imputer__verbose | 0 |
| transformation__numerical_missing_value_imputer__add_indicator | False |
| transformation__numerical_missing_value_imputer__copy | True |
| transformation__numerical_missing_value_imputer__fill_value | |
| transformation__numerical_missing_value_imputer__missing_values | nan |
| transformation__numerical_missing_value_imputer__strategy | mean |
| transformation__numerical_missing_value_imputer__verbose | 0 |
| transformation__attribute_0_encoder__categories | auto |
| transformation__attribute_0_encoder__drop | |
| transformation__attribute_0_encoder__dtype | <class 'numpy.float64'> |
| transformation__attribute_0_encoder__handle_unknown | error |
| transformation__attribute_0_encoder__sparse | True |
| transformation__attribute_1_encoder__categories | auto |
| transformation__attribute_1_encoder__drop | |
| transformation__attribute_1_encoder__dtype | <class 'numpy.float64'> |
| transformation__attribute_1_encoder__handle_unknown | error |
| transformation__attribute_1_encoder__sparse | True |
| transformation__product_code_encoder__categories | auto |
| transformation__product_code_encoder__drop | |
| transformation__product_code_encoder__dtype | <class 'numpy.float64'> |
| transformation__product_code_encoder__handle_unknown | error |
| transformation__product_code_encoder__sparse | True |
| model__ccp_alpha | 0.0 |
| model__class_weight | |
| model__criterion | gini |
| model__max_depth | 4 |
| model__max_features | |
| model__max_leaf_nodes | |
| model__min_impurity_decrease | 0.0 |
| model__min_samples_leaf | 1 |
| model__min_samples_split | 2 |
| model__min_weight_fraction_leaf | 0.0 |
| model__random_state | |
| model__splitter | best |
</details>
### Model Plot
The model plot is below.
<style>#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 {color: black;background-color: white;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 pre{padding: 0;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-toggleable {background-color: white;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-estimator:hover {background-color: #d4ebff;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-item {z-index: 1;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-parallel::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-parallel-item {display: flex;flex-direction: column;position: relative;background-color: white;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 
50%;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-parallel-item:only-child::after {width: 0;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;position: relative;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-label label {font-family: monospace;font-weight: bold;background-color: white;display: inline-block;line-height: 1.2em;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-label-container {position: relative;z-index: 2;text-align: center;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86 div.sk-text-repr-fallback {display: none;}</style><div id="sk-b5518c10-fd7e-49af-b124-60d3dd3d0f86" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>Pipeline(steps=[('transformation',ColumnTransformer(transformers=[('loading_missing_value_imputer',SimpleImputer(),['loading']),('numerical_missing_value_imputer',SimpleImputer(),['loading', 'measurement_3','measurement_4','measurement_5','measurement_6','measurement_7','measurement_8','measurement_9','measurement_10','measurement_11','measurement_12','measurement_13','measurement_14','measurement_15','measurement_16','measurement_17']),('attribute_0_encoder',OneHotEncoder(),['attribute_0']),('attribute_1_encoder',OneHotEncoder(),['attribute_1']),('product_code_encoder',OneHotEncoder(),['product_code'])])),('model', DecisionTreeClassifier(max_depth=4))])</pre><b>Please rerun this cell to show the HTML repr or trust the notebook.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="48fbfeb0-e954-46f7-9a36-8dfe86284fca" type="checkbox" ><label for="48fbfeb0-e954-46f7-9a36-8dfe86284fca" class="sk-toggleable__label sk-toggleable__label-arrow">Pipeline</label><div class="sk-toggleable__content"><pre>Pipeline(steps=[('transformation',ColumnTransformer(transformers=[('loading_missing_value_imputer',SimpleImputer(),['loading']),('numerical_missing_value_imputer',SimpleImputer(),['loading', 'measurement_3','measurement_4','measurement_5','measurement_6','measurement_7','measurement_8','measurement_9','measurement_10','measurement_11','measurement_12','measurement_13','measurement_14','measurement_15','measurement_16','measurement_17']),('attribute_0_encoder',OneHotEncoder(),['attribute_0']),('attribute_1_encoder',OneHotEncoder(),['attribute_1']),('product_code_encoder',OneHotEncoder(),['product_code'])])),('model', DecisionTreeClassifier(max_depth=4))])</pre></div></div></div><div class="sk-serial"><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="157828b7-30d1-4b5b-b25e-971143379fff" type="checkbox" ><label for="157828b7-30d1-4b5b-b25e-971143379fff" class="sk-toggleable__label sk-toggleable__label-arrow">transformation: ColumnTransformer</label><div 
class="sk-toggleable__content"><pre>ColumnTransformer(transformers=[('loading_missing_value_imputer',SimpleImputer(), ['loading']),('numerical_missing_value_imputer',SimpleImputer(),['loading', 'measurement_3', 'measurement_4','measurement_5', 'measurement_6','measurement_7', 'measurement_8','measurement_9', 'measurement_10','measurement_11', 'measurement_12','measurement_13', 'measurement_14','measurement_15', 'measurement_16','measurement_17']),('attribute_0_encoder', OneHotEncoder(),['attribute_0']),('attribute_1_encoder', OneHotEncoder(),['attribute_1']),('product_code_encoder', OneHotEncoder(),['product_code'])])</pre></div></div></div><div class="sk-parallel"><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="3bde7e44-3687-4b99-a3b7-b4e87023ec85" type="checkbox" ><label for="3bde7e44-3687-4b99-a3b7-b4e87023ec85" class="sk-toggleable__label sk-toggleable__label-arrow">loading_missing_value_imputer</label><div class="sk-toggleable__content"><pre>['loading']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="ef9279cb-7d77-4ef1-aafe-26e433e2a615" type="checkbox" ><label for="ef9279cb-7d77-4ef1-aafe-26e433e2a615" class="sk-toggleable__label sk-toggleable__label-arrow">SimpleImputer</label><div class="sk-toggleable__content"><pre>SimpleImputer()</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="b079e8d7-f789-4622-ad66-197193ef0061" type="checkbox" ><label for="b079e8d7-f789-4622-ad66-197193ef0061" class="sk-toggleable__label sk-toggleable__label-arrow">numerical_missing_value_imputer</label><div class="sk-toggleable__content"><pre>['loading', 'measurement_3', 'measurement_4', 'measurement_5', 'measurement_6', 'measurement_7', 'measurement_8', 'measurement_9', 'measurement_10', 'measurement_11', 'measurement_12', 'measurement_13', 'measurement_14', 'measurement_15', 'measurement_16', 'measurement_17']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="969f6026-8077-468a-b332-8ceb69bac4e9" type="checkbox" ><label for="969f6026-8077-468a-b332-8ceb69bac4e9" class="sk-toggleable__label sk-toggleable__label-arrow">SimpleImputer</label><div class="sk-toggleable__content"><pre>SimpleImputer()</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="5bb6cc8f-c971-47b8-a1bc-fe8053602d5c" type="checkbox" ><label for="5bb6cc8f-c971-47b8-a1bc-fe8053602d5c" class="sk-toggleable__label sk-toggleable__label-arrow">attribute_0_encoder</label><div class="sk-toggleable__content"><pre>['attribute_0']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="8a841657-38e1-41bb-b8f9-5ad2cc25f7d3" type="checkbox" ><label for="8a841657-38e1-41bb-b8f9-5ad2cc25f7d3" class="sk-toggleable__label sk-toggleable__label-arrow">OneHotEncoder</label><div 
class="sk-toggleable__content"><pre>OneHotEncoder()</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="be08add7-98fc-40b5-a259-d462d738780a" type="checkbox" ><label for="be08add7-98fc-40b5-a259-d462d738780a" class="sk-toggleable__label sk-toggleable__label-arrow">attribute_1_encoder</label><div class="sk-toggleable__content"><pre>['attribute_1']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="cf07a6c2-b92e-40b1-9862-2c1ca3baab47" type="checkbox" ><label for="cf07a6c2-b92e-40b1-9862-2c1ca3baab47" class="sk-toggleable__label sk-toggleable__label-arrow">OneHotEncoder</label><div class="sk-toggleable__content"><pre>OneHotEncoder()</pre></div></div></div></div></div></div><div class="sk-parallel-item"><div class="sk-item"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="244735dc-f1e1-458c-a1c6-60ef847b9cae" type="checkbox" ><label for="244735dc-f1e1-458c-a1c6-60ef847b9cae" class="sk-toggleable__label sk-toggleable__label-arrow">product_code_encoder</label><div class="sk-toggleable__content"><pre>['product_code']</pre></div></div></div><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="2f1a1c41-e1c4-40ce-afd9-9658030b3423" type="checkbox" ><label for="2f1a1c41-e1c4-40ce-afd9-9658030b3423" class="sk-toggleable__label sk-toggleable__label-arrow">OneHotEncoder</label><div class="sk-toggleable__content"><pre>OneHotEncoder()</pre></div></div></div></div></div></div></div></div><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="25044b48-b814-45f9-a75b-9ee472bdc79c" type="checkbox" ><label for="25044b48-b814-45f9-a75b-9ee472bdc79c" class="sk-toggleable__label sk-toggleable__label-arrow">DecisionTreeClassifier</label><div class="sk-toggleable__content"><pre>DecisionTreeClassifier(max_depth=4)</pre></div></div></div></div></div></div></div>
## Evaluation Results
You can find the details about the evaluation process and the evaluation results below.
| Metric | Value |
|----------|----------|
| accuracy | 0.791961 |
| f1 score | 0.791961 |
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import pickle
with open("decision-tree-playground-kaggle/model.pkl", "rb") as file:
    clf = pickle.load(file)
```
</details>
# Model Card Authors
This model card is written by the following authors:
huggingface
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```
[More Information Needed]
```
# Additional Content
## Tree Plot

## Confusion Matrix

|
vendorabc/modeltest
|
vendorabc
| 2022-08-30T19:01:03Z | 0 | 0 |
sklearn
|
[
"sklearn",
"skops",
"tabular-classification",
"license:mit",
"region:us"
] |
tabular-classification
| 2022-08-30T19:00:59Z |
---
license: mit
library_name: sklearn
tags:
- sklearn
- skops
- tabular-classification
widget:
structuredData:
area error:
- 30.29
- 96.05
- 48.31
compactness error:
- 0.01911
- 0.01652
- 0.01484
concave points error:
- 0.01037
- 0.0137
- 0.01093
concavity error:
- 0.02701
- 0.02269
- 0.02813
fractal dimension error:
- 0.003586
- 0.001698
- 0.002461
mean area:
- 481.9
- 1130.0
- 748.9
mean compactness:
- 0.1058
- 0.1029
- 0.1223
mean concave points:
- 0.03821
- 0.07951
- 0.08087
mean concavity:
- 0.08005
- 0.108
- 0.1466
mean fractal dimension:
- 0.06373
- 0.05461
- 0.05796
mean perimeter:
- 81.09
- 123.6
- 101.7
mean radius:
- 12.47
- 18.94
- 15.46
mean smoothness:
- 0.09965
- 0.09009
- 0.1092
mean symmetry:
- 0.1925
- 0.1582
- 0.1931
mean texture:
- 18.6
- 21.31
- 19.48
perimeter error:
- 2.497
- 5.486
- 3.094
radius error:
- 0.3961
- 0.7888
- 0.4743
smoothness error:
- 0.006953
- 0.004444
- 0.00624
symmetry error:
- 0.01782
- 0.01386
- 0.01397
texture error:
- 1.044
- 0.7975
- 0.7859
worst area:
- 677.9
- 1866.0
- 1156.0
worst compactness:
- 0.2378
- 0.2336
- 0.2394
worst concave points:
- 0.1015
- 0.1789
- 0.1514
worst concavity:
- 0.2671
- 0.2687
- 0.3791
worst fractal dimension:
- 0.0875
- 0.06589
- 0.08019
worst perimeter:
- 96.05
- 165.9
- 124.9
worst radius:
- 14.97
- 24.86
- 19.26
worst smoothness:
- 0.1426
- 0.1193
- 0.1546
worst symmetry:
- 0.3014
- 0.2551
- 0.2837
worst texture:
- 24.64
- 26.58
- 26.0
---
# Model description
This is a HistGradientBoostingClassifier model trained on the breast cancer dataset. It is trained with Halving Grid Search Cross-Validation, with parameter grids over max_leaf_nodes and max_depth.
## Intended uses & limitations
This model is not ready to be used in production.
## Training Procedure
### Hyperparameters
The model is trained with the hyperparameters below.
<details>
<summary> Click to expand </summary>
| Hyperparameter | Value |
|---------------------------------|----------------------------------------------------------|
| aggressive_elimination | False |
| cv | 5 |
| error_score | nan |
| estimator__categorical_features | |
| estimator__early_stopping | auto |
| estimator__l2_regularization | 0.0 |
| estimator__learning_rate | 0.1 |
| estimator__loss | auto |
| estimator__max_bins | 255 |
| estimator__max_depth | |
| estimator__max_iter | 100 |
| estimator__max_leaf_nodes | 31 |
| estimator__min_samples_leaf | 20 |
| estimator__monotonic_cst | |
| estimator__n_iter_no_change | 10 |
| estimator__random_state | |
| estimator__scoring | loss |
| estimator__tol | 1e-07 |
| estimator__validation_fraction | 0.1 |
| estimator__verbose | 0 |
| estimator__warm_start | False |
| estimator | HistGradientBoostingClassifier() |
| factor | 3 |
| max_resources | auto |
| min_resources | exhaust |
| n_jobs | -1 |
| param_grid | {'max_leaf_nodes': [5, 10, 15], 'max_depth': [2, 5, 10]} |
| random_state | 42 |
| refit | True |
| resource | n_samples |
| return_train_score | True |
| scoring | |
| verbose | 0 |
</details>
### Model Plot
The model plot is below.
<style>#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 {color: black;background-color: white;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 pre{padding: 0;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-toggleable {background-color: white;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-estimator:hover {background-color: #d4ebff;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-item {z-index: 1;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-parallel::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 2em;bottom: 0;left: 50%;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-parallel-item {display: flex;flex-direction: column;position: relative;background-color: white;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 
50%;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-parallel-item:only-child::after {width: 0;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;position: relative;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-label label {font-family: monospace;font-weight: bold;background-color: white;display: inline-block;line-height: 1.2em;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-label-container {position: relative;z-index: 2;text-align: center;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04 div.sk-text-repr-fallback {display: none;}</style><div id="sk-72410a5a-f2ab-48e8-8d36-6c2ba8f6eb04" class="sk-top-container"><div class="sk-text-repr-fallback"><pre>HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1,param_grid={'max_depth': [2, 5, 10],'max_leaf_nodes': [5, 10, 15]},random_state=42)</pre><b>Please rerun this cell to show the HTML repr or trust the notebook.</b></div><div class="sk-container" hidden><div class="sk-item sk-dashed-wrapped"><div class="sk-label-container"><div class="sk-label sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="ab167486-be7e-4eb5-be01-ba21adbd7469" type="checkbox" ><label for="ab167486-be7e-4eb5-be01-ba21adbd7469" class="sk-toggleable__label sk-toggleable__label-arrow">HalvingGridSearchCV</label><div class="sk-toggleable__content"><pre>HalvingGridSearchCV(estimator=HistGradientBoostingClassifier(), n_jobs=-1,param_grid={'max_depth': [2, 5, 10],'max_leaf_nodes': [5, 10, 15]},random_state=42)</pre></div></div></div><div class="sk-parallel"><div class="sk-parallel-item"><div class="sk-item"><div class="sk-serial"><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="e9df9f06-8d9e-4379-ad72-52f461408663" type="checkbox" ><label for="e9df9f06-8d9e-4379-ad72-52f461408663" class="sk-toggleable__label sk-toggleable__label-arrow">HistGradientBoostingClassifier</label><div class="sk-toggleable__content"><pre>HistGradientBoostingClassifier()</pre></div></div></div></div></div></div></div></div></div></div>
## Evaluation Results
You can find the details about the evaluation process and the evaluation results below.
| Metric | Value |
|----------|----------|
| accuracy | 0.959064 |
| f1 score | 0.959064 |
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
import pickle
with open(pkl_filename, 'rb') as file:  # pkl_filename: path to the pickled model downloaded from this repo
    clf = pickle.load(file)
```
</details>
# Model Card Authors
This model card is written by the following authors:
skops_user
# Model Card Contact
You can contact the model card authors through the following channels:
[More Information Needed]
# Citation
Below you can find information related to citation.
**BibTeX:**
```bibtex
@inproceedings{...,year={2020}}
```
# Additional Content
## Confusion matrix

## Hyperparameter search results
<details>
<summary> Click to expand </summary>
| iter | n_resources | mean_fit_time | std_fit_time | mean_score_time | std_score_time | param_max_depth | param_max_leaf_nodes | params | split0_test_score | split1_test_score | split2_test_score | split3_test_score | split4_test_score | mean_test_score | std_test_score | rank_test_score | split0_train_score | split1_train_score | split2_train_score | split3_train_score | split4_train_score | mean_train_score | std_train_score |
|--------|---------------|-----------------|----------------|-------------------|------------------|-------------------|------------------------|-----------------------------------------|---------------------|---------------------|---------------------|---------------------|---------------------|-------------------|------------------|-------------------|----------------------|----------------------|----------------------|----------------------|----------------------|--------------------|-------------------|
| 0 | 44 | 0.0498069 | 0.0107112 | 0.0121156 | 0.0061838 | 2 | 5 | {'max_depth': 2, 'max_leaf_nodes': 5} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0492636 | 0.0187271 | 0.00738611 | 0.00245441 | 2 | 10 | {'max_depth': 2, 'max_leaf_nodes': 10} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0572055 | 0.0153176 | 0.0111395 | 0.0010297 | 2 | 15 | {'max_depth': 2, 'max_leaf_nodes': 15} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0498482 | 0.0177091 | 0.00857358 | 0.00415935 | 5 | 5 | {'max_depth': 5, 'max_leaf_nodes': 5} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0500658 | 0.00992094 | 0.00998321 | 0.00527031 | 5 | 10 | {'max_depth': 5, 'max_leaf_nodes': 10} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0525903 | 0.0151616 | 0.00874681 | 0.00462998 | 5 | 15 | {'max_depth': 5, 'max_leaf_nodes': 15} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0512018 | 0.0130152 | 0.00881834 | 0.00500514 | 10 | 5 | {'max_depth': 10, 'max_leaf_nodes': 5} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.0566921 | 0.0186051 | 0.00513492 | 0.000498488 | 10 | 10 | {'max_depth': 10, 'max_leaf_nodes': 10} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 0 | 44 | 0.060587 | 0.04041 | 0.00987453 | 0.00529624 | 10 | 15 | {'max_depth': 10, 'max_leaf_nodes': 15} | 0.875 | 0.5 | 0.625 | 0.75 | 0.375 | 0.625 | 0.176777 | 5 | 0.628571 | 0.628571 | 0.628571 | 0.514286 | 0.514286 | 0.582857 | 0.0559883 |
| 1 | 132 | 0.232459 | 0.0479878 | 0.0145514 | 0.00856422 | 10 | 5 | {'max_depth': 10, 'max_leaf_nodes': 5} | 0.961538 | 0.923077 | 0.923077 | 0.961538 | 0.961538 | 0.946154 | 0.0188422 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1 | 132 | 0.272297 | 0.0228833 | 0.011561 | 0.0068272 | 10 | 10 | {'max_depth': 10, 'max_leaf_nodes': 10} | 0.961538 | 0.923077 | 0.923077 | 0.961538 | 0.961538 | 0.946154 | 0.0188422 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 1 | 132 | 0.239161 | 0.0330412 | 0.0116591 | 0.003554 | 10 | 15 | {'max_depth': 10, 'max_leaf_nodes': 15} | 0.961538 | 0.923077 | 0.923077 | 0.961538 | 0.961538 | 0.946154 | 0.0188422 | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 2 | 396 | 0.920334 | 0.18198 | 0.0166654 | 0.00776263 | 10 | 15 | {'max_depth': 10, 'max_leaf_nodes': 15} | 0.962025 | 0.911392 | 0.987342 | 0.974359 | 0.935897 | 0.954203 | 0.0273257 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
</details>
## Classification report
<details>
<summary> Click to expand </summary>
| index | precision | recall | f1-score | support |
|--------------|-------------|----------|------------|-----------|
| malignant | 0.951613 | 0.936508 | 0.944 | 63 |
| benign | 0.963303 | 0.972222 | 0.967742 | 108 |
| macro avg | 0.957458 | 0.954365 | 0.955871 | 171 |
| weighted avg | 0.958996 | 0.959064 | 0.958995 | 171 |
</details>
|
agustina/museo
|
agustina
| 2022-08-30T18:25:36Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-08-30T18:24:37Z |
A modern butterfly and insect museum, with white, illuminated furniture
|
epsil/Health_Psychology_Analysis
|
epsil
| 2022-08-30T17:49:10Z | 0 | 1 | null |
[
"region:us"
] | null | 2022-08-30T15:49:29Z |
### TO BE ADDED
widget:
- text: "I am going through lot of stress"
|
TingChenChang/make-multilingual-en-zh-tw-20220825062338
|
TingChenChang
| 2022-08-30T17:26:18Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-08-25T15:35:33Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# TingChenChang/make-multilingual-en-zh-tw-20220825062338
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('TingChenChang/make-multilingual-en-zh-tw-20220825062338')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('TingChenChang/make-multilingual-en-zh-tw-20220825062338')
model = AutoModel.from_pretrained('TingChenChang/make-multilingual-en-zh-tw-20220825062338')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=TingChenChang/make-multilingual-en-zh-tw-20220825062338)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11898 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
maxpe/bertin-roberta-base-spanish_sem_eval_2018_task_1
|
maxpe
| 2022-08-30T16:01:41Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"doi:10.57967/hf/0032",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-08-30T15:53:12Z |
# BERTIN-roBERTa-base-Spanish_sem_eval_2018_task_1
This is a [BERTIN-roBERTa-base-Spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) model finetuned on ~3500 tweets in Spanish annotated for 11 emotion categories in [SemEval-2018 Task 1: Affect in Tweets: SubTask 5: Emotion Classification](https://competitions.codalab.org/competitions/17751) (also available on the [Hugging Face Dataset Hub](https://huggingface.co/datasets/sem_eval_2018_task_1)).
To quickly test it locally, use a pipeline:
```python
from transformers import pipeline
pipe = pipeline("text-classification",model="maxpe/bertin-roberta-base-spanish_sem_eval_2018_task_1")
pipe("¡Odio tener tanto estrés!",top_k=11)
```
|
maxpe/twitter-roberta-base_semeval18_emodetection
|
maxpe
| 2022-08-30T15:14:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
# UPDATE: NEW AND IMPROVED MODEL AVAILABLE AT https://huggingface.co/maxpe/twitter-roberta-base-jun2022_sem_eval_2018_task_1
# Twitter-roBERTa-base_SemEval18_Emodetection
This is a Twitter-roBERTa-base model trained on ~7000 tweets in English annotated for 11 emotion categories in [SemEval-2018 Task 1: Affect in Tweets: SubTask 5: Emotion Classification](https://competitions.codalab.org/competitions/17751).
Run the classifier on the test set of the competition:
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModel
from torch.utils.data import DataLoader
import torch
import pandas as pd
# choose GPU when available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base",model_max_length=512)
# build custom model with classification layer on top and a dropout layer before
class RobertaClass(torch.nn.Module):
def __init__(self):
super(RobertaClass, self).__init__()
self.l1 = AutoModel.from_pretrained("cardiffnlp/twitter-roberta-base",return_dict=False)
self.l2 = torch.nn.Dropout(0.3)
self.l3 = torch.nn.Linear(768, 11)
def forward(self, input_ids, attention_mask):
_, output_1= self.l1(input_ids=input_ids, attention_mask=attention_mask)
output_2 = self.l2(output_1)
output = self.l3(output_2)
return output
model_name="twitter-roberta-base_semeval18_emodetection/pytorch_model.bin"
model=RobertaClass()
model.load_state_dict(torch.load(model_name,map_location=torch.device(device)))
model.eval()
# run on more than 1 GPU
model = torch.nn.DataParallel(model)
model.to(device)
twnames=['anger','anticipation','disgust','fear','joy','love','optimism','pessimism','sadness','surprise','trust']
# load from hugging face dataset hub
testset_raw = load_dataset('sem_eval_2018_task_1','subtask5.english',split='test')
# remove old columns
testset=testset_raw.remove_columns(twnames+["ID"])
# tokenize
testset_tokenized = testset.map(lambda e: tokenizer(e['Tweet'], truncation=True, padding='max_length'), batched=True)
testset_tokenized=testset_tokenized.remove_columns("Tweet")
testset_tokenized.set_format(type='torch', columns=['input_ids', 'attention_mask'])
outfile="predicted_2018-E-c-En-test-gold.txt"
MAX_LEN = 512
VALID_BATCH_SIZE = 8
# set batch size according to available RAM
# VALID_BATCH_SIZE = 1000
# set num_workers for parallel processing
inference_params = {'batch_size': VALID_BATCH_SIZE,
'shuffle': False,
# 'num_workers': 1
}
inference_loader = DataLoader(testset_tokenized, **inference_params)
open(outfile,"w").close()
with torch.no_grad():
# change lines for progress manager
# for _, data in tqdm(enumerate(inference_loader, 0),total=len(inference_loader)):
for _, data in enumerate(inference_loader, 0):
outputs = model(input_ids=data['input_ids'],attention_mask=data['attention_mask'])
fin_outputs=torch.sigmoid(outputs).cpu().detach().numpy().tolist()
pd.DataFrame(fin_outputs).to_csv(outfile,index=False,header=False,sep="\t",mode='a')
# # dataset from file (one text per line)
# from datasets import Dataset
# with open(linesoftextfile,"rb") as textfile:
# textdict={"text":[x.decode().rstrip("\n") for x in textfile.readlines()]}
# inference_dataset=Dataset.from_dict(textdict)
# del(textdict)
```
|
muhtasham/bert-small-finer-longer
|
muhtasham
| 2022-08-30T14:26:44Z | 180 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-29T12:21:01Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-small-finer-longer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small-finer-longer
This model is a fine-tuned version of [muhtasham/bert-small-finer](https://huggingface.co/muhtasham/bert-small-finer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4264
## Model description
More information needed
## Intended uses & limitations
More information needed
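Pending a fuller description, a minimal fill-mask sketch is given below; it assumes the standard 🤗 Transformers pipeline, and the example sentence is illustrative only.
```python
from transformers import pipeline

# Minimal sketch, assuming the standard fill-mask pipeline; the sentence is illustrative only.
unmasker = pipeline("fill-mask", model="muhtasham/bert-small-finer-longer")
print(unmasker("The company reported a [MASK] in quarterly revenue."))
```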
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 0.49 | 500 | 1.6683 |
| 1.5941 | 0.97 | 1000 | 1.6569 |
| 1.5941 | 1.46 | 1500 | 1.6436 |
| 1.5605 | 1.94 | 2000 | 1.6173 |
| 1.5605 | 2.43 | 2500 | 1.6073 |
| 1.5297 | 2.91 | 3000 | 1.6001 |
| 1.5297 | 3.4 | 3500 | 1.5815 |
| 1.5022 | 3.89 | 4000 | 1.5756 |
| 1.5022 | 4.37 | 4500 | 1.5568 |
| 1.4753 | 4.86 | 5000 | 1.5458 |
| 1.4753 | 5.34 | 5500 | 1.5399 |
| 1.4537 | 5.83 | 6000 | 1.5273 |
| 1.4537 | 6.32 | 6500 | 1.5192 |
| 1.433 | 6.8 | 7000 | 1.5099 |
| 1.433 | 7.29 | 7500 | 1.5083 |
| 1.4169 | 7.77 | 8000 | 1.4957 |
| 1.4169 | 8.26 | 8500 | 1.4914 |
| 1.3982 | 8.75 | 9000 | 1.4859 |
| 1.3982 | 9.23 | 9500 | 1.4697 |
| 1.3877 | 9.72 | 10000 | 1.4711 |
| 1.3877 | 10.2 | 10500 | 1.4608 |
| 1.3729 | 10.69 | 11000 | 1.4583 |
| 1.3729 | 11.18 | 11500 | 1.4513 |
| 1.3627 | 11.66 | 12000 | 1.4498 |
| 1.3627 | 12.15 | 12500 | 1.4396 |
| 1.357 | 12.63 | 13000 | 1.4415 |
| 1.357 | 13.12 | 13500 | 1.4347 |
| 1.3484 | 13.61 | 14000 | 1.4316 |
| 1.3484 | 14.09 | 14500 | 1.4319 |
| 1.3442 | 14.58 | 15000 | 1.4268 |
| 1.3442 | 15.06 | 15500 | 1.4293 |
| 1.3387 | 15.55 | 16000 | 1.4217 |
| 1.3387 | 16.03 | 16500 | 1.4241 |
| 1.3358 | 16.52 | 17000 | 1.4250 |
| 1.3358 | 17.01 | 17500 | 1.4196 |
| 1.3344 | 17.49 | 18000 | 1.4193 |
| 1.3344 | 17.98 | 18500 | 1.4200 |
| 1.3274 | 18.46 | 19000 | 1.4250 |
| 1.3274 | 18.95 | 19500 | 1.4168 |
| 1.3348 | 19.44 | 20000 | 1.4164 |
| 1.3348 | 19.92 | 20500 | 1.4264 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
jcmc/reinforce-Pong
|
jcmc
| 2022-08-30T14:22:45Z | 0 | 0 | null |
[
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-30T14:22:37Z |
---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-Pong
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0** .
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
igpaub/q-FrozenLake-v1-8x8
|
igpaub
| 2022-08-30T14:03:55Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-30T12:58:49Z |
---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.35 +/- 0.48
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="igpaub/q-FrozenLake-v1-8x8", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
jannatul17/squad-bn-qgen-banglat5
|
jannatul17
| 2022-08-30T12:35:21Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:squad_bn",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-30T03:12:00Z |
---
tags:
- generated_from_trainer
datasets:
- squad_bn
metrics:
- sacrebleu
model-index:
- name: squad-bn-qgen-banglat5
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: squad_bn
type: squad_bn
args: squad_bn
metrics:
- name: Sacrebleu
type: sacrebleu
value: 8.0898
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# squad-bn-qgen-banglat5
This model is a fine-tuned version of [csebuetnlp/banglat5](https://huggingface.co/csebuetnlp/banglat5) on the squad_bn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4808
- Rouge1 Precision: 37.7366
- Rouge1 Recall: 34.2712
- Rouge1 Fmeasure: 34.8738
- Rouge2 Precision: 16.2055
- Rouge2 Recall: 14.568
- Rouge2 Fmeasure: 14.852
- Rougel Precision: 35.4241
- Rougel Recall: 32.2011
- Rougel Fmeasure: 32.7617
- Rougelsum Precision: 35.4167
- Rougelsum Recall: 32.1978
- Rougelsum Fmeasure: 32.7572
- Sacrebleu: 8.0898
- Meteor: 0.1782
- Gen Len: 9.8299
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure | Sacrebleu | Meteor | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|:---------:|:------:|:-------:|
| 0.5208 | 1.0 | 16396 | 0.4683 | 38.566 | 35.5094 | 35.9216 | 17.0701 | 15.3916 | 15.6829 | 36.4433 | 33.5298 | 33.958 | 36.4637 | 33.5496 | 33.9913 | 8.6055 | 0.1799 | 9.8340 |
| 0.479 | 2.0 | 32792 | 0.4815 | 40.7475 | 35.8163 | 37.0498 | 17.9002 | 15.2742 | 15.9601 | 38.6977 | 33.8607 | 35.1258 | 38.7261 | 33.8717 | 35.1537 | 9.0561 | 0.1835 | 9.4338 |
| 0.4577 | 3.0 | 49188 | 0.4879 | 40.6712 | 36.2763 | 37.2775 | 18.5942 | 16.0689 | 16.7206 | 38.8546 | 34.5013 | 35.5491 | 38.8633 | 34.5255 | 35.5682 | 9.7947 | 0.1879 | 9.6324 |
| 0.4389 | 4.0 | 65584 | 0.4881 | 41.4251 | 36.2873 | 37.6272 | 18.561 | 15.7067 | 16.5358 | 39.434 | 34.3496 | 35.7457 | 39.533 | 34.4702 | 35.8347 | 9.7612 | 0.1881 | 9.3944 |
| 0.4321 | 5.0 | 81980 | 0.4937 | 41.1197 | 36.0568 | 37.4121 | 18.7179 | 15.8348 | 16.6644 | 39.3386 | 34.3177 | 35.7088 | 39.3171 | 34.3015 | 35.6748 | 9.8263 | 0.1887 | 9.4040 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
huggingbase/xlm-roberta-base-finetuned-panx-all
|
huggingbase
| 2022-08-30T12:29:00Z | 106 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-30T11:59:48Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1745
- F1: 0.8505
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3055 | 1.0 | 835 | 0.1842 | 0.8099 |
| 0.1561 | 2.0 | 1670 | 0.1711 | 0.8452 |
| 0.1016 | 3.0 | 2505 | 0.1745 | 0.8505 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
huggingbase/xlm-roberta-base-finetuned-panx-de
|
huggingbase
| 2022-08-30T10:28:31Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-30T10:03:25Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8648740833380706
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1365
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 |
| 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 |
| 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Conrad747/lg-en-v2
|
Conrad747
| 2022-08-30T10:06:19Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-30T09:47:44Z |
---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: lg-en-test-version
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lg-en-test-version
This model is a fine-tuned version of [AI-Lab-Makerere/lg_en](https://huggingface.co/AI-Lab-Makerere/lg_en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5803
- Bleu: 31.3111
## Model description
More information needed
## Intended uses & limitations
More information needed
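Pending a fuller description, a minimal inference sketch is shown below; it assumes this Luganda-to-English Marian checkpoint can be driven through the generic text2text-generation pipeline, and the example sentence is illustrative only.
```python
from transformers import pipeline

# Minimal sketch, assuming the generic text2text-generation pipeline works for this
# Luganda-to-English Marian checkpoint; the input sentence is illustrative only.
translator = pipeline("text2text-generation", model="Conrad747/lg-en-v2")
print(translator("Oli otya?"))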
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.687717341785184e-05
- train_batch_size: 15
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.0 | 24 | 1.0100 | 28.5722 |
| No log | 2.0 | 48 | 0.7758 | 27.7506 |
| No log | 3.0 | 72 | 0.6459 | 40.3866 |
| No log | 4.0 | 96 | 0.5803 | 31.3111 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ML-unipi/bart-large-tos
|
ML-unipi
| 2022-08-30T09:39:27Z | 9 | 2 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2022-08-29T07:26:49Z |
---
tags:
- summarization
license: mit
---
# BART (large-sized model), fine-tuned on custom contracts dataset
BART model pre-trained on the English language and fine-tuned for abstractive summarization of terms of service documents
## Model description
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on a custom dataset of terms of service contracts for abstractive summarization.
## Intended uses & limitations
You can use this model for text summarization.
### How to use
Here is how to use this model with the [pipeline API](https://huggingface.co/transformers/main_classes/pipelines.html):
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="ML-unipi/bart-large-tos")
ARTICLE = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.
A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.
Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other.
In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage.
Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the
2010 marriage license application, according to court documents.
Prosecutors said the marriages were part of an immigration scam.
On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.
After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective
Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.
All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.
Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.
Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.
The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s
Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.
Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.
If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18.
"""
print(summarizer(ARTICLE, max_length=130, min_length=30, do_sample=False))
>>> [{'summary_text': 'Liana Barrientos, 39, is charged with two counts of "offering a false instrument for filing in the first degree" In total, she has been married 10 times, with nine of her marriages occurring between 1999 and 2002. She is believed to still be married to four men.'}]
```
|
xud/ddpm-butterflies-128
|
xud
| 2022-08-30T09:21:47Z | 1 | 2 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-30T08:09:01Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
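Until the snippet above is filled in, here is a minimal sketch; it assumes the standard `DDPMPipeline` API of recent 🤗 Diffusers releases.
```python
from diffusers import DDPMPipeline

# Minimal sketch, assuming recent diffusers releases where the pipeline call returns PIL images in `.images`.
pipeline = DDPMPipeline.from_pretrained("xud/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```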
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/xud/ddpm-butterflies-128/tensorboard?#scalars)
|
igpaub/q-FrozenLake-v1-8x8-noSlippery
|
igpaub
| 2022-08-30T09:15:32Z | 0 | 0 | null |
[
"FrozenLake-v1-8x8-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-30T09:15:24Z |
---
tags:
- FrozenLake-v1-8x8-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8-no_slippery
type: FrozenLake-v1-8x8-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="igpaub/q-FrozenLake-v1-8x8-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
hisaoka/pegasus-samsum
|
hisaoka
| 2022-08-30T07:52:51Z | 10 | 0 |
transformers
|
[
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T04:25:46Z |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4886
## Model description
More information needed
## Intended uses & limitations
More information needed
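Pending a fuller description, a minimal usage sketch is given below; it assumes the standard summarization pipeline, and the dialogue is illustrative only.
```python
from transformers import pipeline

# Minimal sketch, assuming the standard summarization pipeline; the dialogue is illustrative only.
summarizer = pipeline("summarization", model="hisaoka/pegasus-samsum")
dialogue = """Anna: Are we still on for lunch tomorrow?
Ben: Yes, 12:30 at the usual place.
Anna: Perfect, see you there!"""
print(summarizer(dialogue))
```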
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6853 | 0.54 | 500 | 1.4886 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
cynthiachan/finetuned-roberta-base-10pct
|
cynthiachan
| 2022-08-30T06:49:09Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:cynthiachan/FeedRef_10pct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-29T03:56:32Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- cynthiachan/FeedRef_10pct
model-index:
- name: training
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# training
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the cynthiachan/FeedRef_10pct dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1033
- Attackid Precision: 1.0
- Attackid Recall: 1.0
- Attackid F1: 1.0
- Attackid Number: 6
- Cve Precision: 1.0
- Cve Recall: 1.0
- Cve F1: 1.0
- Cve Number: 11
- Defenderthreat Precision: 0.0
- Defenderthreat Recall: 0.0
- Defenderthreat F1: 0.0
- Defenderthreat Number: 2
- Domain Precision: 0.8636
- Domain Recall: 0.8261
- Domain F1: 0.8444
- Domain Number: 23
- Email Precision: 1.0
- Email Recall: 1.0
- Email F1: 1.0
- Email Number: 3
- Filepath Precision: 0.8108
- Filepath Recall: 0.9091
- Filepath F1: 0.8571
- Filepath Number: 165
- Hostname Precision: 0.9231
- Hostname Recall: 1.0
- Hostname F1: 0.9600
- Hostname Number: 12
- Ipv4 Precision: 0.9167
- Ipv4 Recall: 0.9167
- Ipv4 F1: 0.9167
- Ipv4 Number: 12
- Md5 Precision: 0.875
- Md5 Recall: 0.9423
- Md5 F1: 0.9074
- Md5 Number: 52
- Sha1 Precision: 0.75
- Sha1 Recall: 0.8571
- Sha1 F1: 0.8000
- Sha1 Number: 7
- Sha256 Precision: 0.8
- Sha256 Recall: 1.0
- Sha256 F1: 0.8889
- Sha256 Number: 44
- Uri Precision: 0.0
- Uri Recall: 0.0
- Uri F1: 0.0
- Uri Number: 1
- Overall Precision: 0.8383
- Overall Recall: 0.9201
- Overall F1: 0.8773
- Overall Accuracy: 0.9816
## Model description
More information needed
## Intended uses & limitations
More information needed
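Pending a fuller description, a minimal extraction sketch is given below; it assumes the standard token-classification pipeline, and the sentence and indicators of compromise are illustrative only.
```python
from transformers import pipeline

# Minimal sketch, assuming the standard token-classification pipeline;
# the sentence and indicators of compromise are illustrative only.
ner = pipeline("token-classification",
               model="cynthiachan/finetuned-roberta-base-10pct",
               aggregation_strategy="simple")
print(ner("The dropper writes C:\\Windows\\Temp\\payload.exe and beacons to 192.168.1.10."))
```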
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Attackid Precision | Attackid Recall | Attackid F1 | Attackid Number | Cve Precision | Cve Recall | Cve F1 | Cve Number | Defenderthreat Precision | Defenderthreat Recall | Defenderthreat F1 | Defenderthreat Number | Domain Precision | Domain Recall | Domain F1 | Domain Number | Email Precision | Email Recall | Email F1 | Email Number | Filepath Precision | Filepath Recall | Filepath F1 | Filepath Number | Hostname Precision | Hostname Recall | Hostname F1 | Hostname Number | Ipv4 Precision | Ipv4 Recall | Ipv4 F1 | Ipv4 Number | Md5 Precision | Md5 Recall | Md5 F1 | Md5 Number | Sha1 Precision | Sha1 Recall | Sha1 F1 | Sha1 Number | Sha256 Precision | Sha256 Recall | Sha256 F1 | Sha256 Number | Uri Precision | Uri Recall | Uri F1 | Uri Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:-------------:|:----------:|:------:|:----------:|:------------------------:|:---------------------:|:-----------------:|:---------------------:|:----------------:|:-------------:|:---------:|:-------------:|:---------------:|:------------:|:--------:|:------------:|:------------------:|:---------------:|:-----------:|:---------------:|:------------------:|:---------------:|:-----------:|:---------------:|:--------------:|:-----------:|:-------:|:-----------:|:-------------:|:----------:|:------:|:----------:|:--------------:|:-----------:|:-------:|:-----------:|:----------------:|:-------------:|:---------:|:-------------:|:-------------:|:----------:|:------:|:----------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.4353 | 0.37 | 500 | 0.3525 | 0.0 | 0.0 | 0.0 | 6 | 0.0 | 0.0 | 0.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.0 | 0.0 | 0.0 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.3984 | 0.6182 | 0.4846 | 165 | 0.0714 | 0.3333 | 0.1176 | 12 | 0.0 | 0.0 | 0.0 | 12 | 0.8936 | 0.8077 | 0.8485 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.4937 | 0.8864 | 0.6341 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.4156 | 0.5533 | 0.4746 | 0.9459 |
| 0.2089 | 0.75 | 1000 | 0.1812 | 0.0 | 0.0 | 0.0 | 6 | 0.9 | 0.8182 | 0.8571 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.15 | 0.2609 | 0.1905 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.6432 | 0.7758 | 0.7033 | 165 | 0.0 | 0.0 | 0.0 | 12 | 0.6471 | 0.9167 | 0.7586 | 12 | 0.7143 | 0.8654 | 0.7826 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.5286 | 0.8409 | 0.6491 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.5315 | 0.6982 | 0.6036 | 0.9626 |
| 0.1453 | 1.12 | 1500 | 0.1374 | 0.75 | 0.5 | 0.6 | 6 | 0.9167 | 1.0 | 0.9565 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.5135 | 0.8261 | 0.6333 | 23 | 0.0 | 0.0 | 0.0 | 3 | 0.6863 | 0.8485 | 0.7588 | 165 | 0.7 | 0.5833 | 0.6364 | 12 | 0.6667 | 0.6667 | 0.6667 | 12 | 0.8167 | 0.9423 | 0.8750 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8333 | 0.9091 | 0.8696 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7048 | 0.8195 | 0.7579 | 0.9745 |
| 0.1277 | 1.5 | 2000 | 0.1400 | 1.0 | 1.0 | 1.0 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.7273 | 0.6957 | 0.7111 | 23 | 0.2 | 0.3333 | 0.25 | 3 | 0.7181 | 0.8182 | 0.7649 | 165 | 0.9167 | 0.9167 | 0.9167 | 12 | 0.7857 | 0.9167 | 0.8462 | 12 | 0.8167 | 0.9423 | 0.8750 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.8302 | 1.0 | 0.9072 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7634 | 0.8402 | 0.8000 | 0.9735 |
| 0.1074 | 1.87 | 2500 | 0.1101 | 1.0 | 1.0 | 1.0 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.72 | 0.7826 | 0.7500 | 23 | 0.2857 | 0.6667 | 0.4 | 3 | 0.7554 | 0.8424 | 0.7966 | 165 | 0.8571 | 1.0 | 0.9231 | 12 | 0.8182 | 0.75 | 0.7826 | 12 | 0.9259 | 0.9615 | 0.9434 | 52 | 0.0 | 0.0 | 0.0 | 7 | 0.6833 | 0.9318 | 0.7885 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.7660 | 0.8521 | 0.8067 | 0.9762 |
| 0.0758 | 2.25 | 3000 | 0.1161 | 1.0 | 1.0 | 1.0 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.9091 | 0.8696 | 0.8889 | 23 | 0.5 | 0.6667 | 0.5714 | 3 | 0.8251 | 0.9152 | 0.8678 | 165 | 1.0 | 1.0 | 1.0 | 12 | 1.0 | 0.6667 | 0.8 | 12 | 0.9259 | 0.9615 | 0.9434 | 52 | 1.0 | 0.5714 | 0.7273 | 7 | 0.8958 | 0.9773 | 0.9348 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8722 | 0.9083 | 0.8899 | 0.9814 |
| 0.064 | 2.62 | 3500 | 0.1275 | 1.0 | 1.0 | 1.0 | 6 | 0.8333 | 0.9091 | 0.8696 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.8947 | 0.7391 | 0.8095 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.8418 | 0.9030 | 0.8713 | 165 | 0.8571 | 1.0 | 0.9231 | 12 | 1.0 | 0.75 | 0.8571 | 12 | 0.9245 | 0.9423 | 0.9333 | 52 | 0.6667 | 0.5714 | 0.6154 | 7 | 0.8113 | 0.9773 | 0.8866 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8580 | 0.8935 | 0.8754 | 0.9793 |
| 0.0522 | 3.0 | 4000 | 0.1033 | 1.0 | 1.0 | 1.0 | 6 | 1.0 | 1.0 | 1.0 | 11 | 0.0 | 0.0 | 0.0 | 2 | 0.8636 | 0.8261 | 0.8444 | 23 | 1.0 | 1.0 | 1.0 | 3 | 0.8108 | 0.9091 | 0.8571 | 165 | 0.9231 | 1.0 | 0.9600 | 12 | 0.9167 | 0.9167 | 0.9167 | 12 | 0.875 | 0.9423 | 0.9074 | 52 | 0.75 | 0.8571 | 0.8000 | 7 | 0.8 | 1.0 | 0.8889 | 44 | 0.0 | 0.0 | 0.0 | 1 | 0.8383 | 0.9201 | 0.8773 | 0.9816 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu102
- Datasets 2.4.0
- Tokenizers 0.12.1
|
philschmid/custom-pipeline-text-classification
|
philschmid
| 2022-08-30T06:43:39Z | 0 | 1 |
generic
|
[
"generic",
"text-classification",
"region:us"
] |
text-classification
| 2022-07-18T12:21:29Z |
---
tags:
- text-classification
library_name: generic
---
# Text Classification repository template
This is a template repository for Text Classification that supports generic inference with the Hugging Face Hub generic Inference API. There are two required steps:
1. Specify the requirements by defining a `requirements.txt` file.
2. Implement the `pipeline.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload all the elements needed for inference (model, processors, tokenizers, etc.). This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
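A minimal sketch of such a `pipeline.py` is shown below; the class name and exact input/output contract are assumptions here, so follow whatever the template in this repository declares.
```python
# pipeline.py — minimal sketch only; the class name and I/O contract below are
# assumptions and must match what the generic Inference API template declares.
from typing import Any, Dict, List

from transformers import pipeline


class PreTrainedPipeline:
    def __init__(self, path: str = ""):
        # Called once: preload the model, tokenizer and anything else needed for inference.
        self.pipe = pipeline("text-classification", model=path, tokenizer=path)

    def __call__(self, inputs: str) -> List[Dict[str, Any]]:
        # Called per request: run the actual inference and return label/score pairs.
        return self.pipe(inputs)
```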
## How to start
First create a repo in https://hf.co/new.
Then clone this template and push it to your repo.
```
git clone https://huggingface.co/templates/text-classification
cd text-classification
git remote set-url origin https://huggingface.co/$YOUR_USER/$YOUR_REPO_NAME
git push --force
```
|
jaynlp/t5-large-transferqa
|
jaynlp
| 2022-08-30T02:47:11Z | 103 | 1 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2109.04655",
"arxiv:2203.01552",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
We reproduced the [TransferQA paper's](https://arxiv.org/abs/2109.04655) QA pre-trained weights.
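A minimal loading sketch is given below; the exact question-answering prompt format follows the TransferQA paper and is not documented here, so the input string is illustrative only.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Minimal sketch; the QA prompt format comes from the TransferQA paper and the
# string below is illustrative only.
tokenizer = AutoTokenizer.from_pretrained("jaynlp/t5-large-transferqa")
model = AutoModelForSeq2SeqLM.from_pretrained("jaynlp/t5-large-transferqa")
inputs = tokenizer("question: where is the hotel? context: the hotel is in the city centre.",
                   return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```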
If you use this work for your research, please cite our work [Dialogue Summaries as Dialogue States ({DS}2), Template-Guided Summarization for Few-shot Dialogue State Tracking](https://arxiv.org/abs/2203.01552)
### Citation
```
@inproceedings{shin-etal-2022-dialogue,
title = "Dialogue Summaries as Dialogue States ({DS}2), Template-Guided Summarization for Few-shot Dialogue State Tracking",
author = "Shin, Jamin and
Yu, Hangyeol and
Moon, Hyeongdon and
Madotto, Andrea and
Park, Juneyoung",
booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.findings-acl.302",
pages = "3824--3846",
abstract = "Annotating task-oriented dialogues is notorious for the expensive and difficult data collection process. Few-shot dialogue state tracking (DST) is a realistic solution to this problem. In this paper, we hypothesize that dialogue summaries are essentially unstructured dialogue states; hence, we propose to reformulate dialogue state tracking as a dialogue summarization problem. To elaborate, we train a text-to-text language model with synthetic template-based dialogue summaries, generated by a set of rules from the dialogue states. Then, the dialogue states can be recovered by inversely applying the summary generation rules. We empirically show that our method DS2 outperforms previous works on few-shot DST in MultiWoZ 2.0 and 2.1, in both cross-domain and multi-domain settings. Our method also exhibits vast speedup during both training and inference as it can generate all states at once.Finally, based on our analysis, we discover that the naturalness of the summary templates plays a key role for successful training.",
}
```
|
JAlexis/modelv2
|
JAlexis
| 2022-08-30T02:38:24Z | 12 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"question-answering",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-08-30T02:20:27Z |
---
widget:
- text: "How can I protect myself against covid-19?"
context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. "
- text: "What are the risk factors for covid-19?"
context: "To identify risk factors for hospital deaths from COVID-19, the OpenSAFELY platform examined electronic health records from 17.4 million UK adults. The authors used multivariable Cox proportional hazards model to identify the association of risk of death with older age, lower socio-economic status, being male, non-white ethnic background and certain clinical conditions (diabetes, obesity, cancer, respiratory diseases, heart, kidney, liver, neurological and autoimmune conditions). Notably, asthma was identified as a risk factor, despite prior suggestion of a potential protective role. Interestingly, higher risks due to ethnicity or lower socio-economic status could not be completely attributed to pre-existing health conditions."
---
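Given the question-answering pipeline tag, a minimal usage sketch with the first widget example above would look like this:
```python
from transformers import pipeline

# Minimal sketch, assuming the standard extractive question-answering pipeline.
qa = pipeline("question-answering", model="JAlexis/modelv2")
print(qa(
    question="How can I protect myself against covid-19?",
    context=(
        "Preventative measures consist of recommendations to wear a mask in public, "
        "maintain social distancing of at least six feet, wash hands regularly, "
        "and use hand sanitizer."
    ),
))
```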
|
dvalbuena1/Reinforce-Pong
|
dvalbuena1
| 2022-08-30T01:35:00Z | 0 | 0 | null |
[
"Pong-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-30T01:34:06Z |
---
tags:
- Pong-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pong
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pong-PLE-v0
type: Pong-PLE-v0
metrics:
- type: mean_reward
value: -16.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pong-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0** .
To learn to use this model and train yours, check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
Einmalumdiewelt/DistilBART_CNN_GNAD_V2
|
Einmalumdiewelt
| 2022-08-29T23:21:34Z | 14 | 1 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"de",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T15:01:52Z |
---
language:
- de
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: DistilBART_CNN_GNAD_V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DistilBART_CNN_GNAD_V2
This model is a fine-tuned version of [Einmalumdiewelt/DistilBART_CNN_GNAD_V2](https://huggingface.co/Einmalumdiewelt/DistilBART_CNN_GNAD_V2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7281
- Rouge1: 27.7253
- Rouge2: 8.4647
- Rougel: 18.2059
- Rougelsum: 23.238
- Gen Len: 91.6827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
theunnecessarythings/ddpm-butterflies-128
|
theunnecessarythings
| 2022-08-29T19:31:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"en",
"dataset:huggan/smithsonian_butterflies_subset",
"license:apache-2.0",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2022-08-29T18:19:26Z |
---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
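Until the snippet above is filled in, a minimal sketch assuming the standard `DDPMPipeline` API of recent 🤗 Diffusers releases:
```python
from diffusers import DDPMPipeline

# Minimal sketch, assuming recent diffusers releases where the call returns PIL images in `.images`.
pipeline = DDPMPipeline.from_pretrained("theunnecessarythings/ddpm-butterflies-128")
pipeline().images[0].save("butterfly.png")
```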
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/sreerajr000/ddpm-butterflies-128/tensorboard?#scalars)
|
salmujaiwel/wav2vec2-large-xls-r-300m-arabic-saudi-colab
|
salmujaiwel
| 2022-08-29T19:30:47Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-08-29T19:13:10Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-arabic-saudi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-arabic-saudi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
ish97/bert-finetuned-chunking-for-echo-reading
|
ish97
| 2022-08-29T19:27:28Z | 105 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-08-29T18:07:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-chunking-for-echo-reading
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-chunking-for-echo-reading
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3411
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.875
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 2 | 0.4490 | 0.0 | 0.0 | 0.0 | 0.875 |
| No log | 2.0 | 4 | 0.3668 | 0.0 | 0.0 | 0.0 | 0.875 |
| No log | 3.0 | 6 | 0.3411 | 0.0 | 0.0 | 0.0 | 0.875 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
fractalego/creak-sense
|
fractalego
| 2022-08-29T19:24:27Z | 13 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:2109.01653",
"doi:10.57967/hf/0008",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-27T16:51:07Z |
# Testing whether a sentence is consistent with the CREAK dataset
This framework is trained on the [CREAK dataset](https://arxiv.org/abs/2109.01653).
# Install
pip install creak-sense
# Example
```python
from creak_sense import CreakSense
sense = CreakSense("fractalego/creak-sense")
claim = "Bananas can be found in a grocery list"
sense.make_sense(claim)
```
with output "True".
# Example with explanation
```python
from creak_sense import CreakSense
sense = CreakSense("fractalego/creak-sense")
claim = "Bananas can be found in a grocery list"
sense.get_reason(claim)
```
with output "Bananas are a staple food".
|
ntinosmg/dqn-SpaceInvadersNoFrameskip-v4
|
ntinosmg
| 2022-08-29T19:21:48Z | 2 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2022-08-29T19:21:07Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: 555.50 +/- 234.83
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m utils.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga ntinosmg -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m utils.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga ntinosmg
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
jonaskoenig/xtremedistil-l6-h256-uncased-future-time-references-D1
|
jonaskoenig
| 2022-08-29T18:44:10Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"dataset:jonaskoenig/trump_administration_statement",
"dataset:jonaskoenig/future-time-references-static-filter-D1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-07-15T10:48:03Z |
---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: xtremedistil-l6-h256-uncased-future-time-references-D1
results: []
datasets:
- jonaskoenig/trump_administration_statement
- jonaskoenig/future-time-references-static-filter-D1
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xtremedistil-l6-h256-uncased-future-time-references-D1
This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on the [jonaskoenig/trump_administration_statement](https://huggingface.co/datasets/jonaskoenig/trump_administration_statement) and [jonaskoenig/future-time-references-static-filter-D1](https://huggingface.co/datasets/jonaskoenig/future-time-references-static-filter-D1) datasets.
It achieves the following results on the evaluation set:
- Train Loss: 0.0099
- Train Sparse Categorical Accuracy: 0.9977
- Validation Loss: 0.0128
- Validation Sparse Categorical Accuracy: 0.9976
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.0276 | 0.9932 | 0.0156 | 0.9968 | 0 |
| 0.0138 | 0.9969 | 0.0125 | 0.9972 | 1 |
| 0.0117 | 0.9974 | 0.0126 | 0.9974 | 2 |
| 0.0099 | 0.9977 | 0.0128 | 0.9976 | 3 |
The test accuracy is: 99.77%
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.9.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Dizzykong/Aristotle-8-29
|
Dizzykong
| 2022-08-29T17:46:28Z | 9 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T16:31:34Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: Aristotle-8-29
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Aristotle-8-29
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/chrishildabrant
|
huggingtweets
| 2022-08-29T17:19:30Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T17:19:20Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1367991702523437062/x5beyUQ-_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Chris Hildabrant</div>
<div style="text-align: center; font-size: 14px;">@chrishildabrant</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Chris Hildabrant.
| Data | Chris Hildabrant |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 243 |
| Tweets kept | 3007 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3dagd4ww/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @chrishildabrant's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ctoe6ys) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ctoe6ys/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/chrishildabrant')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
GItaf/bart-base-finetuned-mbti
|
GItaf
| 2022-08-29T17:08:37Z | 17 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-28T15:05:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-base-finetuned-mbti
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-mbti
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
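Pending fuller documentation, a minimal loading sketch. It assumes the checkpoint keeps BART's standard encoder-decoder generation head; the input sentence is illustrative, since the exact MBTI-related task is not described:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumes the checkpoint exposes BART's usual sequence-to-sequence head.
model_id = "GItaf/bart-base-finetuned-mbti"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Illustrative input; the task and expected output format are undocumented.
inputs = tokenizer("I enjoy quiet evenings and long books.", return_tensors="pt")
generated = model.generate(**inputs, max_length=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```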
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0025 | 1.0 | 9920 | 0.0000 |
| 0.0005 | 2.0 | 19840 | 0.0000 |
| 0.0002 | 3.0 | 29760 | 0.0000 |
| 0.0001 | 4.0 | 39680 | 0.0000 |
| 0.0001 | 5.0 | 49600 | 0.0000 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
huggingtweets/actbrigitte
|
huggingtweets
| 2022-08-29T16:46:55Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T16:45:33Z |
---
language: en
thumbnail: http://www.huggingtweets.com/actbrigitte/1661791610963/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1001845274476797954/TbklBZ1r_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Brigitte Gabriel</div>
<div style="text-align: center; font-size: 14px;">@actbrigitte</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Brigitte Gabriel.
| Data | Brigitte Gabriel |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 716 |
| Short tweets | 105 |
| Tweets kept | 2429 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/w0rkndg8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @actbrigitte's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2jtfv41h) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2jtfv41h/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/actbrigitte')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
cemilcelik/distilgpt2_pubmed
|
cemilcelik
| 2022-08-29T16:34:51Z | 157 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-08-29T13:16:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2_pubmed
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2_pubmed
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8745
## Model description
More information needed
## Intended uses & limitations
More information needed
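A minimal text-generation sketch. The biomedical flavour of the prompt is only an assumption based on the model name, since the training data is not documented:
```python
from transformers import pipeline

# DistilGPT-2 fine-tune; the prompt below is illustrative, and its PubMed-style
# wording is a guess from the model name rather than from documented training data.
generator = pipeline("text-generation", model="cemilcelik/distilgpt2_pubmed")
result = generator("Recent studies on insulin resistance show", max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```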
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7569 | 1.0 | 528 | 2.0859 |
| 2.1098 | 2.0 | 1056 | 1.9187 |
| 2.0058 | 3.0 | 1584 | 1.8745 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cpu
- Datasets 2.1.0
- Tokenizers 0.12.1
|
Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news-on-extractive
|
Atharvgarg
| 2022-08-29T14:30:47Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-29T13:43:13Z |
---
license: apache-2.0
tags:
- summarisation
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-xsum-6-6-finetuned-bbc-news-on-extractive
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-xsum-6-6-finetuned-bbc-news-on-extractive
This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/sshleifer/distilbart-xsum-6-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5869
- Rouge1: 39.4885
- Rouge2: 31.7487
- Rougel: 31.9013
- Rougelsum: 34.0825
## Model description
More information needed
## Intended uses & limitations
More information needed
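A minimal summarization sketch. The model name suggests BBC-news-style articles as input, but that is an assumption; the article text below is a stand-in:
```python
from transformers import pipeline

# DistilBART summarizer; the article is a stand-in for a real news story.
summarizer = pipeline(
    "summarization",
    model="Atharvgarg/distilbart-xsum-6-6-finetuned-bbc-news-on-extractive",
)

article = (
    "The city council approved a new cycling plan on Tuesday, promising twenty "
    "kilometres of protected lanes within two years and a review of junction safety."
)
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```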
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.4649 | 1.0 | 445 | 1.5047 | 39.1053 | 31.6651 | 32.3242 | 33.9332 |
| 1.2224 | 2.0 | 890 | 1.4986 | 39.4115 | 31.7894 | 32.1057 | 34.0454 |
| 1.0099 | 3.0 | 1335 | 1.5322 | 39.5936 | 31.9984 | 32.2283 | 34.1798 |
| 0.8687 | 4.0 | 1780 | 1.5869 | 39.4885 | 31.7487 | 31.9013 | 34.0825 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
mayjul/t5-small-finetuned-xsum
|
mayjul
| 2022-08-29T11:52:46Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-08-28T14:36:56Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
metrics:
- rouge
model-index:
- name: t5-small-finetuned-xsum
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: xsum
type: xsum
config: default
split: train
args: default
metrics:
- name: Rouge1
type: rouge
value: 28.2727
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4789
- Rouge1: 28.2727
- Rouge2: 7.7068
- Rougel: 22.1993
- Rougelsum: 22.2071
- Gen Len: 18.8238
## Model description
More information needed
## Intended uses & limitations
More information needed
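A minimal summarization sketch. The summarization pipeline is expected to apply T5's "summarize: " task prefix from the inherited t5-small config, though that carried-over config is an assumption; the input text is illustrative:
```python
from transformers import pipeline

# t5-small fine-tuned on XSum; the pipeline should add the "summarize: " prefix
# from the base model's config automatically (an assumption about this checkpoint).
summarizer = pipeline("summarization", model="mayjul/t5-small-finetuned-xsum")

text = (
    "The company reported a sharp rise in quarterly profits, which it attributed "
    "to stronger overseas sales and lower energy costs."
)
print(summarizer(text, max_length=40, min_length=5)[0]["summary_text"])
```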
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.7189 | 1.0 | 12753 | 2.4789 | 28.2727 | 7.7068 | 22.1993 | 22.2071 | 18.8238 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
muhtasham/bert-small-finer
|
muhtasham
| 2022-08-29T11:42:58Z | 163 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-08-28T21:44:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6627
## Model description
More information needed
## Intended uses & limitations
More information needed
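A minimal fill-mask sketch (the example sentence is illustrative; its financial flavour is only a guess from the model name):
```python
from transformers import pipeline

# Small BERT masked-language model; [MASK] is BERT's mask token.
unmasker = pipeline("fill-mask", model="muhtasham/bert-small-finer")

# Illustrative sentence; the finance-flavoured wording is a guess from the model name.
for prediction in unmasker("The company reported a net [MASK] for the quarter."):
    print(f"{prediction['token_str']}\t{prediction['score']:.3f}")
```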
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 20
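These settings map onto the `transformers` `TrainingArguments` roughly as follows. This is a sketch only; the dataset, tokenizer, data collator, and model instantiation are omitted because they are not documented:
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; everything not listed there is
# undocumented and therefore left out of this sketch.
training_args = TrainingArguments(
    output_dir="results",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=8,   # 8 x 8 = effective train batch size of 64
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=20,
)
```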
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 0.49 | 500 | 3.5536 |
| 3.752 | 0.97 | 1000 | 3.0406 |
| 3.752 | 1.46 | 1500 | 2.7601 |
| 2.6844 | 1.94 | 2000 | 2.5655 |
| 2.6844 | 2.43 | 2500 | 2.4174 |
| 2.3487 | 2.91 | 3000 | 2.3163 |
| 2.3487 | 3.4 | 3500 | 2.2146 |
| 2.1554 | 3.89 | 4000 | 2.1560 |
| 2.1554 | 4.37 | 4500 | 2.0935 |
| 2.019 | 4.86 | 5000 | 2.0375 |
| 2.019 | 5.34 | 5500 | 1.9942 |
| 1.9254 | 5.83 | 6000 | 1.9530 |
| 1.9254 | 6.32 | 6500 | 1.9215 |
| 1.8506 | 6.8 | 7000 | 1.8908 |
| 1.8506 | 7.29 | 7500 | 1.8693 |
| 1.793 | 7.77 | 8000 | 1.8399 |
| 1.793 | 8.26 | 8500 | 1.8191 |
| 1.7425 | 8.75 | 9000 | 1.8016 |
| 1.7425 | 9.23 | 9500 | 1.7760 |
| 1.7093 | 9.72 | 10000 | 1.7668 |
| 1.7093 | 10.2 | 10500 | 1.7474 |
| 1.6754 | 10.69 | 11000 | 1.7365 |
| 1.6754 | 11.18 | 11500 | 1.7229 |
| 1.6501 | 11.66 | 12000 | 1.7145 |
| 1.6501 | 12.15 | 12500 | 1.7029 |
| 1.633 | 12.63 | 13000 | 1.6965 |
| 1.633 | 13.12 | 13500 | 1.6878 |
| 1.6153 | 13.61 | 14000 | 1.6810 |
| 1.6153 | 14.09 | 14500 | 1.6775 |
| 1.6043 | 14.58 | 15000 | 1.6720 |
| 1.6043 | 15.06 | 15500 | 1.6719 |
| 1.5942 | 15.55 | 16000 | 1.6602 |
| 1.5942 | 16.03 | 16500 | 1.6643 |
| 1.5869 | 16.52 | 17000 | 1.6632 |
| 1.5869 | 17.01 | 17500 | 1.6551 |
| 1.5834 | 17.49 | 18000 | 1.6557 |
| 1.5834 | 17.98 | 18500 | 1.6561 |
| 1.5755 | 18.46 | 19000 | 1.6620 |
| 1.5755 | 18.95 | 19500 | 1.6524 |
| 1.5823 | 19.44 | 20000 | 1.6536 |
| 1.5823 | 19.92 | 20500 | 1.6627 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
|
StefanSteib/Photographer
|
StefanSteib
| 2022-08-29T11:27:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-08-29T11:26:32Z |
Carry plenty of cameras
black clothes
|