modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 distinct values) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars) |
---|---|---|---|---|---|---|
Declan/HuffPost_model_v3 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_sst2_int8_xml
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE SST2
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9036697247706422
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sst2_int8_xml
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the GLUE SST2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4463
- Accuracy: 0.9037
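As a quick sanity check, the checkpoint can be queried through the `transformers` text-classification pipeline, assuming the exported INT8 model loads as a standard `transformers` checkpoint. A minimal sketch (the model id below is a placeholder; substitute the actual repository path):
```python
from transformers import pipeline

# Placeholder model id; substitute the actual repository path.
classifier = pipeline("text-classification", model="distilbert_sst2_int8_xml")
print(classifier("This movie was an absolute delight."))
```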
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
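These settings map directly onto the `transformers` `TrainingArguments`; a minimal sketch of the reported configuration (model and dataset wiring omitted, `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# Sketch of the reported configuration; model and dataset wiring omitted.
args = TrainingArguments(
    output_dir="distilbert_sst2_int8_xml",  # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```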
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.9.1+cu111
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Declan/WallStreetJournal_model_v4 | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
tags:
- generated_from_trainer
model-index:
- name: SciBERT-WIKI_Lifecycle_Finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SciBERT-WIKI_Lifecycle_Finetuned
This model is a fine-tuned version of [allenai/scibert_scivocab_uncased](https://huggingface.co/allenai/scibert_scivocab_uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1142
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0933 | 1.0 | 2082 | 0.1159 |
| 0.0782 | 2.0 | 4164 | 0.0935 |
| 0.0442 | 3.0 | 6246 | 0.1142 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Declan/test_push | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: other
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-handwritten-io
metrics:
- accuracy
model-index:
- name: dalio-handwritten-io-1.3b
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-handwritten-io
type: AlekseyKorshuk/dalio-handwritten-io
metrics:
- name: Accuracy
type: accuracy
value: 0.06143479984145858
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dalio-handwritten-io-1.3b
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the AlekseyKorshuk/dalio-handwritten-io dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.0614
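For causal language models, the reported accuracy is typically token-level next-token accuracy, which explains the small value. A minimal generation sketch with the `transformers` pipeline (the model id is a placeholder; substitute the actual checkpoint):
```python
from transformers import pipeline

# Placeholder repository path; substitute the actual fine-tuned checkpoint.
generator = pipeline("text-generation", model="AlekseyKorshuk/dalio-handwritten-io-1.3b")
print(generator("The most important principle is", max_new_tokens=40)[0]["generated_text"])
```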
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.9219 | 0.1 | 1 | 2.6484 | 0.0529 |
| 2.6938 | 0.2 | 2 | 2.6484 | 0.0529 |
| 2.6365 | 0.3 | 3 | 2.5508 | 0.0560 |
| 2.5088 | 0.4 | 4 | 2.5332 | 0.0562 |
| 2.7307 | 0.5 | 5 | 2.5176 | 0.0565 |
| 2.969 | 0.6 | 6 | 2.4941 | 0.0571 |
| 2.7283 | 0.7 | 7 | 2.4883 | 0.0567 |
| 2.6157 | 0.8 | 8 | 2.4766 | 0.0578 |
| 2.6406 | 0.9 | 9 | 2.4590 | 0.0583 |
| 2.5701 | 1.0 | 10 | 2.4375 | 0.0587 |
| 2.2017 | 1.1 | 11 | 2.4238 | 0.0587 |
| 2.0039 | 1.2 | 12 | 2.4219 | 0.0586 |
| 1.8981 | 1.3 | 13 | 2.4160 | 0.0589 |
| 1.7683 | 1.4 | 14 | 2.4160 | 0.0595 |
| 1.6746 | 1.5 | 15 | 2.4121 | 0.0600 |
| 1.8051 | 1.6 | 16 | 2.4102 | 0.0600 |
| 2.0457 | 1.7 | 17 | 2.4043 | 0.0602 |
| 1.8257 | 1.8 | 18 | 2.4004 | 0.0606 |
| 1.744 | 1.9 | 19 | 2.3887 | 0.0607 |
| 1.8232 | 2.0 | 20 | 2.3887 | 0.0607 |
| 1.4741 | 2.1 | 21 | 2.3828 | 0.0610 |
| 1.651 | 2.2 | 22 | 2.3770 | 0.0608 |
| 1.3732 | 2.3 | 23 | 2.3730 | 0.0610 |
| 1.3151 | 2.4 | 24 | 2.3730 | 0.0610 |
| 1.5302 | 2.5 | 25 | 2.3730 | 0.0610 |
| 1.2539 | 2.6 | 26 | 2.375 | 0.0612 |
| 1.6211 | 2.7 | 27 | 2.3770 | 0.0612 |
| 1.6047 | 2.8 | 28 | 2.3770 | 0.0613 |
| 1.1953 | 2.9 | 29 | 2.3789 | 0.0614 |
| 1.1621 | 3.0 | 30 | 2.3789 | 0.0614 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
DeepChem/ChemBERTa-10M-MLM | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 90 | 2022-11-10T12:15:38Z | A magnificent and ancient Blue ice cave at the edge of the known universe in a reflective pond of cosmic stars, cinematic, atmospheric, 8K, mystical, dynamic lighting, landscape photography by Marc Adamus, |
DeepChem/ChemBERTa-10M-MTR | [
"pytorch",
"roberta",
"arxiv:1910.09700",
"transformers"
]
| null | {
"architectures": [
"RobertaForRegression"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 708 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/sbe_sus/1668084101960/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1579111637973336071/MkdCeTeX_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">sberto.eth 📈</div>
<div style="text-align: center; font-size: 14px;">@sbe_sus</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from sberto.eth 📈.
| Data | sberto.eth 📈 |
| --- | --- |
| Tweets downloaded | 1273 |
| Retweets | 648 |
| Short tweets | 221 |
| Tweets kept | 404 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1rwjbirb/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @sbe_sus's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ejp5m2v) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ejp5m2v/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/sbe_sus')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DeskDown/MarianMixFT_en-fil | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: bigscience-openrail-m
---
A Bloom-1b7 model fine-tuned on Bloom-175b-generated data to extract actionable points from emails. |
DevsIA/Devs_IA | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
---
### DISTILBERT RUNNING ON [DEEPSPARSE](https://github.com/neuralmagic/deepsparse) GOES BRHMMMMMMMM. 🚀🚀🚀
This model is 👇
███████╗ ██████╗ █████╗ ██████╗ ███████╗ ███████╗
██╔════╝ ██╔══██╗ ██╔══██╗ ██╔══██╗ ██╔════╝ ██╔════╝
███████╗ ██████╔╝ ███████║ ██████╔╝ ███████╗ █████╗
╚════██║ ██╔═══╝ ██╔══██║ ██╔══██╗ ╚════██║ ██╔══╝
███████║ ██║ ██║ ██║ ██║ ██║ ███████║ ███████╗
╚══════╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚═╝ ╚══════╝ ╚══════╝

### LOOKS LIKE THIS 👇

### On Inference Endpoints, average latency (excluding outliers) is 4 ms on 2 vCPUs:

### Handler for access to inference endpoints
```python
from time import perf_counter
from typing import Any, Dict

from deepsparse import Pipeline


class EndpointHandler:
    def __init__(self, path=""):
        # Build a sparse text-classification pipeline from the deployment directory.
        self.pipeline = Pipeline.create(task="text-classification", model_path=path)

    def __call__(self, data: Dict[str, Any]) -> Dict[str, str]:
        """
        Args:
            data (:obj:): prediction input text
        """
        inputs = data.pop("inputs", data)
        # Time the forward pass so the response reports per-request latency.
        start = perf_counter()
        prediction = self.pipeline(inputs)
        end = perf_counter()
        latency = end - start
        return {
            "labels": prediction.labels,
            "scores": prediction.scores,
            "latency (secs.)": latency,
        }
```
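A minimal local smoke test of the handler above (the deployment path is a placeholder and assumes a DeepSparse-compatible model export):
```python
# Placeholder deployment directory; substitute a real model export.
handler = EndpointHandler(path="./deployment")
response = handler({"inputs": "DeepSparse on CPUs is seriously fast."})
print(response["labels"], response["scores"], response["latency (secs.)"])
```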
̷͈̍
̵̳͒R̶̙̓i̸̟͘c̴̻̆k̸̑͜ÿ̷̳́
̸̪̚
̷͖̀ |
Waynehillsdev/Waynehills_summary_tensorflow | [
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2022-11-10T17:51:20Z | #!/usr/bin/env python3
from diffusers import DiffusionPipeline
import PIL
import requests
from io import BytesIO
import torch
def download_image(url):
response = requests.get(url)
return PIL.Image.open(BytesIO(response.content)).convert("RGB")
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", custom_pipeline="stable_diffusion_mega", torch_dtype=torch.float16, revision="fp16")
pipe.to("cuda")
pipe.enable_attention_slicing()
### Text-to-Image
images = pipe.text2img("An astronaut riding a horse").images
### Image-to-Image
init_image = download_image("https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg")
prompt = "A fantasy landscape, trending on artstation"
images = pipe.img2img(prompt=prompt, init_image=init_image, strength=0.75, guidance_scale=7.5).images
### Inpainting
img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
prompt = "a cat sitting on a bench"
images = pipe.inpaint(prompt=prompt, init_image=init_image, mask_image=mask_image, strength=0.75).images |
DoyyingFace/bert-asian-hate-tweets-concat-clean | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 25 | 2022-11-10T19:50:38Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librimix
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/simpleoier_librimix_asr_train_asr_transformer_multispkr_raw_en_char_sp`
This model was trained by simpleoier using librimix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 28695114f2771ac3d2a9cc0b5fb30a2c3262e49a
pip install -e .
cd egs2/librimix/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librimix_asr_train_asr_transformer_multispkr_raw_en_char_sp
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Nov 10 14:58:09 EST 2022`
- python version: `3.9.13 (main, Aug 25 2022, 23:26:10) [GCC 11.2.0]`
- espnet version: `espnet 202209`
- pytorch version: `pytorch 1.12.1`
- Git hash: `b3c185d5d707bb385b74f42df2cc59bcf7d7e754`
- Commit date: `Wed Nov 9 22:00:30 2022 -0500`
## asr_train_asr_transformer_multispkr_raw_en_char_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_multi_asrtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/test|6000|111243|80.4|17.4|2.2|3.8|23.5|88.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_multi_asrtrue_lm_lm_train_lm_transformer_en_char_valid.loss.ave_asr_model_valid.acc.ave/test|6000|590408|90.5|6.1|3.5|3.9|13.5|88.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transformer_multispkr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_multispkr_raw_en_char_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 45
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 5000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_char_sp/train/speech_shape
- exp/asr_stats_raw_en_char_sp/train/text_shape.char
- exp/asr_stats_raw_en_char_sp/train/text_spk2_shape.char
valid_shape_file:
- exp/asr_stats_raw_en_char_sp/valid/speech_shape
- exp/asr_stats_raw_en_char_sp/valid/text_shape.char
- exp/asr_stats_raw_en_char_sp/valid/text_spk2_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- sound
- - dump/raw/train_sp/text_spk1
- text
- text
- - dump/raw/train_sp/text_spk2
- text_spk2
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text_spk1
- text
- text
- - dump/raw/dev/text_spk2
- text_spk2
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- <space>
- E
- T
- A
- O
- N
- I
- H
- S
- R
- D
- L
- U
- M
- C
- W
- F
- G
- Y
- P
- B
- V
- K
- ''''
- X
- J
- Q
- Z
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
reduce: false
joint_net_conf: null
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: default
frontend_conf:
fs: 16k
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_char_sp/train/feats_stats.npz
model: pit_espnet
model_conf:
ctc_weight: 0.2
lsm_weight: 0.1
length_normalized_loss: false
num_inf: 2
num_ref: 2
preencoder: null
preencoder_conf: {}
encoder: transformer_multispkr
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 8
num_blocks_sd: 4
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
num_inf: 2
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
preprocessor: multi
preprocessor_conf:
text_name:
- text
- text_spk2
required:
- output_dir
- token_list
version: '202209'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
albert-base-v1 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 38,156 | 2022-11-10T19:57:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad-a4-q3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-a4-q3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7767
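A minimal extractive question-answering sketch with the `transformers` pipeline (the model id is a placeholder; substitute the actual repository path):
```python
from transformers import pipeline

# Placeholder repository path; substitute the actual checkpoint.
qa = pipeline("question-answering", model="distilbert-base-uncased-finetuned-squad-a4-q3")
result = qa(question="What was the model fine-tuned on?",
            context="The model was fine-tuned on the SQuAD dataset for four epochs.")
print(result["answer"], result["score"])
```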
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 17
- eval_batch_size: 17
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8135 | 1.0 | 516 | 1.9304 |
| 1.4214 | 2.0 | 1032 | 1.7047 |
| 1.0682 | 3.0 | 1548 | 1.7341 |
| 0.8492 | 4.0 | 2064 | 1.7767 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
albert-base-v2 | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,785,283 | 2022-11-10T19:59:40Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-all-io
metrics:
- accuracy
model-index:
- name: dalio-all-io-1.3b
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-all-io
type: AlekseyKorshuk/dalio-all-io
metrics:
- name: Accuracy
type: accuracy
value: 0.05582538140677676
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dalio-all-io-1.3b
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the AlekseyKorshuk/dalio-all-io dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3652
- Accuracy: 0.0558
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6543 | 0.03 | 1 | 2.6113 | 0.0513 |
| 2.6077 | 0.07 | 2 | 2.6113 | 0.0513 |
| 2.5964 | 0.1 | 3 | 2.5605 | 0.0519 |
| 2.7302 | 0.14 | 4 | 2.5234 | 0.0527 |
| 2.7 | 0.17 | 5 | 2.5078 | 0.0528 |
| 2.5674 | 0.21 | 6 | 2.4941 | 0.0532 |
| 2.6406 | 0.24 | 7 | 2.4883 | 0.0534 |
| 2.5315 | 0.28 | 8 | 2.4805 | 0.0536 |
| 2.7202 | 0.31 | 9 | 2.4727 | 0.0537 |
| 2.5144 | 0.34 | 10 | 2.4648 | 0.0536 |
| 2.4983 | 0.38 | 11 | 2.4512 | 0.0537 |
| 2.7029 | 0.41 | 12 | 2.4414 | 0.0539 |
| 2.5198 | 0.45 | 13 | 2.4336 | 0.0540 |
| 2.5706 | 0.48 | 14 | 2.4258 | 0.0545 |
| 2.5688 | 0.52 | 15 | 2.4180 | 0.0548 |
| 2.3793 | 0.55 | 16 | 2.4102 | 0.0552 |
| 2.4785 | 0.59 | 17 | 2.4043 | 0.0554 |
| 2.4688 | 0.62 | 18 | 2.3984 | 0.0553 |
| 2.5674 | 0.66 | 19 | 2.3984 | 0.0553 |
| 2.5054 | 0.69 | 20 | 2.3945 | 0.0554 |
| 2.452 | 0.72 | 21 | 2.3887 | 0.0555 |
| 2.5999 | 0.76 | 22 | 2.3828 | 0.0556 |
| 2.3665 | 0.79 | 23 | 2.3789 | 0.0556 |
| 2.6223 | 0.83 | 24 | 2.375 | 0.0557 |
| 2.3562 | 0.86 | 25 | 2.3711 | 0.0557 |
| 2.429 | 0.9 | 26 | 2.3691 | 0.0557 |
| 2.563 | 0.93 | 27 | 2.3672 | 0.0558 |
| 2.4573 | 0.97 | 28 | 2.3652 | 0.0558 |
| 2.4883 | 1.0 | 29 | 2.3652 | 0.0558 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
albert-large-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 687 | 2022-11-10T20:11:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-53-espeak-cv-ft-evn3-ntsema-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.97
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-evn3-ntsema-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5004
- Wer: 0.97
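A minimal inference sketch with the `transformers` ASR pipeline (the model id and audio file are placeholders; note the base model emits phoneme-level output, so the transcription may be phonetic):
```python
from transformers import pipeline

# Placeholder repository path and audio file; substitute real ones.
asr = pipeline("automatic-speech-recognition",
               model="wav2vec2-xlsr-53-espeak-cv-ft-evn3-ntsema-colab")
print(asr("sample.wav")["text"])
```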
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8078 | 7.14 | 400 | 1.3558 | 0.9933 |
| 0.7854 | 14.28 | 800 | 1.2786 | 0.98 |
| 0.3685 | 21.43 | 1200 | 1.4606 | 0.9733 |
| 0.1912 | 28.57 | 1600 | 1.5004 | 0.97 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
albert-large-v2 | [
"pytorch",
"tf",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26,792 | 2022-11-10T20:16:12Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wl
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-clinical-wl-es-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wl
type: wl
config: WL
split: train
args: WL
metrics:
- name: Precision
type: precision
value: 0.6865079365079365
- name: Recall
type: recall
value: 0.7355442176870748
- name: F1
type: f1
value: 0.7101806239737274
- name: Accuracy
type: accuracy
value: 0.8267950260730044
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-clinical-wl-es-finetuned-ner
This model is a fine-tuned version of [plncmm/roberta-clinical-wl-es](https://huggingface.co/plncmm/roberta-clinical-wl-es) on the wl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6227
- Precision: 0.6865
- Recall: 0.7355
- F1: 0.7102
- Accuracy: 0.8268
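A minimal token-classification sketch (the model id is a placeholder; since the base model is a Spanish clinical RoBERTa, a Spanish example is used):
```python
from transformers import pipeline

# Placeholder repository path; substitute the actual checkpoint.
ner = pipeline("token-classification",
               model="roberta-clinical-wl-es-finetuned-ner",
               aggregation_strategy="simple")
print(ner("Paciente con dolor abdominal y fiebre de tres días."))
```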
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.028 | 1.0 | 500 | 0.6870 | 0.6558 | 0.6855 | 0.6703 | 0.8035 |
| 0.5923 | 2.0 | 1000 | 0.6248 | 0.6851 | 0.7235 | 0.7038 | 0.8244 |
| 0.4928 | 3.0 | 1500 | 0.6227 | 0.6865 | 0.7355 | 0.7102 | 0.8268 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
albert-xlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 341 | 2022-11-10T20:29:53Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-all-io
metrics:
- accuracy
model-index:
- name: dalio-all-io-1.3b-2-epoch
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-all-io
type: AlekseyKorshuk/dalio-all-io
metrics:
- name: Accuracy
type: accuracy
value: 0.057553854065481976
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dalio-all-io-1.3b-2-epoch
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the AlekseyKorshuk/dalio-all-io dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2949
- Accuracy: 0.0576
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6543 | 0.03 | 1 | 2.6113 | 0.0513 |
| 2.6077 | 0.07 | 2 | 2.6113 | 0.0513 |
| 2.5964 | 0.1 | 3 | 2.5605 | 0.0519 |
| 2.7302 | 0.14 | 4 | 2.5234 | 0.0527 |
| 2.7002 | 0.17 | 5 | 2.5078 | 0.0529 |
| 2.5674 | 0.21 | 6 | 2.4941 | 0.0533 |
| 2.6399 | 0.24 | 7 | 2.4883 | 0.0534 |
| 2.533 | 0.28 | 8 | 2.4805 | 0.0536 |
| 2.7202 | 0.31 | 9 | 2.4746 | 0.0536 |
| 2.5137 | 0.34 | 10 | 2.4648 | 0.0534 |
| 2.499 | 0.38 | 11 | 2.4512 | 0.0536 |
| 2.7026 | 0.41 | 12 | 2.4414 | 0.0539 |
| 2.5254 | 0.45 | 13 | 2.4336 | 0.0543 |
| 2.5667 | 0.48 | 14 | 2.4238 | 0.0545 |
| 2.5715 | 0.52 | 15 | 2.4160 | 0.0548 |
| 2.3739 | 0.55 | 16 | 2.4102 | 0.0550 |
| 2.4756 | 0.59 | 17 | 2.4043 | 0.0549 |
| 2.4783 | 0.62 | 18 | 2.3984 | 0.0550 |
| 2.5665 | 0.66 | 19 | 2.3906 | 0.0549 |
| 2.4888 | 0.69 | 20 | 2.3906 | 0.0549 |
| 2.4476 | 0.72 | 21 | 2.3828 | 0.0550 |
| 2.604 | 0.76 | 22 | 2.375 | 0.0552 |
| 2.3416 | 0.79 | 23 | 2.3652 | 0.0554 |
| 2.6028 | 0.83 | 24 | 2.3555 | 0.0555 |
| 2.3425 | 0.86 | 25 | 2.3477 | 0.0558 |
| 2.4142 | 0.9 | 26 | 2.3398 | 0.0558 |
| 2.5317 | 0.93 | 27 | 2.3340 | 0.0559 |
| 2.4119 | 0.97 | 28 | 2.3301 | 0.0561 |
| 2.4048 | 1.0 | 29 | 2.3262 | 0.0563 |
| 1.9646 | 1.03 | 30 | 2.3242 | 0.0564 |
| 1.9233 | 1.07 | 31 | 2.3203 | 0.0563 |
| 1.9276 | 1.1 | 32 | 2.3203 | 0.0564 |
| 1.8702 | 1.14 | 33 | 2.3281 | 0.0565 |
| 2.0997 | 1.17 | 34 | 2.3340 | 0.0565 |
| 1.7943 | 1.21 | 35 | 2.3320 | 0.0568 |
| 1.8579 | 1.24 | 36 | 2.3242 | 0.0567 |
| 1.8844 | 1.28 | 37 | 2.3145 | 0.0568 |
| 1.9288 | 1.31 | 38 | 2.3086 | 0.0569 |
| 1.6616 | 1.34 | 39 | 2.3047 | 0.0570 |
| 1.6443 | 1.38 | 40 | 2.3047 | 0.0571 |
| 1.7616 | 1.41 | 41 | 2.3027 | 0.0572 |
| 1.7904 | 1.45 | 42 | 2.3027 | 0.0571 |
| 1.8762 | 1.48 | 43 | 2.3027 | 0.0573 |
| 1.6569 | 1.52 | 44 | 2.3027 | 0.0573 |
| 1.647 | 1.55 | 45 | 2.3027 | 0.0573 |
| 1.8168 | 1.59 | 46 | 2.3027 | 0.0574 |
| 1.7194 | 1.62 | 47 | 2.3027 | 0.0573 |
| 1.7667 | 1.66 | 48 | 2.3027 | 0.0572 |
| 1.7621 | 1.69 | 49 | 2.3027 | 0.0573 |
| 1.7269 | 1.72 | 50 | 2.3008 | 0.0573 |
| 1.7815 | 1.76 | 51 | 2.3008 | 0.0574 |
| 1.8318 | 1.79 | 52 | 2.2988 | 0.0574 |
| 1.9366 | 1.83 | 53 | 2.2988 | 0.0575 |
| 1.736 | 1.86 | 54 | 2.2969 | 0.0576 |
| 1.9984 | 1.9 | 55 | 2.2969 | 0.0575 |
| 1.7203 | 1.93 | 56 | 2.2949 | 0.0575 |
| 1.7391 | 1.97 | 57 | 2.2949 | 0.0576 |
| 1.6611 | 2.0 | 58 | 2.2949 | 0.0576 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
albert-xlarge-v2 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,973 | 2022-11-10T20:35:15Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_2000_custom_architecture
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_custom_architecture
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0153
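Since the base model is RoBERTa, the checkpoint should expose a standard fill-mask interface; a minimal sketch (the model id is a placeholder, the `<mask>` token assumes the RoBERTa convention, and the English example is illustrative only):
```python
from transformers import pipeline

# Placeholder repository path; substitute the actual checkpoint.
fill = pipeline("fill-mask", model="BERiT_2000_custom_architecture")
for candidate in fill("The scribes copied the <mask> carefully."):
    print(candidate["token_str"], round(candidate["score"], 4))
```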
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 16.6991 | 0.19 | 500 | 8.9825 |
| 8.259 | 0.39 | 1000 | 7.5650 |
| 7.3895 | 0.58 | 1500 | 7.1084 |
| 7.0328 | 0.77 | 2000 | 6.8799 |
| 6.8743 | 0.97 | 2500 | 6.7598 |
| 6.7775 | 1.16 | 3000 | 6.5915 |
| 6.6348 | 1.36 | 3500 | 6.4513 |
| 6.5759 | 1.55 | 4000 | 6.3394 |
| 6.5243 | 1.74 | 4500 | 6.3336 |
| 6.4492 | 1.94 | 5000 | 6.2714 |
| 6.4472 | 2.13 | 5500 | 6.2921 |
| 6.4283 | 2.32 | 6000 | 6.1922 |
| 6.3508 | 2.52 | 6500 | 6.2112 |
| 6.3838 | 2.71 | 7000 | 6.1727 |
| 6.3303 | 2.9 | 7500 | 6.2093 |
| 6.3067 | 3.1 | 8000 | 6.1984 |
| 6.3099 | 3.29 | 8500 | 6.1589 |
| 6.2806 | 3.49 | 9000 | 6.1732 |
| 6.2861 | 3.68 | 9500 | 6.1257 |
| 6.2645 | 3.87 | 10000 | 6.1655 |
| 6.2992 | 4.07 | 10500 | 6.1156 |
| 6.2331 | 4.26 | 11000 | 6.1212 |
| 6.2247 | 4.45 | 11500 | 6.1991 |
| 6.2235 | 4.65 | 12000 | 6.1181 |
| 6.2354 | 4.84 | 12500 | 6.1469 |
| 6.2157 | 5.03 | 13000 | 6.1170 |
| 6.2076 | 5.23 | 13500 | 6.1128 |
| 6.2085 | 5.42 | 14000 | 6.1079 |
| 6.1917 | 5.62 | 14500 | 6.1511 |
| 6.1917 | 5.81 | 15000 | 6.1032 |
| 6.1887 | 6.0 | 15500 | 6.0877 |
| 6.1895 | 6.2 | 16000 | 6.0876 |
| 6.1685 | 6.39 | 16500 | 6.0734 |
| 6.1709 | 6.58 | 17000 | 6.1039 |
| 6.1442 | 6.78 | 17500 | 6.1347 |
| 6.126 | 6.97 | 18000 | 6.0571 |
| 6.1587 | 7.16 | 18500 | 6.0808 |
| 6.1349 | 7.36 | 19000 | 5.9921 |
| 6.1487 | 7.55 | 19500 | 6.0548 |
| 6.1362 | 7.75 | 20000 | 6.0746 |
| 6.1581 | 7.94 | 20500 | 6.0689 |
| 6.1225 | 8.13 | 21000 | 6.0916 |
| 6.1233 | 8.33 | 21500 | 6.0504 |
| 6.1192 | 8.52 | 22000 | 6.0630 |
| 6.0843 | 8.71 | 22500 | 6.0927 |
| 6.1144 | 8.91 | 23000 | 6.0464 |
| 6.1012 | 9.1 | 23500 | 6.0872 |
| 6.1118 | 9.3 | 24000 | 6.0153 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
albert-xxlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7,091 | null | Access to model luanverissimo/luanverissimo is restricted and you are not in the authorized list. Visit https://huggingface.co/luanverissimo/luanverissimo to ask for access. |
bert-base-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8,621,271 | 2022-11-10T20:52:55Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- AlekseyKorshuk/dalio-all-io
metrics:
- accuracy
model-index:
- name: dalio-all-io-1.3b-3-epoch
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: AlekseyKorshuk/dalio-all-io
type: AlekseyKorshuk/dalio-all-io
metrics:
- name: Accuracy
type: accuracy
value: 0.05841094794583167
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dalio-all-io-1.3b-3-epoch
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the AlekseyKorshuk/dalio-all-io dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3008
- Accuracy: 0.0584
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6543 | 0.03 | 1 | 2.6113 | 0.0513 |
| 2.6077 | 0.07 | 2 | 2.6113 | 0.0513 |
| 2.5964 | 0.1 | 3 | 2.5605 | 0.0519 |
| 2.7302 | 0.14 | 4 | 2.5234 | 0.0526 |
| 2.7004 | 0.17 | 5 | 2.5078 | 0.0529 |
| 2.5681 | 0.21 | 6 | 2.4941 | 0.0532 |
| 2.6404 | 0.24 | 7 | 2.4883 | 0.0534 |
| 2.5325 | 0.28 | 8 | 2.4805 | 0.0536 |
| 2.7205 | 0.31 | 9 | 2.4746 | 0.0536 |
| 2.5149 | 0.34 | 10 | 2.4648 | 0.0533 |
| 2.5017 | 0.38 | 11 | 2.4512 | 0.0535 |
| 2.7026 | 0.41 | 12 | 2.4395 | 0.0539 |
| 2.5259 | 0.45 | 13 | 2.4316 | 0.0543 |
| 2.563 | 0.48 | 14 | 2.4219 | 0.0546 |
| 2.5679 | 0.52 | 15 | 2.4141 | 0.0550 |
| 2.3701 | 0.55 | 16 | 2.4082 | 0.0551 |
| 2.4739 | 0.59 | 17 | 2.4082 | 0.0551 |
| 2.481 | 0.62 | 18 | 2.4023 | 0.0548 |
| 2.5795 | 0.66 | 19 | 2.3945 | 0.0549 |
| 2.4902 | 0.69 | 20 | 2.3867 | 0.0549 |
| 2.4509 | 0.72 | 21 | 2.3809 | 0.0551 |
| 2.6052 | 0.76 | 22 | 2.3730 | 0.0553 |
| 2.3323 | 0.79 | 23 | 2.3633 | 0.0555 |
| 2.5994 | 0.83 | 24 | 2.3555 | 0.0556 |
| 2.3347 | 0.86 | 25 | 2.3477 | 0.0556 |
| 2.421 | 0.9 | 26 | 2.3398 | 0.0559 |
| 2.5337 | 0.93 | 27 | 2.3359 | 0.0560 |
| 2.4102 | 0.97 | 28 | 2.3320 | 0.0563 |
| 2.4309 | 1.0 | 29 | 2.3262 | 0.0564 |
| 1.9305 | 1.03 | 30 | 2.3223 | 0.0564 |
| 1.8601 | 1.07 | 31 | 2.3203 | 0.0567 |
| 1.8682 | 1.1 | 32 | 2.3281 | 0.0564 |
| 1.8657 | 1.14 | 33 | 2.3535 | 0.0564 |
| 2.063 | 1.17 | 34 | 2.3398 | 0.0567 |
| 1.6443 | 1.21 | 35 | 2.3242 | 0.0568 |
| 1.7592 | 1.24 | 36 | 2.3164 | 0.0569 |
| 1.8981 | 1.28 | 37 | 2.3105 | 0.0569 |
| 1.9379 | 1.31 | 38 | 2.3047 | 0.0573 |
| 1.6008 | 1.34 | 39 | 2.3027 | 0.0574 |
| 1.595 | 1.38 | 40 | 2.3027 | 0.0575 |
| 1.7096 | 1.41 | 41 | 2.3027 | 0.0575 |
| 1.7245 | 1.45 | 42 | 2.3027 | 0.0576 |
| 1.795 | 1.48 | 43 | 2.3008 | 0.0577 |
| 1.7241 | 1.52 | 44 | 2.3008 | 0.0576 |
| 1.6356 | 1.55 | 45 | 2.2988 | 0.0576 |
| 1.77 | 1.59 | 46 | 2.2969 | 0.0576 |
| 1.6675 | 1.62 | 47 | 2.2930 | 0.0577 |
| 1.6929 | 1.66 | 48 | 2.2910 | 0.0577 |
| 1.6635 | 1.69 | 49 | 2.2910 | 0.0576 |
| 1.6093 | 1.72 | 50 | 2.2910 | 0.0578 |
| 1.7362 | 1.76 | 51 | 2.2891 | 0.0580 |
| 1.7015 | 1.79 | 52 | 2.2852 | 0.0581 |
| 1.9515 | 1.83 | 53 | 2.2812 | 0.0582 |
| 1.6494 | 1.86 | 54 | 2.2773 | 0.0580 |
| 1.7522 | 1.9 | 55 | 2.2734 | 0.0580 |
| 1.7369 | 1.93 | 56 | 2.2676 | 0.0581 |
| 1.6528 | 1.97 | 57 | 2.2637 | 0.0581 |
| 1.51 | 2.0 | 58 | 2.2617 | 0.0583 |
| 1.4579 | 2.03 | 59 | 2.2637 | 0.0585 |
| 1.2645 | 2.07 | 60 | 2.2695 | 0.0585 |
| 1.2424 | 2.1 | 61 | 2.2773 | 0.0584 |
| 1.2117 | 2.14 | 62 | 2.2891 | 0.0584 |
| 1.4059 | 2.17 | 63 | 2.3008 | 0.0580 |
| 1.328 | 2.21 | 64 | 2.3145 | 0.0581 |
| 1.3436 | 2.24 | 65 | 2.3281 | 0.0580 |
| 1.389 | 2.28 | 66 | 2.3379 | 0.0580 |
| 1.2127 | 2.31 | 67 | 2.3398 | 0.0580 |
| 1.3645 | 2.34 | 68 | 2.3418 | 0.0581 |
| 1.3389 | 2.38 | 69 | 2.3379 | 0.0581 |
| 1.2549 | 2.41 | 70 | 2.3320 | 0.0581 |
| 1.2193 | 2.45 | 71 | 2.3281 | 0.0582 |
| 1.3617 | 2.48 | 72 | 2.3223 | 0.0583 |
| 1.2336 | 2.52 | 73 | 2.3184 | 0.0583 |
| 1.179 | 2.55 | 74 | 2.3145 | 0.0583 |
| 1.2468 | 2.59 | 75 | 2.3125 | 0.0583 |
| 1.3325 | 2.62 | 76 | 2.3086 | 0.0583 |
| 1.1471 | 2.66 | 77 | 2.3066 | 0.0583 |
| 1.3123 | 2.69 | 78 | 2.3066 | 0.0583 |
| 1.3285 | 2.72 | 79 | 2.3047 | 0.0585 |
| 1.3232 | 2.76 | 80 | 2.3027 | 0.0584 |
| 1.1228 | 2.79 | 81 | 2.3027 | 0.0584 |
| 1.3524 | 2.83 | 82 | 2.3027 | 0.0584 |
| 1.2042 | 2.86 | 83 | 2.3027 | 0.0583 |
| 1.3588 | 2.9 | 84 | 2.3008 | 0.0583 |
| 1.2982 | 2.93 | 85 | 2.3008 | 0.0584 |
| 1.4373 | 2.97 | 86 | 2.3008 | 0.0585 |
| 1.3562 | 3.0 | 87 | 2.3008 | 0.0584 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
bert-base-german-dbmdz-uncased | [
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68,305 | 2022-11-10T21:22:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- wl
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: spanish-clinical-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wl
type: wl
config: WL
split: train
args: WL
metrics:
- name: Precision
type: precision
value: 0.6868542362104594
- name: Recall
type: recall
value: 0.7348639455782313
- name: F1
type: f1
value: 0.7100484758853013
- name: Accuracy
type: accuracy
value: 0.8262735659847573
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-clinical-ner
This model is a fine-tuned version of [plncmm/roberta-clinical-wl-es](https://huggingface.co/plncmm/roberta-clinical-wl-es) on the wl dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6181
- Precision: 0.6869
- Recall: 0.7349
- F1: 0.7100
- Accuracy: 0.8263
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.0283 | 1.0 | 500 | 0.6862 | 0.6690 | 0.6959 | 0.6822 | 0.8091 |
| 0.599 | 2.0 | 1000 | 0.6198 | 0.6856 | 0.7276 | 0.7059 | 0.8252 |
| 0.4973 | 3.0 | 1500 | 0.6181 | 0.6869 | 0.7349 | 0.7100 | 0.8263 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
bert-large-cased-whole-word-masking | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,316 | 2022-11-10T22:00:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6247
## Model description
More information needed
## Intended uses & limitations
More information needed
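Absent documented usage, a minimal extractive-QA sketch follows; the Hub id below is a placeholder, not a confirmed repo path:

```python
from transformers import pipeline

# Placeholder Hub id for this checkpoint.
qa = pipeline("question-answering", model="your-username/distilbert-base-uncased-finetuned-squad-1")

result = qa(
    question="What was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```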
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9872 | 1.0 | 554 | 1.7933 |
| 1.6189 | 2.0 | 1108 | 1.6159 |
| 1.3125 | 3.0 | 1662 | 1.6247 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
bert-large-uncased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 480,510 | 2022-11-10T22:01:43Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-original-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-original-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.965 | 1.0 | 554 | 1.8076 |
| 1.6215 | 2.0 | 1108 | 1.6230 |
| 1.298 | 3.0 | 1662 | 1.6427 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
distilbert-base-cased-distilled-squad | [
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"has_space"
]
| question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 257,745 | 2022-11-10T22:28:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6620
## Model description
More information needed
## Intended uses & limitations
More information needed
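For readers who prefer not to use the pipeline API, a lower-level sketch follows (the Hub id is again a placeholder); it decodes the highest-scoring answer span from the start/end logits:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Placeholder Hub id for this checkpoint.
model_id = "your-username/distilbert-base-uncased-finetuned-squad-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

question = "What dataset was used for fine-tuning?"
context = "The checkpoint was fine-tuned on the SQuAD question answering dataset."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Greedy span decoding: most likely start and end token positions.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```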
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5443 | 1.0 | 554 | 1.6070 |
| 1.2504 | 2.0 | 1108 | 1.5107 |
| 0.8091 | 3.0 | 1662 | 1.6620 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
distilroberta-base | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"roberta",
"fill-mask",
"en",
"dataset:openwebtext",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"exbert",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,342,240 | 2022-11-11T00:16:08Z | ---
tags:
- generated_from_trainer
model-index:
- name: chemical-bert-uncased-finetuned-cust-c1-cust
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chemical-bert-uncased-finetuned-cust-c1-cust
This model is a fine-tuned version of [shafin/chemical-bert-uncased-finetuned-cust](https://huggingface.co/shafin/chemical-bert-uncased-finetuned-cust) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5420
## Model description
More information needed
## Intended uses & limitations
More information needed
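Since this is a masked-language-model fine-tune, a fill-mask sketch is the natural usage example. The Hub id below is inferred from the base checkpoint's namespace and may not be where this model actually lives:

```python
from transformers import pipeline

# Namespace guessed from the base model (shafin/...); verify before relying on it.
fill = pipeline("fill-mask", model="shafin/chemical-bert-uncased-finetuned-cust-c1-cust")

# BERT-style models use the [MASK] token.
for pred in fill("Sodium chloride is a common [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```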
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.96 | 1.0 | 63 | 1.6719 |
| 1.7095 | 2.0 | 126 | 1.5305 |
| 1.5634 | 3.0 | 189 | 1.2972 |
| 1.4785 | 4.0 | 252 | 1.3354 |
| 1.3991 | 5.0 | 315 | 1.2542 |
| 1.3482 | 6.0 | 378 | 1.1870 |
| 1.2984 | 7.0 | 441 | 1.1844 |
| 1.2589 | 8.0 | 504 | 1.1262 |
| 1.1762 | 9.0 | 567 | 1.1176 |
| 1.1724 | 10.0 | 630 | 1.0312 |
| 1.1222 | 11.0 | 693 | 1.0113 |
| 1.1021 | 12.0 | 756 | 1.0518 |
| 1.0646 | 13.0 | 819 | 1.0433 |
| 1.0273 | 14.0 | 882 | 0.9634 |
| 1.0187 | 15.0 | 945 | 0.9299 |
| 0.9854 | 16.0 | 1008 | 0.9458 |
| 0.9799 | 17.0 | 1071 | 0.9733 |
| 0.95 | 18.0 | 1134 | 0.9169 |
| 0.934 | 19.0 | 1197 | 0.9246 |
| 0.907 | 20.0 | 1260 | 0.8939 |
| 0.8974 | 21.0 | 1323 | 0.8575 |
| 0.8749 | 22.0 | 1386 | 0.8513 |
| 0.8526 | 23.0 | 1449 | 0.8089 |
| 0.8359 | 24.0 | 1512 | 0.8600 |
| 0.8292 | 25.0 | 1575 | 0.8517 |
| 0.8263 | 26.0 | 1638 | 0.8293 |
| 0.8033 | 27.0 | 1701 | 0.7747 |
| 0.7999 | 28.0 | 1764 | 0.8169 |
| 0.7778 | 29.0 | 1827 | 0.7981 |
| 0.7574 | 30.0 | 1890 | 0.7457 |
| 0.7581 | 31.0 | 1953 | 0.7504 |
| 0.7404 | 32.0 | 2016 | 0.7637 |
| 0.7332 | 33.0 | 2079 | 0.7902 |
| 0.7314 | 34.0 | 2142 | 0.7185 |
| 0.7209 | 35.0 | 2205 | 0.7534 |
| 0.6902 | 36.0 | 2268 | 0.7334 |
| 0.6973 | 37.0 | 2331 | 0.7069 |
| 0.687 | 38.0 | 2394 | 0.6820 |
| 0.6658 | 39.0 | 2457 | 0.7155 |
| 0.6697 | 40.0 | 2520 | 0.7149 |
| 0.6584 | 41.0 | 2583 | 0.7413 |
| 0.6638 | 42.0 | 2646 | 0.7245 |
| 0.6282 | 43.0 | 2709 | 0.7177 |
| 0.6418 | 44.0 | 2772 | 0.6653 |
| 0.6323 | 45.0 | 2835 | 0.7715 |
| 0.6256 | 46.0 | 2898 | 0.7269 |
| 0.6109 | 47.0 | 2961 | 0.6744 |
| 0.6133 | 48.0 | 3024 | 0.6816 |
| 0.595 | 49.0 | 3087 | 0.6969 |
| 0.6058 | 50.0 | 3150 | 0.6965 |
| 0.5961 | 51.0 | 3213 | 0.6988 |
| 0.587 | 52.0 | 3276 | 0.6727 |
| 0.5861 | 53.0 | 3339 | 0.6327 |
| 0.5758 | 54.0 | 3402 | 0.6538 |
| 0.5692 | 55.0 | 3465 | 0.6612 |
| 0.567 | 56.0 | 3528 | 0.5989 |
| 0.5514 | 57.0 | 3591 | 0.6776 |
| 0.5526 | 58.0 | 3654 | 0.6440 |
| 0.556 | 59.0 | 3717 | 0.6682 |
| 0.5476 | 60.0 | 3780 | 0.6254 |
| 0.536 | 61.0 | 3843 | 0.6239 |
| 0.526 | 62.0 | 3906 | 0.6606 |
| 0.532 | 63.0 | 3969 | 0.6565 |
| 0.5189 | 64.0 | 4032 | 0.6586 |
| 0.5075 | 65.0 | 4095 | 0.6286 |
| 0.5131 | 66.0 | 4158 | 0.6646 |
| 0.498 | 67.0 | 4221 | 0.6486 |
| 0.4979 | 68.0 | 4284 | 0.6313 |
| 0.4885 | 69.0 | 4347 | 0.6419 |
| 0.4875 | 70.0 | 4410 | 0.6313 |
| 0.4904 | 71.0 | 4473 | 0.6602 |
| 0.4712 | 72.0 | 4536 | 0.6200 |
| 0.4798 | 73.0 | 4599 | 0.5912 |
| 0.4802 | 74.0 | 4662 | 0.6001 |
| 0.4704 | 75.0 | 4725 | 0.6303 |
| 0.4709 | 76.0 | 4788 | 0.5871 |
| 0.465 | 77.0 | 4851 | 0.6344 |
| 0.4651 | 78.0 | 4914 | 0.6030 |
| 0.4501 | 79.0 | 4977 | 0.5998 |
| 0.4584 | 80.0 | 5040 | 0.5926 |
| 0.4651 | 81.0 | 5103 | 0.6134 |
| 0.438 | 82.0 | 5166 | 0.6254 |
| 0.448 | 83.0 | 5229 | 0.6260 |
| 0.4295 | 84.0 | 5292 | 0.5866 |
| 0.434 | 85.0 | 5355 | 0.5740 |
| 0.4261 | 86.0 | 5418 | 0.5691 |
| 0.4312 | 87.0 | 5481 | 0.6243 |
| 0.4289 | 88.0 | 5544 | 0.5781 |
| 0.4255 | 89.0 | 5607 | 0.6226 |
| 0.4254 | 90.0 | 5670 | 0.5538 |
| 0.4231 | 91.0 | 5733 | 0.5874 |
| 0.4107 | 92.0 | 5796 | 0.6054 |
| 0.4082 | 93.0 | 5859 | 0.5898 |
| 0.4144 | 94.0 | 5922 | 0.5826 |
| 0.4225 | 95.0 | 5985 | 0.5501 |
| 0.3964 | 96.0 | 6048 | 0.5886 |
| 0.3972 | 97.0 | 6111 | 0.5831 |
| 0.4165 | 98.0 | 6174 | 0.5164 |
| 0.4024 | 99.0 | 6237 | 0.5714 |
| 0.4013 | 100.0 | 6300 | 0.5734 |
| 0.3933 | 101.0 | 6363 | 0.5727 |
| 0.3821 | 102.0 | 6426 | 0.5985 |
| 0.3904 | 103.0 | 6489 | 0.5571 |
| 0.3965 | 104.0 | 6552 | 0.5837 |
| 0.3789 | 105.0 | 6615 | 0.5989 |
| 0.3733 | 106.0 | 6678 | 0.5405 |
| 0.3907 | 107.0 | 6741 | 0.6059 |
| 0.3794 | 108.0 | 6804 | 0.5602 |
| 0.3689 | 109.0 | 6867 | 0.5590 |
| 0.3603 | 110.0 | 6930 | 0.5886 |
| 0.3747 | 111.0 | 6993 | 0.5294 |
| 0.3667 | 112.0 | 7056 | 0.5759 |
| 0.3754 | 113.0 | 7119 | 0.5821 |
| 0.3676 | 114.0 | 7182 | 0.5653 |
| 0.3524 | 115.0 | 7245 | 0.5537 |
| 0.3624 | 116.0 | 7308 | 0.5523 |
| 0.3527 | 117.0 | 7371 | 0.5799 |
| 0.3588 | 118.0 | 7434 | 0.6346 |
| 0.3539 | 119.0 | 7497 | 0.5116 |
| 0.3553 | 120.0 | 7560 | 0.5716 |
| 0.3483 | 121.0 | 7623 | 0.5721 |
| 0.3625 | 122.0 | 7686 | 0.5393 |
| 0.3354 | 123.0 | 7749 | 0.5800 |
| 0.3392 | 124.0 | 7812 | 0.5389 |
| 0.344 | 125.0 | 7875 | 0.5455 |
| 0.3451 | 126.0 | 7938 | 0.5428 |
| 0.3374 | 127.0 | 8001 | 0.5580 |
| 0.3428 | 128.0 | 8064 | 0.5339 |
| 0.3386 | 129.0 | 8127 | 0.5447 |
| 0.3318 | 130.0 | 8190 | 0.5738 |
| 0.3388 | 131.0 | 8253 | 0.5667 |
| 0.3335 | 132.0 | 8316 | 0.5407 |
| 0.3383 | 133.0 | 8379 | 0.5679 |
| 0.3299 | 134.0 | 8442 | 0.5846 |
| 0.327 | 135.0 | 8505 | 0.5511 |
| 0.3354 | 136.0 | 8568 | 0.5649 |
| 0.32 | 137.0 | 8631 | 0.5358 |
| 0.3265 | 138.0 | 8694 | 0.5528 |
| 0.319 | 139.0 | 8757 | 0.5926 |
| 0.3304 | 140.0 | 8820 | 0.5531 |
| 0.3191 | 141.0 | 8883 | 0.5379 |
| 0.3298 | 142.0 | 8946 | 0.5468 |
| 0.3134 | 143.0 | 9009 | 0.5623 |
| 0.3186 | 144.0 | 9072 | 0.5162 |
| 0.3179 | 145.0 | 9135 | 0.5570 |
| 0.3175 | 146.0 | 9198 | 0.5379 |
| 0.3051 | 147.0 | 9261 | 0.5437 |
| 0.312 | 148.0 | 9324 | 0.5301 |
| 0.3093 | 149.0 | 9387 | 0.5393 |
| 0.3227 | 150.0 | 9450 | 0.5531 |
| 0.3125 | 151.0 | 9513 | 0.5794 |
| 0.3162 | 152.0 | 9576 | 0.5677 |
| 0.3006 | 153.0 | 9639 | 0.5668 |
| 0.3011 | 154.0 | 9702 | 0.5797 |
| 0.3208 | 155.0 | 9765 | 0.5450 |
| 0.3048 | 156.0 | 9828 | 0.5465 |
| 0.3092 | 157.0 | 9891 | 0.5358 |
| 0.3125 | 158.0 | 9954 | 0.5043 |
| 0.3083 | 159.0 | 10017 | 0.5321 |
| 0.3 | 160.0 | 10080 | 0.5526 |
| 0.2968 | 161.0 | 10143 | 0.5324 |
| 0.3068 | 162.0 | 10206 | 0.5471 |
| 0.3129 | 163.0 | 10269 | 0.5575 |
| 0.3061 | 164.0 | 10332 | 0.5796 |
| 0.2943 | 165.0 | 10395 | 0.5544 |
| 0.2967 | 166.0 | 10458 | 0.5422 |
| 0.2959 | 167.0 | 10521 | 0.5149 |
| 0.2987 | 168.0 | 10584 | 0.5685 |
| 0.3045 | 169.0 | 10647 | 0.5176 |
| 0.2975 | 170.0 | 10710 | 0.5044 |
| 0.2948 | 171.0 | 10773 | 0.5264 |
| 0.3 | 172.0 | 10836 | 0.5174 |
| 0.2967 | 173.0 | 10899 | 0.5658 |
| 0.2873 | 174.0 | 10962 | 0.4988 |
| 0.2939 | 175.0 | 11025 | 0.5512 |
| 0.2954 | 176.0 | 11088 | 0.5139 |
| 0.301 | 177.0 | 11151 | 0.6007 |
| 0.2948 | 178.0 | 11214 | 0.5167 |
| 0.2898 | 179.0 | 11277 | 0.5443 |
| 0.2869 | 180.0 | 11340 | 0.5544 |
| 0.2973 | 181.0 | 11403 | 0.5644 |
| 0.2985 | 182.0 | 11466 | 0.5153 |
| 0.2904 | 183.0 | 11529 | 0.5561 |
| 0.2872 | 184.0 | 11592 | 0.5610 |
| 0.2894 | 185.0 | 11655 | 0.5511 |
| 0.297 | 186.0 | 11718 | 0.5408 |
| 0.2904 | 187.0 | 11781 | 0.5574 |
| 0.2818 | 188.0 | 11844 | 0.5182 |
| 0.2873 | 189.0 | 11907 | 0.5425 |
| 0.2973 | 190.0 | 11970 | 0.5198 |
| 0.2913 | 191.0 | 12033 | 0.5119 |
| 0.2931 | 192.0 | 12096 | 0.5585 |
| 0.2859 | 193.0 | 12159 | 0.5368 |
| 0.2853 | 194.0 | 12222 | 0.5274 |
| 0.294 | 195.0 | 12285 | 0.5685 |
| 0.2885 | 196.0 | 12348 | 0.5581 |
| 0.295 | 197.0 | 12411 | 0.4987 |
| 0.2807 | 198.0 | 12474 | 0.5168 |
| 0.289 | 199.0 | 12537 | 0.5284 |
| 0.2893 | 200.0 | 12600 | 0.5420 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
roberta-large-mnli | [
"pytorch",
"tf",
"jax",
"safetensors",
"roberta",
"text-classification",
"en",
"dataset:multi_nli",
"dataset:wikipedia",
"dataset:bookcorpus",
"arxiv:1907.11692",
"arxiv:1806.02847",
"arxiv:1804.07461",
"arxiv:1704.05426",
"arxiv:1508.05326",
"arxiv:1809.05053",
"arxiv:1910.09700",
"transformers",
"autogenerated-modelcard",
"license:mit",
"has_space"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 117,700 | 2022-11-11T01:12:15Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1608
- F1: 0.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
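A short NER sketch under the same caveat (placeholder Hub id; the de-fr suffix suggests German/French coverage):

```python
from transformers import pipeline

# Placeholder Hub id; the card does not state where the checkpoint is published.
ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```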
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2888 | 1.0 | 715 | 0.1779 | 0.8233 |
| 0.1437 | 2.0 | 1430 | 0.1570 | 0.8497 |
| 0.0931 | 3.0 | 2145 | 0.1608 | 0.8593 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
xlnet-base-cased | [
"pytorch",
"tf",
"rust",
"xlnet",
"text-generation",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1906.08237",
"transformers",
"license:mit",
"has_space"
]
| text-generation | {
"architectures": [
"XLNetLMHeadModel"
],
"model_type": "xlnet",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 250
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 163,098 | 2022-11-11T03:20:02Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: frases-roberta-juridico-v0.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# frases-roberta-juridico-v0.7
This model is a fine-tuned version of [projetocnj/roberta-base-juridico](https://huggingface.co/projetocnj/roberta-base-juridico) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9295
- F1: 0.8703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
### Training results
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.12.1
|
AVSilva/bertimbau-large-fine-tuned-sd | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | 2022-11-11T13:47:55Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 40 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 40,
"warmup_steps": 4,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Abab/Test_Albert | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-11T15:05:28Z | ---
language:
- pl
pipeline_tag: text-classification
widget:
- text: "Przykro patrzeć, a słuchać się nie da."
example_title: "example 1"
- text: "Oczywiście ze Pan Prezydent to nasza duma narodowa!!"
example_title: "example 2"
tags:
- text
- sentiment
- politics
metrics:
- accuracy
- f1
model-index:
- name: PaReS-sentimenTw-political-PL
results:
- task:
      type: sentiment-classification
      name: Text Classification
    dataset:
      type: tweets
      name: tweets_2020_electionsPL
    metrics:
    - type: f1
      value: 94.4
---
# PaReS-sentimenTw-political-PL
This model is a fine-tuned version of [dkleczek/bert-base-polish-cased-v1](https://huggingface.co/dkleczek/bert-base-polish-cased-v1) to predict 3-categorical sentiment.
Fine-tuned on a 1k sample of manually annotated Twitter data.
The model was developed as part of the ComPathos project: https://www.ncn.gov.pl/sites/default/files/listy-rankingowe/2020-09-30apsv2/streszczenia/497124-en.pdf
```python
from transformers import pipeline
model_path = "eevvgg/PaReS-sentimenTw-political-PL"
sentiment_task = pipeline(task = "sentiment-analysis", model = model_path, tokenizer = model_path)
sequence = ["Cała ta śmieszna debata była próbą ukrycia problemów gospodarczych jakie są i nadejdą, pytania w większości o mało istotnych sprawach",
"Brawo panie ministrze!"]
result = sentiment_task(sequence)
labels = [i['label'] for i in result] # ['Negative', 'Positive']
```
## Intended uses & limitations
Sentiment detection in Polish data (fine-tuned on tweets from political domain).
## Training and evaluation data
- Trained for 3 epochs, mini-batch size of 8.
- Training results: loss: 0.1358926964368792
It achieves the following results on the test set (10%):
- No. examples = 100
- mini batch size = 8
- accuracy = 0.950
- macro f1 = 0.944
| Class | Precision | Recall | F1-score | Support |
|:-----:|:---------:|:------:|:--------:|:-------:|
| 0     | 0.960     | 0.980  | 0.970    | 49      |
| 1     | 0.958     | 0.885  | 0.920    | 26      |
| 2     | 0.923     | 0.960  | 0.941    | 25      |
|
AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_no_lm | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 457.50 +/- 157.18
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jmsalvi -f logs/
python enjoy.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga jmsalvi -f logs/
rl_zoo3 enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga jmsalvi
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 150000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.15),
('frame_stack', 3),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1200000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
AimB/konlpy_berttokenizer_helsinki | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: BERT-FINETUNE-MBTI-LM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-FINETUNE-MBTI-LM
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Aimendo/Triage | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: BERT-FINETUNE-MBTI-CLS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-FINETUNE-MBTI-CLS
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
|
Aimendo/autonlp-triage-35248482 | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:Aimendo/autonlp-data-triage",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/babyquakes524/1668231755244/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1501301681191112708/gKRltdLC_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">babyquakes</div>
<div style="text-align: center; font-size: 14px;">@babyquakes524</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from babyquakes.
| Data | babyquakes |
| --- | --- |
| Tweets downloaded | 103 |
| Retweets | 14 |
| Short tweets | 8 |
| Tweets kept | 81 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2lceokfz/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @babyquakes524's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/jqxev7cl) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/jqxev7cl/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/babyquakes524')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Ajteks/Chatbot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
model-index:
- name: BERT-FINETUNE-MBTI-CLS-BERT-FINETUNE-MBTI-CLS-JointBERT-Warmup-from-CLS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-FINETUNE-MBTI-CLS-BERT-FINETUNE-MBTI-CLS-JointBERT-Warmup-from-CLS
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3549
- Cls loss: 2.1311
- Lm loss: 4.8216
- Cls Accuracy: 0.6058
- Cls F1: 0.6037
- Cls Precision: 0.6084
- Cls Recall: 0.6058
- Perplexity: 124.17
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 5.778 | 1.0 | 3470 | 5.5656 | 1.9246 | 5.0840 | 0.5931 | 0.5907 | 0.5968 | 0.5931 | 161.43 |
| 5.1443 | 2.0 | 6940 | 5.3831 | 2.0178 | 4.8783 | 0.6069 | 0.6057 | 0.6177 | 0.6069 | 131.40 |
| 4.9386 | 3.0 | 10410 | 5.3549 | 2.1311 | 4.8216 | 0.6058 | 0.6037 | 0.6084 | 0.6058 | 124.17 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1 |
AkaiSnow/Rick_bot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
model-index:
- name: BERT-FINETUNE-MBTI-LM-BERT-FINETUNE-MBTI-LM-JointBERT-Warmup-from-LM
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-FINETUNE-MBTI-LM-BERT-FINETUNE-MBTI-LM-JointBERT-Warmup-from-LM
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.7966
- Cls loss: 1.4255
- Lm loss: 4.4398
- Cls Accuracy: 0.6380
- Cls F1: 0.6319
- Cls Precision: 0.6416
- Cls Recall: 0.6380
- Perplexity: 84.76
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:|
| 5.3087 | 1.0 | 3470 | 4.9005 | 1.4109 | 4.5474 | 0.6075 | 0.5981 | 0.6132 | 0.6075 | 94.39 |
| 4.8274 | 2.0 | 6940 | 4.7987 | 1.3448 | 4.4621 | 0.6242 | 0.6193 | 0.6381 | 0.6242 | 86.67 |
| 4.6472 | 3.0 | 10410 | 4.7966 | 1.4255 | 4.4398 | 0.6380 | 0.6319 | 0.6416 | 0.6380 | 84.76 |
### Framework versions
- Transformers 4.21.2
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1 |
Akari/albert-base-v2-finetuned-squad | [
"pytorch",
"tensorboard",
"albert",
"question-answering",
"dataset:squad_v2",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"AlbertForQuestionAnswering"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
tags:
- generated_from_trainer
model-index:
- name: Clinical-Longformer-breastcancer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Clinical-Longformer-breastcancer
This model is a fine-tuned version of [yikuan8/Clinical-Longformer](https://huggingface.co/yikuan8/Clinical-Longformer) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1392 | 1.3046 |
| No log | 2.0 | 2784 | 1.2224 |
| No log | 3.0 | 4176 | 1.1928 |
| No log | 4.0 | 5568 | 1.1641 |
| No log | 5.0 | 6960 | 1.1507 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.0
- Datasets 2.2.2
- Tokenizers 0.11.6
|
Akash7897/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0272
- Accuracy: 0.9287
## Model description
More information needed
## Intended uses & limitations
More information needed
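Given that the model name points at CLINC-style intent classification, a minimal sketch (placeholder Hub id):

```python
from transformers import pipeline

# Placeholder Hub id for the distilled checkpoint.
clf = pipeline("text-classification", model="your-username/distilbert-base-uncased-distilled-clinc")
print(clf("how do i transfer money to my savings account"))
```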
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 0.2337 | 0.6274 |
| 0.3698 | 2.0 | 636 | 0.1052 | 0.8458 |
| 0.3698 | 3.0 | 954 | 0.0650 | 0.8935 |
| 0.1216 | 4.0 | 1272 | 0.0476 | 0.9068 |
| 0.0727 | 5.0 | 1590 | 0.0386 | 0.9181 |
| 0.0727 | 6.0 | 1908 | 0.0336 | 0.9219 |
| 0.0556 | 7.0 | 2226 | 0.0305 | 0.9229 |
| 0.0477 | 8.0 | 2544 | 0.0287 | 0.9287 |
| 0.0477 | 9.0 | 2862 | 0.0276 | 0.9274 |
| 0.0441 | 10.0 | 3180 | 0.0272 | 0.9287 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.2
|
Akashamba/distilbert-base-uncased-finetuned-ner | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- dreambooth-hackathon
- wildcard
- text-to-image
datasets: BirdL/NGA_Art
inference: true
---
# NGA_Art_SD-V1.5 Model Card
TL;DR: NGA Art is a Dreambooth model trained on public-domain images from the National Gallery of Art. The instance token is `sks`.
# Model Pretraining
This model is trained on top of [Stable Diffusion 1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
# Data
The data for NGA Art is located on [this page](https://huggingface.co/datasets/BirdL/NGA_Art) and was scraped from [Wikimedia Commons](https://commons.wikimedia.org). The dataset contains 500 images; the dataset page goes into more detail.
# Examples
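Until curated examples land here, a minimal text-to-image sketch: the Hub id is guessed from the dataset namespace, and the prompt is illustrative only (`sks` is the trained token):

```python
import torch
from diffusers import StableDiffusionPipeline

# Hub id guessed from the dataset namespace (BirdL/...); verify before use.
pipe = StableDiffusionPipeline.from_pretrained(
    "BirdL/NGA_Art_SD-V1.5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a painting of a lighthouse at dusk, sks style").images[0]
image.save("nga_art_sample.png")
```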
More examples TBD. |
Akashpb13/Central_kurdish_xlsr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"ckb",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-53-espeak-cv-ft-mhr3-ntsema-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-mhr3-ntsema-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7701
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 5.329 | 5.79 | 400 | 1.3162 | 1.0 |
| 1.5529 | 11.59 | 800 | 0.6968 | 1.0 |
| 0.8373 | 17.39 | 1200 | 0.7345 | 1.0 |
| 0.4959 | 23.19 | 1600 | 0.7296 | 1.0 |
| 0.3207 | 28.98 | 2000 | 0.7701 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Akashpb13/Galician_xlsr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"gl",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9183870967741935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9184
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2896 | 1.0 | 318 | 3.2891 | 0.7429 |
| 2.6283 | 2.0 | 636 | 1.8755 | 0.8374 |
| 1.5481 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.0149 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7952 | 5.0 | 1590 | 0.7720 | 0.9184 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Akashpb13/xlsr_kurmanji_kurdish | [
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"kmr",
"ku",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"robust-speech-event",
"model_for_talk",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
license: mit
---
### Oleg KOG on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### model by vronsice
This is the Stable Diffusion model fine-tuned on the Oleg KOG concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **oleg** |
AkshatSurolia/ConvNeXt-FaceMask-Finetuned | [
"pytorch",
"safetensors",
"convnext",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| image-classification | {
"architectures": [
"ConvNextForImageClassification"
],
"model_type": "convnext",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 56 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-53-espeak-cv-ft-bak4-ntsema-colab
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Wer
type: wer
value: 0.5241343126967472
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xlsr-53-espeak-cv-ft-bak4-ntsema-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9896
- Wer: 0.5241
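## How to use
The snippet below is a minimal inference sketch; the hub id is a placeholder, since this card does not record the repository's actual name. Note that the base checkpoint is a phoneme-level CTC model, so the output may be a phonetic rather than orthographic transcription.
```python
from transformers import pipeline

# Hypothetical hub id -- substitute the actual repository name.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-xlsr-53-espeak-cv-ft-bak4-ntsema-colab",
)

# Accepts a path to an audio file; decoding is greedy CTC by default.
print(asr("sample.wav")["text"])
```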
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4269 | 9.52 | 400 | 0.8771 | 0.6238 |
| 0.3885 | 19.05 | 800 | 0.9461 | 0.5661 |
| 0.1447 | 28.57 | 1200 | 0.9896 | 0.5241 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AkshatSurolia/DeiT-FaceMask-Finetuned | [
"pytorch",
"deit",
"image-classification",
"dataset:Face-Mask18K",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| image-classification | {
"architectures": [
"DeiTForImageClassification"
],
"model_type": "deit",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 46 | null | ---
license: mit
---
# Bangla Wikipedia Doc2Vec model
Bengali Wikipedia doc2vec model trained on articles from the Wikipedia dump, with vector size 100.
This model is trained for the [bnlp](https://github.com/sagorbrur/bnlp) library.
## Training details
- Total Wikipedia articles: 110448
- Hyper-parameter: `epochs: 40, min_count=2, vector_size=100`
## Usage
- Get document vector from input document
```py
from bnlp import BengaliDoc2vec
bn_doc2vec = BengaliDoc2vec()
model_path = "bangla_news_article_doc2vec.model" # keep other .npy model files also in same folder
document = "রাষ্ট্রবিরোধী ও উসকানিমূলক বক্তব্য দেওয়ার অভিযোগে গাজীপুরের গাছা থানায় ডিজিটাল নিরাপত্তা আইনে করা মামলায় আলোচিত ‘শিশুবক্তা’ রফিকুল ইসলামের বিরুদ্ধে অভিযোগ গঠন করেছেন আদালত। ফলে মামলার আনুষ্ঠানিক বিচার শুরু হলো। আজ বুধবার (২৬ জানুয়ারি) ঢাকার সাইবার ট্রাইব্যুনালের বিচারক আসসামছ জগলুল হোসেন এ অভিযোগ গঠন করেন। এর আগে, রফিকুল ইসলামকে কারাগার থেকে আদালতে হাজির করা হয়। এরপর তাকে নির্দোষ দাবি করে তার আইনজীবী শোহেল মো. ফজলে রাব্বি অব্যাহতি চেয়ে আবেদন করেন। অন্যদিকে, রাষ্ট্রপক্ষ অভিযোগ গঠনের পক্ষে শুনানি করেন। উভয় পক্ষের শুনানি শেষে আদালত অব্যাহতির আবেদন খারিজ করে অভিযোগ গঠনের মাধ্যমে বিচার শুরুর আদেশ দেন। একইসঙ্গে সাক্ষ্যগ্রহণের জন্য আগামী ২২ ফেব্রুয়ারি দিন ধার্য করেন আদালত।"
vector = bn_doc2vec.get_document_vector(model_path, document)
print(vector)
```
- Find document similarity between two documents
```py
from bnlp import BengaliDoc2vec
bn_doc2vec = BengaliDoc2vec()
model_path = "bangla_news_article_doc2vec.model" # keep other .npy model files also in same folder
article_1 = "রাষ্ট্রবিরোধী ও উসকানিমূলক বক্তব্য দেওয়ার অভিযোগে গাজীপুরের গাছা থানায় ডিজিটাল নিরাপত্তা আইনে করা মামলায় আলোচিত ‘শিশুবক্তা’ রফিকুল ইসলামের বিরুদ্ধে অভিযোগ গঠন করেছেন আদালত। ফলে মামলার আনুষ্ঠানিক বিচার শুরু হলো। আজ বুধবার (২৬ জানুয়ারি) ঢাকার সাইবার ট্রাইব্যুনালের বিচারক আসসামছ জগলুল হোসেন এ অভিযোগ গঠন করেন। এর আগে, রফিকুল ইসলামকে কারাগার থেকে আদালতে হাজির করা হয়। এরপর তাকে নির্দোষ দাবি করে তার আইনজীবী শোহেল মো. ফজলে রাব্বি অব্যাহতি চেয়ে আবেদন করেন। অন্যদিকে, রাষ্ট্রপক্ষ অভিযোগ গঠনের পক্ষে শুনানি করেন। উভয় পক্ষের শুনানি শেষে আদালত অব্যাহতির আবেদন খারিজ করে অভিযোগ গঠনের মাধ্যমে বিচার শুরুর আদেশ দেন। একইসঙ্গে সাক্ষ্যগ্রহণের জন্য আগামী ২২ ফেব্রুয়ারি দিন ধার্য করেন আদালত।"
article_2 = "রাষ্ট্রবিরোধী ও উসকানিমূলক বক্তব্য দেওয়ার অভিযোগে গাজীপুরের গাছা থানায় ডিজিটাল নিরাপত্তা আইনে করা মামলায় আলোচিত ‘শিশুবক্তা’ রফিকুল ইসলামের বিরুদ্ধে অভিযোগ গঠন করেছেন আদালত। ফলে মামলার আনুষ্ঠানিক বিচার শুরু হলো। আজ বুধবার (২৬ জানুয়ারি) ঢাকার সাইবার ট্রাইব্যুনালের বিচারক আসসামছ জগলুল হোসেন এ অভিযোগ গঠন করেন। এর আগে, রফিকুল ইসলামকে কারাগার থেকে আদালতে হাজির করা হয়। এরপর তাকে নির্দোষ দাবি করে তার আইনজীবী শোহেল মো. ফজলে রাব্বি অব্যাহতি চেয়ে আবেদন করেন। অন্যদিকে, রাষ্ট্রপক্ষ অভিযোগ গঠনের পক্ষে শুনানি করেন। উভয় পক্ষের শুনানি শেষে আদালত অব্যাহতির আবেদন খারিজ করে অভিযোগ গঠনের মাধ্যমে বিচার শুরুর আদেশ দেন। একইসঙ্গে সাক্ষ্যগ্রহণের জন্য আগামী ২২ ফেব্রুয়ারি দিন ধার্য করেন আদালত।"
similarity = bn_doc2vec.get_document_similarity(
model_path,
article_1,
article_2
)
print(similarity)
``` |
AkshatSurolia/ICD-10-Code-Prediction | [
"pytorch",
"bert",
"transformers",
"text-classification",
"license:apache-2.0",
"has_space"
]
| text-classification | {
"architectures": null,
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 994 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: train
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9325508348487354
- name: Recall
type: recall
value: 0.9493436553349041
- name: F1
type: f1
value: 0.9408723209073472
- name: Accuracy
type: accuracy
value: 0.9862247601106728
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0604
- Precision: 0.9326
- Recall: 0.9493
- F1: 0.9409
- Accuracy: 0.9862
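## How to use
A minimal inference sketch with the 🤗 Transformers `pipeline` API; the hub id below is a placeholder, since this card does not record the repository's actual name.
```python
from transformers import pipeline

# Hypothetical hub id -- substitute the actual repository name.
ner = pipeline(
    "token-classification",
    model="your-username/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
```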
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0902 | 1.0 | 1756 | 0.0673 | 0.9151 | 0.9308 | 0.9229 | 0.9823 |
| 0.0347 | 2.0 | 3512 | 0.0613 | 0.9265 | 0.9478 | 0.9370 | 0.9856 |
| 0.0181 | 3.0 | 5268 | 0.0604 | 0.9326 | 0.9493 | 0.9409 | 0.9862 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Akuva2001/SocialGraph | [
"has_space"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilroberta-base-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5788207437251082
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-cola
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7665
- Matthews Correlation: 0.5788
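Matthews correlation is reported here because CoLA's labels are imbalanced, so plain accuracy would be inflated by the majority class. To reproduce the metric from raw predictions, a minimal sketch with scikit-learn (not this card's evaluation code):
```python
from sklearn.metrics import matthews_corrcoef

# Toy labels and predictions; the score ranges from -1 (inverse) to +1 (perfect).
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(y_true, y_pred))
```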
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5226 | 1.0 | 535 | 0.5360 | 0.4620 |
| 0.3597 | 2.0 | 1070 | 0.4694 | 0.5261 |
| 0.2602 | 3.0 | 1605 | 0.5318 | 0.5496 |
| 0.2063 | 4.0 | 2140 | 0.7052 | 0.5701 |
| 0.1659 | 5.0 | 2675 | 0.7665 | 0.5788 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AlanDev/DallEMiniButBetter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.95
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3397
- Accuracy: 0.95
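The card does not describe the distillation recipe, but a common setup for a "distilled" classifier like this one trains the student on a temperature-scaled KL term against a fine-tuned teacher, blended with the usual hard-label cross-entropy. A minimal PyTorch sketch of that loss (an assumption, not code from this repository):
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target KL distillation with hard-label cross-entropy."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    kl = F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kl + (1.0 - alpha) * ce
```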
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.202 | 1.0 | 318 | 2.3610 | 0.7506 |
| 1.8112 | 2.0 | 636 | 1.1899 | 0.8610 |
| 0.9255 | 3.0 | 954 | 0.6534 | 0.9168 |
| 0.5268 | 4.0 | 1272 | 0.4620 | 0.9368 |
| 0.3624 | 5.0 | 1590 | 0.3941 | 0.9448 |
| 0.2935 | 6.0 | 1908 | 0.3682 | 0.9452 |
| 0.2584 | 7.0 | 2226 | 0.3515 | 0.9497 |
| 0.2393 | 8.0 | 2544 | 0.3453 | 0.9481 |
| 0.2289 | 9.0 | 2862 | 0.3421 | 0.9490 |
| 0.225 | 10.0 | 3180 | 0.3397 | 0.95 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AlanDev/test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch, assuming the `diffusers` package is installed.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("ocean-chicken/ddpm-butterflies-128")
image = pipeline().images[0]  # one unconditional 128x128 butterfly sample
image.save("butterfly.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/ocean-chicken/ddpm-butterflies-128/tensorboard?#scalars)
|
AlbertHSU/BertTEST | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_wermet_0005
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_wermet_0005
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.8954
- Train Accuracy: 0.0240
- Train Wermet: 16.2471
- Validation Loss: 1.4889
- Validation Accuracy: 0.0266
- Validation Wermet: 14.2782
- Epoch: 4
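The "Wermet" columns appear to track a word-error-rate-style metric (an assumption based on the name). To sanity-check transcriptions offline, WER can be computed with the 🤗 `evaluate` package:
```python
import evaluate

wer = evaluate.load("wer")
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]
print(wer.compute(predictions=predictions, references=references))  # 1 error / 6 words
```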
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0795 | 0.0116 | 43.8776 | 4.4395 | 0.0122 | 35.4119 | 0 |
| 4.3059 | 0.0131 | 29.7976 | 4.0311 | 0.0143 | 26.0070 | 1 |
| 3.8871 | 0.0148 | 19.3999 | 3.6500 | 0.0158 | 19.2186 | 2 |
| 3.0943 | 0.0184 | 18.3704 | 2.3327 | 0.0226 | 22.5034 | 3 |
| 1.8954 | 0.0240 | 16.2471 | 1.4889 | 0.0266 | 14.2782 | 4 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Ale/Alen | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whisper_wermet_0010
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whisper_wermet_0010
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5820
- Train Accuracy: 0.0305
- Train Wermet: 1.5323
- Validation Loss: 0.6980
- Validation Accuracy: 0.0305
- Validation Wermet: 1.1238
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train Wermet | Validation Loss | Validation Accuracy | Validation Wermet | Epoch |
|:----------:|:--------------:|:------------:|:---------------:|:-------------------:|:-----------------:|:-----:|
| 5.0795 | 0.0116 | 43.8776 | 4.4395 | 0.0122 | 35.4119 | 0 |
| 4.3059 | 0.0131 | 29.7976 | 4.0311 | 0.0143 | 26.0070 | 1 |
| 3.8871 | 0.0148 | 19.3999 | 3.6500 | 0.0158 | 19.2186 | 2 |
| 3.0943 | 0.0184 | 18.3704 | 2.3327 | 0.0226 | 22.5034 | 3 |
| 1.8954 | 0.0240 | 16.2471 | 1.4889 | 0.0266 | 14.2782 | 4 |
| 1.2781 | 0.0269 | 8.4169 | 1.1273 | 0.0283 | 7.4581 | 5 |
| 0.9797 | 0.0283 | 4.8739 | 0.9481 | 0.0292 | 3.9451 | 6 |
| 0.8006 | 0.0293 | 2.7433 | 0.8371 | 0.0297 | 2.3065 | 7 |
| 0.6764 | 0.0299 | 2.1646 | 0.7554 | 0.0301 | 1.3005 | 8 |
| 0.5820 | 0.0305 | 1.5323 | 0.6980 | 0.0305 | 1.1238 | 9 |
### Framework versions
- Transformers 4.25.0.dev0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AlekseyKorshuk/horror-scripts | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 19 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
**Shichigoro Diffusion** v0.2
This is an experimental Stable Diffusion model trained on artworks by artist Shichigoro (https://shichigoro.com/).
Only for personal use! Please respect the original artist!
Use the token **_shichigoro_** in your prompts for the effect.
_shichigoro, woman, breasts, detailed iris, intricate details, sharp focus, red eyes, detailed black hair_
Steps: 37, Sampler: Euler a, CFG scale: 7, Seed: 323609095, Size: 512x768

_shichigoro, lara croft_
Steps: 35, Sampler: Euler a, CFG scale: 7, Seed: 1483473625, Size: 512x768

_shichigoro, cute cat_
Steps: 35, Sampler: Euler a, CFG scale: 7, Seed: 3304424502, Size: 512x704

_shichigoro, cute dog_
Steps: 35, Sampler: Euler a, CFG scale: 7, Seed: 3304424516, Size: 512x704

_shichigoro, cute dog_
Steps: 35, Sampler: Euler a, CFG scale: 7, Seed: 3304424505, Size: 512x704

_shichigoro, cow_
Steps: 35, Sampler: Euler a, CFG scale: 7, Seed: 3304424524, Size: 512x704

_shichigoro, harry potter_
Steps: 35, Sampler: Euler a, CFG scale: 7, Seed: 3304424516, Size: 512x704

_shichigoro, johnny depp_
Steps: 35, Sampler: Euler a, CFG scale: 7, Seed: 3304424521, Size: 512x704

_shichigoro, keanu reeves_
Steps: 35, Sampler: Euler a, CFG scale: 7, Seed: 3304424524, Size: 512x704

_shichigoro, milla jovovich_
Steps: 35, Sampler: Euler a, CFG scale: 7, Seed: 3304424524, Size: 512x704

_shichigoro, ALICE IN WONDERLAND_
Steps: 35, Sampler: Euler a, CFG scale: 7, Seed: 3876501189, Size: 512x704

### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
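For example, a minimal sketch with `StableDiffusionPipeline` (the hub id below is a placeholder for this repository's actual name):
```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical hub id -- substitute this repository's actual name.
pipe = StableDiffusionPipeline.from_pretrained(
    "your-username/shichigoro-diffusion", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = pipe("shichigoro, woman, detailed iris, sharp focus").images[0]
image.save("shichigoro.png")
```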
This model was trained using the diffusers-based DreamBooth training by ShivamShrirao, with prior-preservation loss and the _train-text-encoder_ flag, for 800 steps.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Amalq/roberta-base-finetuned-schizophreniaReddit2 | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: try-reinforce-cartpole-custom-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 115.70 +/- 4.03
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
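Under the hood, Reinforce updates the policy with Monte-Carlo returns collected from full episodes. A minimal sketch of one update step, assuming a PyTorch policy whose sampled-action `log_probs` and per-step `rewards` were stored during the episode:
```python
import torch

def reinforce_update(log_probs, rewards, optimizer, gamma=0.99):
    """One REINFORCE step from a finished episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):              # discounted return-to-go
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # variance reduction
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```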
|
AmanPriyanshu/DistilBert-Sentiment-Analysis | [
"tf",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"DistilBertForMaskedLM"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: cc-by-nc-3.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuning-sentiment-model-bert-multilingual
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-bert-multilingual
This model is a fine-tuned version of [QCRI/bert-base-multilingual-cased-pos-english](https://huggingface.co/QCRI/bert-base-multilingual-cased-pos-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9412
- Accuracy: 0.6624
- F1: 0.6624
- Precision: 0.6624
- Recall: 0.6624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AndreLiu1225/t5-news | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 18 | null | ---
tags:
- generated_from_trainer
model-index:
- name: kogpt2test-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kogpt2test-finetuned-wikitext2
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 227 | 3.6688 |
| No log | 2.0 | 454 | 3.6688 |
| 2.9687 | 3.0 | 681 | 3.6688 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Andrey1989/mbart-finetuned-en-to-kk | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/wavymulder/overlord-diffusion-HN/resolve/main/images/char_eximg.jpg"
---
**Overlord Diffusion - Hypernetwork**

[*DOWNLOAD LINK*](https://huggingface.co/wavymulder/overlord-diffusion-HN/resolve/main/overlord%20-%20public%20version%201.0.pt) - This is a hypernet trained on screenshots of the anime Overlord.
In your prompt, use the activation token: `overlord screencap anime`
Designed to be used with Stable Diffusion 1.5; it may work with other models but could require extra prompting.


I really love how armour looks with this hypernetwork. It is not currently trained on any actual characters from the anime, and it naturally struggles with modern clothes and settings. It makes cool skeletons, but not Ainz (a goal for future versions is to add the main cast members). |
Andrija/RobertaFastBPE | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-12T19:13:31Z | ---
license: mit
---
### brime on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### model by samj
This is the Stable Diffusion model fine-tuned on the brime concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **prplbrime**
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:
prplbrime




|
Andrija/SRoBERTa-NLP | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9506451612903226
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3448
- Accuracy: 0.9506
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.4007 | 1.0 | 318 | 2.5187 | 0.7490 |
| 1.93 | 2.0 | 636 | 1.2663 | 0.8606 |
| 0.9765 | 3.0 | 954 | 0.6825 | 0.9165 |
| 0.5424 | 4.0 | 1272 | 0.4728 | 0.9361 |
| 0.3632 | 5.0 | 1590 | 0.3989 | 0.9439 |
| 0.289 | 6.0 | 1908 | 0.3729 | 0.9458 |
| 0.2521 | 7.0 | 2226 | 0.3561 | 0.9494 |
| 0.2325 | 8.0 | 2544 | 0.3503 | 0.9490 |
| 0.2216 | 9.0 | 2862 | 0.3474 | 0.9487 |
| 0.2175 | 10.0 | 3180 | 0.3448 | 0.9506 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
Andrija/SRoBERTa-XL-NER | [
"pytorch",
"roberta",
"token-classification",
"hr",
"sr",
"multilingual",
"dataset:hr500k",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/imyawnny/1668282121358/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1468088681063931909/D3wxUSZI_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">bigbootydumptruck</div>
<div style="text-align: center; font-size: 14px;">@imyawnny</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from bigbootydumptruck.
| Data | bigbootydumptruck |
| --- | --- |
| Tweets downloaded | 1025 |
| Retweets | 139 |
| Short tweets | 238 |
| Tweets kept | 648 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1r9oa3i1/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @imyawnny's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/66u27v0w) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/66u27v0w/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/imyawnny')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Ankitha/DialoGPT-small-harrypottery | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/bet365/1668290987822/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1514571570630569989/z0NAzgOD_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">bet365</div>
<div style="text-align: center; font-size: 14px;">@bet365</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from bet365.
| Data | bet365 |
| --- | --- |
| Tweets downloaded | 3248 |
| Retweets | 42 |
| Short tweets | 9 |
| Tweets kept | 3197 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/545q6umu/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bet365's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2hex0umv) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2hex0umv/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bet365')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Ann2020/distilbert-base-uncased-finetuned-ner | [
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"dataset:conll2003",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
Trained on around 100 images at 768x768 resolution.
Download "ComplexLA Style.ckpt" and add it to your model folder.
Use prompt: ComplexLA style
Use a resolution near 768x768; lower resolutions work, but quality will not be as good.







|
Anonymous0230/model_name | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- spacy
- token-classification
language:
- pt
license: cc-by-sa-4.0
model-index:
- name: pt_core_news_trf
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9274830806
- name: NER Recall
type: recall
value: 0.9293805645
- name: NER F Score
type: f_score
value: 0.9284308531
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9782369668
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9780853081
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9611192205
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.9735006445
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.9283559578
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.8965578424
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9388275276
---
Portuguese transformer pipeline ([neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased)). Components: transformer, ner, tagger, morphologizer, trainable_lemmatizer, parser.
| Feature | Description |
| --- | --- |
| **Name** | `pt_core_news_trf` |
| **Version** | `3.4.0` |
| **spaCy** | `>=3.4.3,<3.5.0` |
| **Default Pipeline** | `transformer`, `ner`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser` |
| **Components** | `transformer`, `ner`, `tagger`, `morphologizer`, `trainable_lemmatizer`, `parser` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [UD Portuguese Bosque v2.8](https://github.com/UniversalDependencies/UD_Portuguese-Bosque) (Rademaker, Alexandre; Freitas, Cláudia; de Souza, Elvis; Silveira, Aline; Cavalcanti, Tatiana; Evelyn, Wograine; Rocha, Luisa; Soares-Bastos, Isabela; Bick, Eckhard; Chalub, Fabricio; Paulino-Passos, Guilherme; Real, Livy; de Paiva, Valeria; Zeman, Daniel; Popel, Martin; Mareček, David; Silveira, Natalia; Martins, André)<br />[WikiNER](https://figshare.com/articles/Learning_multilingual_named_entity_recognition_from_Wikipedia/5462500) (Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, James R Curran) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Maicon Domingues](http://nlp.rocks) |
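### How to use
A minimal usage sketch, assuming the packaged pipeline has been installed so that `spacy.load` can resolve it:
```python
import spacy

nlp = spacy.load("pt_core_news_trf")
doc = nlp("O Museu Nacional fica no Rio de Janeiro.")

for ent in doc.ents:          # named entities: LOC, MISC, ORG, PER
    print(ent.text, ent.label_)
for token in doc:             # per-token POS tags and lemmas
    print(token.text, token.pos_, token.lemma_)
```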
### Label Scheme
<details>
<summary>View label scheme (742 labels for 4 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `LOC`, `MISC`, `ORG`, `PER` |
| **`tagger`** | `ADJ`, `ADJ_ADJ`, `ADJ_NOUN`, `ADP`, `ADP_ADV`, `ADP_DET`, `ADP_NUM`, `ADP_PRON`, `ADP_PROPN`, `ADV`, `ADV_PRON`, `AUX`, `AUX_PRON`, `CCONJ`, `CCONJ_PRON`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PART_NOUN`, `PART_NUM`, `PRON`, `PROPN`, `PROPN_PROPN`, `PUNCT`, `SCONJ`, `SCONJ_DET`, `SCONJ_PRON`, `SYM`, `VERB`, `VERB_PRON`, `VERB_PRON_PRON`, `VERB_SCONJ`, `X` |
| **`morphologizer`** | `Gender=Masc\|Number=Sing\|POS=PROPN`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `POS=ADV`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=ADJ\|Typo=Yes`, `POS=PUNCT`, `POS=VERB\|VerbForm=Ger`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `NumType=Card\|POS=NUM`, `POS=SYM`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=CCONJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=SCONJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=VERB\|VerbForm=Inf`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV\|Polarity=Neg`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `POS=ADP`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `POS=AUX\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `ExtPos=CCONJ\|POS=ADV`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=AUX\|VerbForm=Ger`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, `Mood=Sub\|Number=Sing\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `POS=VERB\|VerbForm=Part`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf`, `ExtPos=NOUN\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `ExtPos=ADP\|POS=ADV`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, 
`Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part`, `ExtPos=CCONJ\|POS=CCONJ`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Prs`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=AUX\|VerbForm=Part`, `Number=Plur\|POS=AUX\|Person=3\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `ExtPos=INTJ\|POS=AUX`, `Number=Sing\|POS=DET\|PronType=Art`, `NumType=Card\|Number=Sing\|POS=NUM`, `ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Plur\|POS=VERB\|Person=3\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=NOUN\|Typo=Yes`, `ExtPos=SCONJ\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, `Case=Acc\|POS=PRON\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=PROPN`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Dem`, `ExtPos=SCONJ\|POS=ADV`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `ExtPos=PROPN\|Number=Sing\|POS=PROPN`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Abbr=Yes\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Number=Sing\|POS=AUX\|Person=3\|VerbForm=Inf`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Art`, 
`ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|POS=PRON\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art\|Typo=Yes`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Definite=Def\|ExtPos=ADV\|Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Inf`, `ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=NOUN`, `ExtPos=CCONJ\|POS=VERB\|VerbForm=Ger`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `ExtPos=ADV\|POS=ADP`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Abbr=Yes\|ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `ExtPos=SCONJ\|POS=SCONJ`, `Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Inf`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art\|Typo=Yes`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=ADJ`, `ExtPos=NOUN\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `ExtPos=PROPN\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Gender=Fem\|Number=Plur\|POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `ExtPos=SCONJ\|POS=ADP`, 
`Definite=Ind\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=PROPN\|PronType=Art`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `ExtPos=NOUN\|POS=ADP`, `Gender=Masc\|NumType=Mult\|Number=Sing\|POS=NUM`, `ExtPos=ADV\|POS=ADV`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Emp`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=NOUN\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|POS=PRON\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `ExtPos=NOUN\|POS=X`, `POS=X`, `ExtPos=NOUN\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Dem`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Emp`, `Gender=Masc\|Number=Sing\|POS=DET`, `ExtPos=ADP\|POS=ADP`, `POS=NOUN`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=NOUN`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `ExtPos=AUX\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Typo=Yes\|VerbForm=Inf`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pqp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=ADV\|PronType=Ind`, `POS=ADV\|Typo=Yes`, `Abbr=Yes\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=SCONJ\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `POS=PRON\|PronType=Rel`, `ExtPos=ADV\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Definite=Def\|ExtPos=CCONJ\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Definite=Def\|ExtPos=SCONJ\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, 
`Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=ADJ\|Voice=Pass`, `Number=Sing\|POS=ADJ`, `ExtPos=ADV\|Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=DET`, `Case=Acc\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `POS=INTJ`, `Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `ExtPos=ADV\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Degree=Cmp\|POS=ADV`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Part`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `ExtPos=CCONJ\|POS=ADP`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=PROPN\|PronType=Art`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Pass`, `POS=DET\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `ExtPos=NOUN\|Gender=Masc\|Number=Sing\|POS=X`, `Case=Acc\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, `POS=SCONJ\|VerbForm=Ger`, `Abbr=Yes\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Neg`, `ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=NUM`, `Number=Sing\|POS=NUM`, `Gender=Masc\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Mood=Cnd\|Number=Sing\|POS=VERB\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=DET`, `ExtPos=PROPN\|Gender=Masc\|Number=Plur\|POS=PROPN`, `ExtPos=AUX\|POS=VERB\|VerbForm=Inf`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Int`, `ExtPos=ADJ\|POS=X`, `Gender=Fem\|Number=Sing\|POS=X`, `Abbr=Yes\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=PRON`, `Number=Sing\|POS=ADP`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Art\|Typo=Yes`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel\|Typo=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, 
`Mood=Sub\|Number=Sing\|POS=VERB\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Abbr=Yes\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Fem\|POS=PRON\|PronType=Prs`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art\|Typo=Yes`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=SCONJ\|PronType=Art`, `Case=Dat\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art\|Typo=Yes`, `ExtPos=AUX\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art\|Typo=Yes`, `NumType=Ord\|POS=ADJ`, `Gender=Masc\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `ExtPos=NOUN\|Gender=Masc\|Number=Sing\|POS=PROPN`, `ExtPos=PROPN\|Gender=Masc\|POS=PROPN`, `Gender=Masc\|POS=PROPN`, `Gender=Fem\|Number=Plur\|POS=DET`, `ExtPos=ADJ\|POS=ADP`, `ExtPos=ADJ\|POS=ADV`, `Gender=Masc\|Number=Plur\|POS=PRON`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art\|Typo=Yes`, `ExtPos=ADP\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=SCONJ\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `ExtPos=NOUN\|POS=ADV`, `Gender=Fem\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `ExtPos=NOUN\|Gender=Fem\|Number=Plur\|POS=NOUN`, `ExtPos=CCONJ\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Int`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Plur\|POS=AUX\|Person=1\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `ExtPos=ADV\|POS=X`, `Gender=Masc\|Number=Sing\|POS=X`, `POS=NUM`, `ExtPos=NOUN\|NumType=Ord\|POS=NUM`, `Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `ExtPos=AUX\|POS=VERB\|VerbForm=Ger`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|POS=VERB\|PronType=Prs\|VerbForm=Ger`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, 
`Gender=Masc\|Number=Plur\|POS=DET\|PronType=Emp`, `Number=Plur\|POS=VERB\|Person=1\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Rel`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `ExtPos=NOUN\|NumType=Card\|POS=PART`, `ExtPos=NUM\|Gender=Masc\|NumType=Frac\|Number=Sing\|POS=NUM`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|ExtPos=SCONJ\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `ExtPos=NOUN\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=CCONJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Definite=Def\|ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Definite=Def\|ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|Gender=Fem\|Number=Plur\|POS=NOUN`, `NumType=Card\|POS=ADP`, `ExtPos=AUX\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Def\|ExtPos=ADV\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pqp\|VerbForm=Fin`, `Abbr=Yes\|ExtPos=PROPN\|Gender=Masc\|Number=Sing\|POS=PROPN`, `NumType=Ord\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=SCONJ\|Person=3\|PronType=Prs`, `ExtPos=PROPN\|POS=X`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `ExtPos=NOUN\|POS=NOUN`, `Number=Sing\|POS=PRON\|PronType=Tot`, `Number=Sing\|POS=DET\|PronType=Rel`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Art`, `POS=PRON\|PronType=Int`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, 
`Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `ExtPos=AUX\|POS=VERB\|VerbForm=Part`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `ExtPos=ADP\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Number=Plur\|POS=ADJ`, `Definite=Def\|POS=ADP\|PronType=Art`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `ExtPos=NOUN\|Gender=Masc\|NumType=Frac\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Definite=Def\|POS=SCONJ\|PronType=Art`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|POS=PRON\|PronType=Ind`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=NOUN\|Voice=Pass`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `ExtPos=AUX\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=PART`, `Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=ADV`, `Case=Dat\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Ger`, `NumType=Card\|POS=DET`, `Case=Dat\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, 
`Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=AUX\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `POS=PRON\|PronType=Prs`, `ExtPos=PROPN\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Imp\|VerbForm=Fin`, `ExtPos=ADV\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Sing\|POS=PROPN\|PronType=Art`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=ADJ\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Plur\|POS=AUX\|Person=1\|Tense=Past`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADV\|Person=3\|PronType=Prs`, `POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `ExtPos=SCONJ\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=NOUN\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Dat\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADV\|Typo=Yes`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Rel`, `Gender=Masc\|Number=Sing\|POS=SCONJ`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `ExtPos=ADP\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `ExtPos=CCONJ\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Dem`, 
`Definite=Def\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `ExtPos=ADV\|Gender=Masc\|Number=Sing\|POS=ADP`, `ExtPos=AUX\|Gender=Masc\|Number=Sing\|POS=VERB\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pqp\|VerbForm=Fin`, `Case=Acc,Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `POS=DET`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Emp`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Art`, `Case=Acc\|Gender=Masc\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=ADP\|PronType=Ind`, `Definite=Def\|ExtPos=SCONJ\|Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN\|Typo=Yes`, `ExtPos=PROPN\|POS=ADV`, `Case=Acc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `ExtPos=PROPN\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=VERB\|Person=3\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=2\|PronType=Prs\|VerbForm=Inf`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `NumType=Card\|POS=DET\|PronType=Art`, `Gender=Fem,Masc\|Number=Sing\|POS=PROPN`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `POS=PRON\|PronType=Neg`, `Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Dem`, `ExtPos=AUX\|Gender=Masc\|Number=Plur\|POS=VERB\|VerbForm=Part`, `ExtPos=ADJ\|Gender=Fem\|Number=Sing\|POS=X`, `Gender=Fem\|Number=Plur\|POS=NUM`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=SCONJ\|PronType=Art`, `Case=Dat\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|NumType=Sets\|Number=Sing\|POS=NUM`, `POS=ADV\|PronType=Rel`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Foreign=Yes\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|POS=AUX\|PronType=Prs\|VerbForm=Inf`, `ExtPos=INTJ\|POS=ADV\|Polarity=Neg`, `POS=AUX`, `Gender=Masc\|Number=Plur\|POS=NUM`, `Number=Sing\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=PRON\|PronType=Int`, `Abbr=Yes\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=VERB\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=ADP\|PronType=Ind`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art\|Typo=Yes`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, 
`ExtPos=AUX\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Art\|Typo=Yes`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Pass`, `ExtPos=NUM\|NumType=Mult\|POS=NUM`, `ExtPos=AUX\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|POS=VERB\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `NumType=Card\|Number=Plur\|POS=NUM`, `ExtPos=AUX\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `ExtPos=NUM\|NumType=Card\|POS=NUM`, `POS=VERB`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=SCONJ\|PronType=Rel`, `Case=Acc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Typo=Yes\|VerbForm=Part`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|Typo=Yes\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADV\|Polarity=Neg`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf`, `ExtPos=NOUN\|Number=Sing\|POS=PROPN`, `ExtPos=ADP\|POS=DET`, `ExtPos=ADP\|Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Art`, `Abbr=Yes\|ExtPos=PROPN\|Number=Sing\|POS=PROPN`, `ExtPos=AUX\|Gender=Fem\|Number=Sing\|POS=VERB\|VerbForm=Part`, `ExtPos=SCONJ\|Gender=Fem\|Number=Sing\|POS=ADV\|PronType=Ind`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Art`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `ExtPos=PROPN\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|PronType=Prs\|VerbForm=Inf`, `Number=Sing\|POS=DET\|PronType=Tot`, `NumType=Range\|POS=NUM`, `Case=Dat\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Rel`, `ExtPos=PROPN\|Gender=Masc\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Definite=Def\|ExtPos=PROPN\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Masc\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=X`, `ExtPos=NOUN\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, 
`Case=Acc\|Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Fut\|VerbForm=Fin`, `Abbr=Yes\|ExtPos=PROPN\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=1\|PronType=Prs\|VerbForm=Inf`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Person=1\|VerbForm=Inf\|Voice=Pass`, `Case=Acc\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=SCONJ\|PronType=Dem`, `ExtPos=SCONJ\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `NumType=Frac\|POS=NUM`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|Tense=Pres\|VerbForm=Fin`, `POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=ADP\|PronType=Ind`, `Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Acc\|Gender=Masc\|Mood=Ind\|Number=Plur,Sing\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Past\|VerbForm=Fin`, `ExtPos=AUX\|Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADV\|PronType=Rel`, `ExtPos=NOUN\|NumType=Card\|POS=NUM`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind\|Typo=Yes`, `Mood=Cnd\|POS=VERB\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1,3\|PronType=Prs\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|Tense=Imp\|VerbForm=Fin` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `discourse`, `expl`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:agent`, `parataxis`, `punct`, `xcomp` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 92.84 |
| `ENTS_P` | 92.75 |
| `ENTS_R` | 92.94 |
| `TAG_ACC` | 97.82 |
| `POS_ACC` | 97.81 |
| `MORPH_ACC` | 96.11 |
| `LEMMA_ACC` | 97.35 |
| `DEP_UAS` | 92.84 |
| `DEP_LAS` | 89.66 |
| `SENTS_P` | 93.49 |
| `SENTS_R` | 94.28 |
| `SENTS_F` | 93.88 | |
AnonymousSub/AR_SDR_HF_model_base | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | 2022-11-12T23:57:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6243
## Model description
More information needed
## Intended uses & limitations
More information needed
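As a rough usage sketch (the repo id below is an assumption taken from the card title — substitute the actual Hub id or a local checkpoint path), a question-answering fine-tune like this is typically queried through the standard `transformers` pipeline:
```python
from transformers import pipeline

# "distilbert-base-uncased-finetuned-squad" is assumed from the card title;
# replace it with the real repo id or a local path to the checkpoint.
qa = pipeline("question-answering", model="distilbert-base-uncased-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of distilbert-base-uncased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```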
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2733 | 1.0 | 1113 | 1.8881 |
| 1.5489 | 2.0 | 2226 | 1.6480 |
| 1.2799 | 3.0 | 3339 | 1.6243 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AnonymousSub/AR_bert-base-uncased | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
model-index:
- name: imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
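Since this is a sequence-classification fine-tune evaluated on accuracy, usage would typically look like the sketch below (the model id `imdb` is only a placeholder from the card's model-index name; substitute the real repo id or local path):
```python
from transformers import pipeline

# Placeholder model path taken from the card's model-index name.
classifier = pipeline("text-classification", model="imdb")

print(classifier("A beautifully shot film with a script that never quite lands."))
```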
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.3268 | 0.876 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AnonymousSub/AR_declutr | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
datasets:
- ProGamerGov/StableDiffusion-v1-5-Regularization-Images
---
**Min-Illust-Background-Diffusion**
This fine-tuned Stable Diffusion v1.5 model was trained for 2250 iterations with a batch size of 4, on a selection of artistic works by Sin Jong Hun. Training was performed using [ShivamShrirao/diffusers](https://github.com/ShivamShrirao/diffusers) with full precision, prior-preservation loss, the train-text-encoder feature, and the new [1.5 MSE VAE from Stability AI](https://huggingface.co/stabilityai/sd-vae-ft-mse). A total of 4120 regularization / class images were used from [here](https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images). Regularization images were generated using the prompt "artwork style", 50 DDIM steps, and a CFG of 7.
Use the tokens **sjh style** in your prompts for the effect. Note that the effect also occurs, at much weaker strength, in prompts that steer the output towards other specific artistic styles.
This model will likely not perform well at generating portraits and similar subjects, as the training data was composed primarily of landscapes.
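For completeness, a minimal `diffusers` sketch (the repo id is inferred from the image links below, and the prompt is purely illustrative):
```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id inferred from the image links in this card; adjust if loading locally.
pipe = StableDiffusionPipeline.from_pretrained(
    "ProGamerGov/Min-Illust-Background-Diffusion",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a misty mountain valley at dawn, sjh style",  # append the trained tokens
    num_inference_steps=50,
    guidance_scale=7,
).images[0]
image.save("sjh_landscape.png")
```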
<div align="center">
<img src="https://huggingface.co/ProGamerGov/Min-Illust-Background-Diffusion/resolve/main/v1_size_512x768_t3x4.png">
</div>
* [Full Image](https://huggingface.co/ProGamerGov/Min-Illust-Background-Diffusion/resolve/main/v1_size_512x768_t3x4.png)
<div align="center">
<img src="https://huggingface.co/ProGamerGov/Min-Illust-Background-Diffusion/resolve/main/v1_size_512x512_t4x10.png">
</div>
* [Full Image](https://huggingface.co/ProGamerGov/Min-Illust-Background-Diffusion/resolve/main/v1_size_512x512_t4x10.png)
<div align="center">
<img src="https://huggingface.co/ProGamerGov/Min-Illust-Background-Diffusion/resolve/main/v1_512x512_t4x5.png">
</div>
* [Full Image](https://huggingface.co/ProGamerGov/Min-Illust-Background-Diffusion/resolve/main/v1_512x512_t4x5.png)
Example images were generated with the v1 2250-iteration model using 50 steps of DPM++ 2M Karras with a prompt format of:
```
<prompt>, sjh style
```
|
AnonymousSub/AR_rule_based_bert_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- pyannote
- pyannote-audio
- pyannote-audio-pipeline
- audio
- voice
- speech
- speaker
- speaker-diarization
- speaker-change-detection
- voice-activity-detection
- overlapped-speech-detection
- automatic-speech-recognition
datasets:
- ami
- dihard
- voxconverse
- aishell
- repere
- voxceleb
license: mit
---
# 🎹 Speaker diarization
Relies on pyannote.audio 2.0: see [installation instructions](https://github.com/pyannote/pyannote-audio/tree/develop#installation).
## TL;DR
```python
# load the pipeline from the Hugging Face Hub
from pyannote.audio import Pipeline
pipeline = Pipeline.from_pretrained("pyannote/[email protected]")
# apply the pipeline to an audio file
diarization = pipeline("audio.wav")
# dump the diarization output to disk using RTTM format
with open("audio.rttm", "w") as rttm:
diarization.write_rttm(rttm)
```
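The returned `diarization` object is a pyannote `Annotation`; a common next step (a sketch, assuming the standard pyannote.audio 2.0 API) is to iterate over the speaker turns directly:
```python
# print one line per speaker turn: start time, end time, speaker label
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:.1f}s - {turn.end:.1f}s: {speaker}")
```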
## Advanced usage
In case the number of speakers is known in advance, one can use the `num_speakers` option:
```python
diarization = pipeline("audio.wav", num_speakers=2)
```
One can also provide lower and/or upper bounds on the number of speakers using `min_speakers` and `max_speakers` options:
```python
diarization = pipeline("audio.wav", min_speakers=2, max_speakers=5)
```
If you feel adventurous, you can try and play with the various pipeline hyper-parameters.
For instance, one can use more aggressive voice activity detection by increasing the value of the `segmentation_onset` threshold:
```python
hparams = pipeline.parameters(instantiated=True)
hparams["segmentation_onset"] += 0.1
pipeline.instantiate(hparams)
```
## Benchmark
### Real-time factor
Real-time factor is around 5% using one Nvidia Tesla V100 SXM2 GPU (for the neural inference part) and one Intel Cascade Lake 6248 CPU (for the clustering part).
In other words, it takes approximately 3 minutes to process a one-hour conversation.
### Accuracy
This pipeline is benchmarked on a growing collection of datasets.
Processing is fully automatic:
* no manual voice activity detection (as is sometimes the case in the literature)
* no manual number of speakers (though it is possible to provide it to the pipeline)
* no fine-tuning of the internal models nor tuning of the pipeline hyper-parameters to each dataset
... with the least forgiving diarization error rate (DER) setup (named *"Full"* in [this paper](https://doi.org/10.1016/j.csl.2021.101254)):
* no forgiveness collar
* evaluation of overlapped speech
| Benchmark | [DER%](. "Diarization error rate") | [FA%](. "False alarm rate") | [Miss%](. "Missed detection rate") | [Conf%](. "Speaker confusion rate") | Expected output | File-level evaluation |
| ---------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------- | --------------------------- | ---------------------------------- | ----------------------------------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------ |
| [AISHELL-4](http://www.openslr.org/111/) | 14.61 | 3.31 | 4.35 | 6.95 | [RTTM](reproducible_research/AISHELL.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/AISHELL.SpeakerDiarization.Full.test.eval) |
| [AMI *Mix-Headset*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 18.21 | 3.28 | 11.07 | 3.87 | [RTTM](reproducible_research/2022.07/AMI.SpeakerDiarization.only_words.test.rttm) | [eval](reproducible_research/2022.07/AMI.SpeakerDiarization.only_words.test.eval) |
| [AMI *Array1-01*](https://groups.inf.ed.ac.uk/ami/corpus/) [*only_words*](https://github.com/BUTSpeechFIT/AMI-diarization-setup) | 29.00 | 2.71 | 21.61 | 4.68 | [RTTM](reproducible_research/2022.07/AMI-SDM.SpeakerDiarization.only_words.test.rttm) | [eval](reproducible_research/2022.07/AMI-SDM.SpeakerDiarization.only_words.test.eval) |
| [CALLHOME](https://catalog.ldc.upenn.edu/LDC2001S97) [*Part2*](https://github.com/BUTSpeechFIT/CALLHOME_sublists/issues/1) | 30.24 | 3.71 | 16.86 | 9.66 | [RTTM](reproducible_research/2022.07/CALLHOME.SpeakerDiarization.CALLHOME.test.rttm) | [eval](reproducible_research/2022.07/CALLHOME.SpeakerDiarization.CALLHOME.test.eval) |
| [DIHARD 3 *Full*](https://arxiv.org/abs/2012.01477) | 20.99 | 4.25 | 10.74 | 6.00 | [RTTM](reproducible_research/2022.07/DIHARD.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/2022.07/DIHARD.SpeakerDiarization.Full.test.eval) |
| [REPERE *Phase 2*](https://islrn.org/resources/360-758-359-485-0/) | 12.62 | 1.55 | 3.30 | 7.76 | [RTTM](reproducible_research/2022.07/REPERE.SpeakerDiarization.Full.test.rttm) | [eval](reproducible_research/2022.07/REPERE.SpeakerDiarization.Full.test.eval) |
| [VoxConverse *v0.0.2*](https://github.com/joonson/voxconverse) | 12.76 | 3.45 | 3.85 | 5.46 | [RTTM](reproducible_research/2022.07/VoxConverse.SpeakerDiarization.VoxConverse.test.rttm) | [eval](reproducible_research/2022.07/VoxConverse.SpeakerDiarization.VoxConverse.test.eval) |
## Support
For commercial enquiries and scientific consulting, please contact [me](mailto:[email protected]).
For [technical questions](https://github.com/pyannote/pyannote-audio/discussions) and [bug reports](https://github.com/pyannote/pyannote-audio/issues), please check [pyannote.audio](https://github.com/pyannote/pyannote-audio) Github repository.
## Citations
```bibtex
@inproceedings{Bredin2021,
Title = {{End-to-end speaker segmentation for overlap-aware resegmentation}},
Author = {{Bredin}, Herv{\'e} and {Laurent}, Antoine},
Booktitle = {Proc. Interspeech 2021},
Address = {Brno, Czech Republic},
Month = {August},
Year = {2021},
}
```
```bibtex
@inproceedings{Bredin2020,
Title = {{pyannote.audio: neural building blocks for speaker diarization}},
Author = {{Bredin}, Herv{\'e} and {Yin}, Ruiqing and {Coria}, Juan Manuel and {Gelly}, Gregory and {Korshunov}, Pavel and {Lavechin}, Marvin and {Fustes}, Diego and {Titeux}, Hadrien and {Bouaziz}, Wassim and {Gill}, Marie-Philippe},
Booktitle = {ICASSP 2020, IEEE International Conference on Acoustics, Speech, and Signal Processing},
Address = {Barcelona, Spain},
Month = {May},
Year = {2020},
}
```
|
AnonymousSub/AR_rule_based_roberta_bert_triplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/bookingcom/1668303763939/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1323220178574938113/SZK83dEL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Booking.com</div>
<div style="text-align: center; font-size: 14px;">@bookingcom</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Booking.com.
| Data | Booking.com |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 15 |
| Tweets kept | 3235 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/s8f2y1by/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bookingcom's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2ksjpd3c) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2ksjpd3c/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bookingcom')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3587
## Model description
More information needed
## Intended uses & limitations
More information needed
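As a usage sketch (the model id `pegasus-samsum` is an assumption based on the card title — substitute the actual repo id or a local path), the checkpoint can be driven through the summarization pipeline:
```python
from transformers import pipeline

# Model path assumed from the card title; replace with the real repo id.
summarizer = pipeline("summarization", model="pegasus-samsum")

dialogue = (
    "Anna: Are we still on for lunch tomorrow?\n"
    "Ben: Yes! 12:30 at the usual place?\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```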
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
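These settings correspond to an effective train batch size of 16 (per-device batch 1 × 16 accumulation steps). A hypothetical `Seq2SeqTrainingArguments` reconstruction is sketched below (the output path is an assumption; the original training script is not shown):
```python
from transformers import Seq2SeqTrainingArguments

# Effective train batch = 1 (per device) x 16 (accumulation) = 16,
# matching the total_train_batch_size reported above.
args = Seq2SeqTrainingArguments(
    output_dir="pegasus-samsum",      # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3,
    seed=42,
)
```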
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6942 | 0.54 | 500 | 1.4832 |
| 1.4133 | 1.09 | 1000 | 1.4111 |
| 1.5088 | 1.63 | 1500 | 1.3778 |
| 1.4368 | 2.17 | 2000 | 1.3645 |
| 1.4041 | 2.72 | 2500 | 1.3587 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AnonymousSub/AR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/lockheedmartin/1668307132890/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1067242179955896320/mKdx6PgL_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Lockheed Martin</div>
<div style="text-align: center; font-size: 14px;">@lockheedmartin</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Lockheed Martin.
| Data | Lockheed Martin |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 482 |
| Short tweets | 52 |
| Tweets kept | 2711 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/c8nhjq27/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lockheedmartin's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2h80t679) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2h80t679/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lockheedmartin')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/AR_rule_based_twostagequadruplet_hier_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2022-11-13T02:53:16Z | ---
language: en
thumbnail: http://www.huggingtweets.com/officialuom/1668308017702/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1433363936854880264/SO3O-Jle_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The University of Manchester</div>
<div style="text-align: center; font-size: 14px;">@officialuom</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The University of Manchester.
| Data | The University of Manchester |
| --- | --- |
| Tweets downloaded | 3247 |
| Retweets | 429 |
| Short tweets | 143 |
| Tweets kept | 2675 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3i3q53v0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @officialuom's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/22shuuiy) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/22shuuiy/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/officialuom')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/EManuals_RoBERTa_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: khasrul-alam/banglabert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# khasrul-alam/banglabert-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.8513
- Train End Logits Accuracy: 0.0
- Train Start Logits Accuracy: 0.0
- Validation Loss: 5.8678
- Validation End Logits Accuracy: 0.0
- Validation Start Logits Accuracy: 0.0
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 6, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
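For reference, the optimizer above can be reconstructed in Keras like this (all values are copied verbatim from the config):
```python
import tensorflow as tf

# Values copied verbatim from the optimizer config above.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=6,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08
)
```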
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 5.9297 | 0.0 | 0.0208 | 5.9075 | 0.0 | 0.0 | 0 |
| 5.8513 | 0.0 | 0.0 | 5.8678 | 0.0 | 0.0 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AnonymousSub/SR_rule_based_hier_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: lmv2-g-receipts2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lmv2-g-receipts2
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2756
- Purchase Time Precision: 0.9180
- Purchase Time Recall: 0.875
- Purchase Time F1: 0.8960
- Purchase Time Number: 128
- Receipt Date Precision: 0.8941
- Receipt Date Recall: 0.8994
- Receipt Date F1: 0.8968
- Receipt Date Number: 169
- Sub Total Precision: 0.8673
- Sub Total Recall: 0.7727
- Sub Total F1: 0.8173
- Sub Total Number: 110
- Supplier Address Precision: 0.7097
- Supplier Address Recall: 0.7719
- Supplier Address F1: 0.7395
- Supplier Address Number: 114
- Supplier Name Precision: 0.7159
- Supplier Name Recall: 0.7079
- Supplier Name F1: 0.7119
- Supplier Name Number: 267
- Tip Amount Precision: 0.6667
- Tip Amount Recall: 1.0
- Tip Amount F1: 0.8
- Tip Amount Number: 2
- Total Precision: 0.8978
- Total Recall: 0.9126
- Total F1: 0.9051
- Total Number: 183
- Total Tax Amount Precision: 0.8644
- Total Tax Amount Recall: 0.7846
- Total Tax Amount F1: 0.8226
- Total Tax Amount Number: 65
- Overall Precision: 0.8246
- Overall Recall: 0.8150
- Overall F1: 0.8198
- Overall Accuracy: 0.9749
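A minimal inference sketch for this kind of LayoutLMv2 token classifier (the repo id is a placeholder, and the processor additionally needs `detectron2` and `pytesseract` installed for visual features and OCR):
```python
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# Placeholder repo id; the processor comes from the base model this card names.
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("<repo-id>/lmv2-g-receipts2")

image = Image.open("receipt.png").convert("RGB")  # hypothetical input file
encoding = processor(image, return_tensors="pt")
predictions = model(**encoding).logits.argmax(-1).squeeze().tolist()
labels = [model.config.id2label[p] for p in predictions]
```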
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Purchase Time Precision | Purchase Time Recall | Purchase Time F1 | Purchase Time Number | Receipt Date Precision | Receipt Date Recall | Receipt Date F1 | Receipt Date Number | Sub Total Precision | Sub Total Recall | Sub Total F1 | Sub Total Number | Supplier Address Precision | Supplier Address Recall | Supplier Address F1 | Supplier Address Number | Supplier Name Precision | Supplier Name Recall | Supplier Name F1 | Supplier Name Number | Tip Amount Precision | Tip Amount Recall | Tip Amount F1 | Tip Amount Number | Total Precision | Total Recall | Total F1 | Total Number | Total Tax Amount Precision | Total Tax Amount Recall | Total Tax Amount F1 | Total Tax Amount Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:----------------------:|:-------------------:|:---------------:|:-------------------:|:-------------------:|:----------------:|:------------:|:----------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:-----------------------:|:--------------------:|:----------------:|:--------------------:|:--------------------:|:-----------------:|:-------------:|:-----------------:|:---------------:|:------------:|:--------:|:------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.972 | 1.0 | 793 | 0.4257 | 0.3830 | 0.1406 | 0.2057 | 128 | 0.0 | 0.0 | 0.0 | 169 | 0.0 | 0.0 | 0.0 | 110 | 0.3973 | 0.5088 | 0.4462 | 114 | 0.5263 | 0.3745 | 0.4376 | 267 | 0.0 | 0.0 | 0.0 | 2 | 0.4096 | 0.7923 | 0.5400 | 183 | 0.0 | 0.0 | 0.0 | 65 | 0.4355 | 0.3092 | 0.3617 | 0.9296 |
| 0.2924 | 2.0 | 1586 | 0.2379 | 0.9259 | 0.7812 | 0.8475 | 128 | 0.8182 | 0.7456 | 0.7802 | 169 | 0.8966 | 0.2364 | 0.3741 | 110 | 0.5571 | 0.6842 | 0.6142 | 114 | 0.6584 | 0.5993 | 0.6275 | 267 | 0.0 | 0.0 | 0.0 | 2 | 0.7042 | 0.8197 | 0.7576 | 183 | 1.0 | 0.0462 | 0.0882 | 65 | 0.7225 | 0.6195 | 0.6670 | 0.9630 |
| 0.1611 | 3.0 | 2379 | 0.1756 | 0.8138 | 0.9219 | 0.8645 | 128 | 0.8020 | 0.9349 | 0.8634 | 169 | 0.7064 | 0.7 | 0.7032 | 110 | 0.5733 | 0.7544 | 0.6515 | 114 | 0.7308 | 0.6404 | 0.6826 | 267 | 0.0 | 0.0 | 0.0 | 2 | 0.8057 | 0.7705 | 0.7877 | 183 | 0.7258 | 0.6923 | 0.7087 | 65 | 0.7425 | 0.7669 | 0.7545 | 0.9670 |
| 0.1013 | 4.0 | 3172 | 0.1557 | 0.9099 | 0.7891 | 0.8452 | 128 | 0.8659 | 0.8402 | 0.8529 | 169 | 0.8493 | 0.5636 | 0.6776 | 110 | 0.5970 | 0.7018 | 0.6452 | 114 | 0.6603 | 0.6479 | 0.6541 | 267 | 0.0 | 0.0 | 0.0 | 2 | 0.8371 | 0.8142 | 0.8255 | 183 | 0.8654 | 0.6923 | 0.7692 | 65 | 0.7721 | 0.7245 | 0.7475 | 0.9692 |
| 0.0684 | 5.0 | 3965 | 0.1623 | 0.7718 | 0.8984 | 0.8303 | 128 | 0.7949 | 0.9172 | 0.8516 | 169 | 0.7131 | 0.7909 | 0.75 | 110 | 0.5705 | 0.7807 | 0.6593 | 114 | 0.6887 | 0.6629 | 0.6756 | 267 | 0.0 | 0.0 | 0.0 | 2 | 0.8392 | 0.9126 | 0.8743 | 183 | 0.5816 | 0.8769 | 0.6994 | 65 | 0.7202 | 0.8160 | 0.7651 | 0.9661 |
| 0.0491 | 6.0 | 4758 | 0.1828 | 0.9008 | 0.8516 | 0.8755 | 128 | 0.8830 | 0.8935 | 0.8882 | 169 | 0.6846 | 0.8091 | 0.7417 | 110 | 0.5062 | 0.7105 | 0.5912 | 114 | 0.6729 | 0.6779 | 0.6754 | 267 | 0.0 | 0.0 | 0.0 | 2 | 0.8807 | 0.8470 | 0.8635 | 183 | 0.7361 | 0.8154 | 0.7737 | 65 | 0.7452 | 0.7890 | 0.7665 | 0.9673 |
| 0.043 | 7.0 | 5551 | 0.1825 | 0.9237 | 0.8516 | 0.8862 | 128 | 0.8807 | 0.9172 | 0.8986 | 169 | 0.7672 | 0.8091 | 0.7876 | 110 | 0.6279 | 0.7105 | 0.6667 | 114 | 0.7788 | 0.6330 | 0.6983 | 267 | 0.0 | 0.0 | 0.0 | 2 | 0.9045 | 0.8798 | 0.8920 | 183 | 0.6867 | 0.8769 | 0.7703 | 65 | 0.8073 | 0.7909 | 0.7990 | 0.9717 |
| 0.0325 | 8.0 | 6344 | 0.1645 | 0.875 | 0.875 | 0.875 | 128 | 0.8636 | 0.8994 | 0.8812 | 169 | 0.7288 | 0.7818 | 0.7544 | 110 | 0.6241 | 0.7719 | 0.6902 | 114 | 0.7085 | 0.7191 | 0.7138 | 267 | 0.0 | 0.0 | 0.0 | 2 | 0.8367 | 0.8962 | 0.8654 | 183 | 0.6344 | 0.9077 | 0.7468 | 65 | 0.7596 | 0.8218 | 0.7894 | 0.9711 |
| 0.0276 | 9.0 | 7137 | 0.1761 | 0.9160 | 0.8516 | 0.8826 | 128 | 0.8706 | 0.8757 | 0.8732 | 169 | 0.8861 | 0.6364 | 0.7407 | 110 | 0.6045 | 0.7105 | 0.6532 | 114 | 0.6689 | 0.7416 | 0.7034 | 267 | 0.0 | 0.0 | 0.0 | 2 | 0.8299 | 0.8798 | 0.8541 | 183 | 0.9268 | 0.5846 | 0.7170 | 65 | 0.7793 | 0.7755 | 0.7774 | 0.9705 |
| 0.0237 | 10.0 | 7930 | 0.1842 | 0.8473 | 0.8672 | 0.8571 | 128 | 0.8613 | 0.8817 | 0.8713 | 169 | 0.7607 | 0.8091 | 0.7841 | 110 | 0.6569 | 0.7895 | 0.7171 | 114 | 0.7189 | 0.6704 | 0.6938 | 267 | 0.0 | 0.0 | 0.0 | 2 | 0.8729 | 0.8634 | 0.8681 | 183 | 0.7794 | 0.8154 | 0.7970 | 65 | 0.7850 | 0.7987 | 0.7918 | 0.9709 |
| 0.0229 | 11.0 | 8723 | 0.1811 | 0.9167 | 0.8594 | 0.8871 | 128 | 0.8929 | 0.8876 | 0.8902 | 169 | 0.75 | 0.7909 | 0.7699 | 110 | 0.6 | 0.7368 | 0.6614 | 114 | 0.6958 | 0.6854 | 0.6906 | 267 | 1.0 | 0.5 | 0.6667 | 2 | 0.8639 | 0.9016 | 0.8824 | 183 | 0.8525 | 0.8 | 0.8254 | 65 | 0.7849 | 0.8015 | 0.7931 | 0.9713 |
| 0.017 | 12.0 | 9516 | 0.2075 | 0.8906 | 0.8906 | 0.8906 | 128 | 0.7727 | 0.9053 | 0.8338 | 169 | 0.8218 | 0.7545 | 0.7867 | 110 | 0.6042 | 0.7632 | 0.6744 | 114 | 0.6830 | 0.6779 | 0.6805 | 267 | 1.0 | 0.5 | 0.6667 | 2 | 0.8429 | 0.8798 | 0.8610 | 183 | 0.8281 | 0.8154 | 0.8217 | 65 | 0.7628 | 0.8025 | 0.7822 | 0.9696 |
| 0.0149 | 13.0 | 10309 | 0.1781 | 0.8760 | 0.8828 | 0.8794 | 128 | 0.8786 | 0.8994 | 0.8889 | 169 | 0.8660 | 0.7636 | 0.8116 | 110 | 0.6357 | 0.7193 | 0.6749 | 114 | 0.7154 | 0.6779 | 0.6962 | 267 | 0.5 | 1.0 | 0.6667 | 2 | 0.9138 | 0.8689 | 0.8908 | 183 | 0.8793 | 0.7846 | 0.8293 | 65 | 0.8102 | 0.7938 | 0.8019 | 0.9738 |
| 0.0124 | 14.0 | 11102 | 0.1957 | 0.9106 | 0.875 | 0.8924 | 128 | 0.8629 | 0.8935 | 0.8779 | 169 | 0.8586 | 0.7727 | 0.8134 | 110 | 0.5909 | 0.7982 | 0.6791 | 114 | 0.6823 | 0.7079 | 0.6949 | 267 | 1.0 | 0.5 | 0.6667 | 2 | 0.9191 | 0.8689 | 0.8933 | 183 | 0.8030 | 0.8154 | 0.8092 | 65 | 0.7875 | 0.8102 | 0.7987 | 0.9729 |
| 0.01 | 15.0 | 11895 | 0.2174 | 0.9098 | 0.8672 | 0.888 | 128 | 0.8817 | 0.8817 | 0.8817 | 169 | 0.8298 | 0.7091 | 0.7647 | 110 | 0.6587 | 0.7281 | 0.6917 | 114 | 0.6842 | 0.6816 | 0.6829 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.875 | 0.8798 | 0.8774 | 183 | 0.9783 | 0.6923 | 0.8108 | 65 | 0.8038 | 0.7813 | 0.7924 | 0.9726 |
| 0.0102 | 16.0 | 12688 | 0.2073 | 0.9106 | 0.875 | 0.8924 | 128 | 0.8276 | 0.8521 | 0.8397 | 169 | 0.7679 | 0.7818 | 0.7748 | 110 | 0.6378 | 0.7105 | 0.6722 | 114 | 0.6806 | 0.6704 | 0.6755 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8717 | 0.8907 | 0.8811 | 183 | 0.6962 | 0.8462 | 0.7639 | 65 | 0.7697 | 0.7919 | 0.7806 | 0.9705 |
| 0.0091 | 17.0 | 13481 | 0.2205 | 0.8358 | 0.875 | 0.8550 | 128 | 0.8306 | 0.8994 | 0.8636 | 169 | 0.6133 | 0.8364 | 0.7077 | 110 | 0.5944 | 0.7456 | 0.6615 | 114 | 0.6833 | 0.7191 | 0.7007 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8410 | 0.8962 | 0.8677 | 183 | 0.7297 | 0.8308 | 0.7770 | 65 | 0.7334 | 0.8218 | 0.7751 | 0.9680 |
| 0.0063 | 18.0 | 14274 | 0.2007 | 0.8527 | 0.8594 | 0.8560 | 128 | 0.8613 | 0.8817 | 0.8713 | 169 | 0.8283 | 0.7455 | 0.7847 | 110 | 0.6535 | 0.7281 | 0.6888 | 114 | 0.7520 | 0.6929 | 0.7212 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8730 | 0.9016 | 0.8871 | 183 | 0.7432 | 0.8462 | 0.7914 | 65 | 0.7998 | 0.8006 | 0.8002 | 0.9719 |
| 0.0075 | 19.0 | 15067 | 0.2173 | 0.925 | 0.8672 | 0.8952 | 128 | 0.8765 | 0.8817 | 0.8791 | 169 | 0.8113 | 0.7818 | 0.7963 | 110 | 0.7196 | 0.6754 | 0.6968 | 114 | 0.6982 | 0.7191 | 0.7085 | 267 | 1.0 | 0.5 | 0.6667 | 2 | 0.9080 | 0.8634 | 0.8852 | 183 | 0.8833 | 0.8154 | 0.848 | 65 | 0.8164 | 0.7967 | 0.8064 | 0.9733 |
| 0.0062 | 20.0 | 15860 | 0.2255 | 0.888 | 0.8672 | 0.8775 | 128 | 0.8613 | 0.8817 | 0.8713 | 169 | 0.9048 | 0.6909 | 0.7835 | 110 | 0.6718 | 0.7719 | 0.7184 | 114 | 0.7552 | 0.6816 | 0.7165 | 267 | 1.0 | 0.5 | 0.6667 | 2 | 0.9017 | 0.8525 | 0.8764 | 183 | 0.9074 | 0.7538 | 0.8235 | 65 | 0.8269 | 0.7823 | 0.8040 | 0.9733 |
| 0.0063 | 21.0 | 16653 | 0.2417 | 0.8952 | 0.8672 | 0.8810 | 128 | 0.8453 | 0.9053 | 0.8743 | 169 | 0.84 | 0.7636 | 0.8000 | 110 | 0.6917 | 0.7281 | 0.7094 | 114 | 0.7194 | 0.6816 | 0.7 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8901 | 0.8852 | 0.8877 | 183 | 0.7937 | 0.7692 | 0.7813 | 65 | 0.8060 | 0.7967 | 0.8014 | 0.9721 |
| 0.0045 | 22.0 | 17446 | 0.2069 | 0.8626 | 0.8828 | 0.8726 | 128 | 0.8830 | 0.8935 | 0.8882 | 169 | 0.7679 | 0.7818 | 0.7748 | 110 | 0.6462 | 0.7368 | 0.6885 | 114 | 0.7045 | 0.6966 | 0.7006 | 267 | 0.5 | 1.0 | 0.6667 | 2 | 0.8914 | 0.8525 | 0.8715 | 183 | 0.7361 | 0.8154 | 0.7737 | 65 | 0.7847 | 0.8006 | 0.7926 | 0.9721 |
| 0.0044 | 23.0 | 18239 | 0.2675 | 0.8760 | 0.8828 | 0.8794 | 128 | 0.8721 | 0.8876 | 0.8798 | 169 | 0.8155 | 0.7636 | 0.7887 | 110 | 0.6864 | 0.7105 | 0.6983 | 114 | 0.7588 | 0.6479 | 0.6990 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8983 | 0.8689 | 0.8833 | 183 | 0.7714 | 0.8308 | 0.8 | 65 | 0.8168 | 0.7861 | 0.8012 | 0.9711 |
| 0.0037 | 24.0 | 19032 | 0.2294 | 0.9032 | 0.875 | 0.8889 | 128 | 0.8848 | 0.8639 | 0.8743 | 169 | 0.8283 | 0.7455 | 0.7847 | 110 | 0.7097 | 0.7719 | 0.7395 | 114 | 0.6866 | 0.6891 | 0.6879 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8950 | 0.8852 | 0.8901 | 183 | 0.7826 | 0.8308 | 0.8060 | 65 | 0.8035 | 0.7996 | 0.8015 | 0.9733 |
| 0.0028 | 25.0 | 19825 | 0.2435 | 0.9310 | 0.8438 | 0.8852 | 128 | 0.8398 | 0.8994 | 0.8686 | 169 | 0.7870 | 0.7727 | 0.7798 | 110 | 0.5959 | 0.7632 | 0.6692 | 114 | 0.6679 | 0.6929 | 0.6801 | 267 | 0.5 | 1.0 | 0.6667 | 2 | 0.8601 | 0.9071 | 0.8830 | 183 | 0.7179 | 0.8615 | 0.7832 | 65 | 0.7625 | 0.8102 | 0.7856 | 0.9712 |
| 0.0031 | 26.0 | 20618 | 0.2441 | 0.9160 | 0.8516 | 0.8826 | 128 | 0.9036 | 0.8876 | 0.8955 | 169 | 0.8925 | 0.7545 | 0.8177 | 110 | 0.6667 | 0.7368 | 0.7 | 114 | 0.7323 | 0.6966 | 0.7140 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8817 | 0.8962 | 0.8889 | 183 | 0.8909 | 0.7538 | 0.8167 | 65 | 0.8262 | 0.7967 | 0.8112 | 0.9740 |
| 0.0022 | 27.0 | 21411 | 0.2598 | 0.9160 | 0.8516 | 0.8826 | 128 | 0.8728 | 0.8935 | 0.8830 | 169 | 0.8646 | 0.7545 | 0.8058 | 110 | 0.7025 | 0.7456 | 0.7234 | 114 | 0.7660 | 0.6742 | 0.7171 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8639 | 0.9016 | 0.8824 | 183 | 0.8833 | 0.8154 | 0.848 | 65 | 0.8305 | 0.7977 | 0.8138 | 0.9742 |
| 0.0027 | 28.0 | 22204 | 0.2239 | 0.8898 | 0.8828 | 0.8863 | 128 | 0.8817 | 0.8817 | 0.8817 | 169 | 0.8333 | 0.7727 | 0.8019 | 110 | 0.672 | 0.7368 | 0.7029 | 114 | 0.7216 | 0.6891 | 0.7050 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8956 | 0.8907 | 0.8932 | 183 | 0.8462 | 0.8462 | 0.8462 | 65 | 0.8130 | 0.8044 | 0.8087 | 0.9743 |
| 0.0028 | 29.0 | 22997 | 0.2268 | 0.8889 | 0.875 | 0.8819 | 128 | 0.8772 | 0.8876 | 0.8824 | 169 | 0.8119 | 0.7455 | 0.7773 | 110 | 0.6667 | 0.7368 | 0.7 | 114 | 0.7245 | 0.7191 | 0.7218 | 267 | 0.5 | 1.0 | 0.6667 | 2 | 0.8865 | 0.8962 | 0.8913 | 183 | 0.7761 | 0.8 | 0.7879 | 65 | 0.8019 | 0.8073 | 0.8046 | 0.9742 |
| 0.0023 | 30.0 | 23790 | 0.2654 | 0.9113 | 0.8828 | 0.8968 | 128 | 0.8935 | 0.8935 | 0.8935 | 169 | 0.82 | 0.7455 | 0.7810 | 110 | 0.6444 | 0.7632 | 0.6988 | 114 | 0.7570 | 0.7116 | 0.7336 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8649 | 0.8743 | 0.8696 | 183 | 0.8305 | 0.7538 | 0.7903 | 65 | 0.8137 | 0.8035 | 0.8085 | 0.9737 |
| 0.0018 | 31.0 | 24583 | 0.2678 | 0.9024 | 0.8672 | 0.8845 | 128 | 0.8824 | 0.8876 | 0.8850 | 169 | 0.8039 | 0.7455 | 0.7736 | 110 | 0.5503 | 0.7193 | 0.6236 | 114 | 0.7015 | 0.7041 | 0.7028 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8653 | 0.9126 | 0.8883 | 183 | 0.8793 | 0.7846 | 0.8293 | 65 | 0.7822 | 0.8025 | 0.7922 | 0.9717 |
| 0.0018 | 32.0 | 25376 | 0.2460 | 0.9174 | 0.8672 | 0.8916 | 128 | 0.8988 | 0.8935 | 0.8961 | 169 | 0.8224 | 0.8 | 0.8111 | 110 | 0.6860 | 0.7281 | 0.7064 | 114 | 0.7542 | 0.6779 | 0.7140 | 267 | 1.0 | 0.5 | 0.6667 | 2 | 0.8994 | 0.8798 | 0.8895 | 183 | 0.8448 | 0.7538 | 0.7967 | 65 | 0.8291 | 0.7948 | 0.8116 | 0.9742 |
| 0.0015 | 33.0 | 26169 | 0.2474 | 0.9098 | 0.8672 | 0.888 | 128 | 0.8663 | 0.8817 | 0.8739 | 169 | 0.8131 | 0.7909 | 0.8018 | 110 | 0.7 | 0.7368 | 0.7179 | 114 | 0.7214 | 0.7079 | 0.7146 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8817 | 0.8962 | 0.8889 | 183 | 0.7937 | 0.7692 | 0.7813 | 65 | 0.8085 | 0.8054 | 0.8069 | 0.9739 |
| 0.0006 | 34.0 | 26962 | 0.2690 | 0.9024 | 0.8672 | 0.8845 | 128 | 0.8844 | 0.9053 | 0.8947 | 169 | 0.8315 | 0.6727 | 0.7437 | 110 | 0.6667 | 0.7368 | 0.7 | 114 | 0.7391 | 0.7004 | 0.7192 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8870 | 0.8579 | 0.8722 | 183 | 0.8889 | 0.7385 | 0.8067 | 65 | 0.8185 | 0.7861 | 0.8020 | 0.9735 |
| 0.0038 | 35.0 | 27755 | 0.2565 | 0.912 | 0.8906 | 0.9012 | 128 | 0.8786 | 0.8994 | 0.8889 | 169 | 0.7757 | 0.7545 | 0.7650 | 110 | 0.6562 | 0.7368 | 0.6942 | 114 | 0.6794 | 0.7303 | 0.7040 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8962 | 0.8962 | 0.8962 | 183 | 0.8 | 0.8 | 0.8000 | 65 | 0.7907 | 0.8150 | 0.8027 | 0.9730 |
| 0.0008 | 36.0 | 28548 | 0.2583 | 0.8943 | 0.8594 | 0.8765 | 128 | 0.8655 | 0.8757 | 0.8706 | 169 | 0.7607 | 0.8091 | 0.7841 | 110 | 0.6829 | 0.7368 | 0.7089 | 114 | 0.7266 | 0.7266 | 0.7266 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8889 | 0.9180 | 0.9032 | 183 | 0.8125 | 0.8 | 0.8062 | 65 | 0.8021 | 0.8160 | 0.8090 | 0.9733 |
| 0.0008 | 37.0 | 29341 | 0.2733 | 0.8862 | 0.8516 | 0.8685 | 128 | 0.8663 | 0.8817 | 0.8739 | 169 | 0.7611 | 0.7818 | 0.7713 | 110 | 0.6324 | 0.7544 | 0.688 | 114 | 0.7148 | 0.7041 | 0.7094 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8830 | 0.9071 | 0.8949 | 183 | 0.8333 | 0.7692 | 0.8 | 65 | 0.7902 | 0.8054 | 0.7977 | 0.9729 |
| 0.0013 | 38.0 | 30134 | 0.2555 | 0.9322 | 0.8594 | 0.8943 | 128 | 0.8988 | 0.8935 | 0.8961 | 169 | 0.7395 | 0.8 | 0.7686 | 110 | 0.7395 | 0.7719 | 0.7554 | 114 | 0.7308 | 0.7116 | 0.7211 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8691 | 0.9071 | 0.8877 | 183 | 0.8197 | 0.7692 | 0.7937 | 65 | 0.8133 | 0.8141 | 0.8137 | 0.9744 |
| 0.0003 | 39.0 | 30927 | 0.2683 | 0.9174 | 0.8672 | 0.8916 | 128 | 0.8882 | 0.8935 | 0.8909 | 169 | 0.8190 | 0.7818 | 0.8000 | 110 | 0.6718 | 0.7719 | 0.7184 | 114 | 0.7154 | 0.7154 | 0.7154 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.9022 | 0.9071 | 0.9046 | 183 | 0.8772 | 0.7692 | 0.8197 | 65 | 0.8149 | 0.8141 | 0.8145 | 0.9744 |
| 0.0004 | 40.0 | 31720 | 0.2727 | 0.8889 | 0.875 | 0.8819 | 128 | 0.8817 | 0.8817 | 0.8817 | 169 | 0.8469 | 0.7545 | 0.7981 | 110 | 0.6822 | 0.7719 | 0.7243 | 114 | 0.7041 | 0.7041 | 0.7041 | 267 | 1.0 | 1.0 | 1.0 | 2 | 0.8883 | 0.9126 | 0.9003 | 183 | 0.8793 | 0.7846 | 0.8293 | 65 | 0.8100 | 0.8092 | 0.8096 | 0.9745 |
| 0.0005 | 41.0 | 32513 | 0.2607 | 0.9106 | 0.875 | 0.8924 | 128 | 0.8629 | 0.8935 | 0.8779 | 169 | 0.8737 | 0.7545 | 0.8098 | 110 | 0.6953 | 0.7807 | 0.7355 | 114 | 0.7154 | 0.6966 | 0.7059 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8743 | 0.9126 | 0.8930 | 183 | 0.8125 | 0.8 | 0.8062 | 65 | 0.8104 | 0.8112 | 0.8108 | 0.9743 |
| 0.0007 | 42.0 | 33306 | 0.2628 | 0.9106 | 0.875 | 0.8924 | 128 | 0.8678 | 0.8935 | 0.8805 | 169 | 0.8119 | 0.7455 | 0.7773 | 110 | 0.6899 | 0.7807 | 0.7325 | 114 | 0.6985 | 0.7116 | 0.7050 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8730 | 0.9016 | 0.8871 | 183 | 0.8254 | 0.8 | 0.8125 | 65 | 0.7998 | 0.8121 | 0.8059 | 0.9744 |
| 0.0004 | 43.0 | 34099 | 0.2784 | 0.9098 | 0.8672 | 0.888 | 128 | 0.8994 | 0.8994 | 0.8994 | 169 | 0.8542 | 0.7455 | 0.7961 | 110 | 0.696 | 0.7632 | 0.7280 | 114 | 0.7127 | 0.7154 | 0.7140 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8925 | 0.9071 | 0.8997 | 183 | 0.8281 | 0.8154 | 0.8217 | 65 | 0.8170 | 0.8131 | 0.8151 | 0.9743 |
| 0.0004 | 44.0 | 34892 | 0.2771 | 0.9098 | 0.8672 | 0.888 | 128 | 0.8941 | 0.8994 | 0.8968 | 169 | 0.8586 | 0.7727 | 0.8134 | 110 | 0.7049 | 0.7544 | 0.7288 | 114 | 0.7231 | 0.7041 | 0.7135 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8919 | 0.9016 | 0.8967 | 183 | 0.8154 | 0.8154 | 0.8154 | 65 | 0.8207 | 0.8112 | 0.8159 | 0.9745 |
| 0.0003 | 45.0 | 35685 | 0.2756 | 0.9180 | 0.875 | 0.8960 | 128 | 0.8941 | 0.8994 | 0.8968 | 169 | 0.8673 | 0.7727 | 0.8173 | 110 | 0.7097 | 0.7719 | 0.7395 | 114 | 0.7159 | 0.7079 | 0.7119 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8978 | 0.9126 | 0.9051 | 183 | 0.8644 | 0.7846 | 0.8226 | 65 | 0.8246 | 0.8150 | 0.8198 | 0.9749 |
| 0.0005 | 46.0 | 36478 | 0.2739 | 0.9180 | 0.875 | 0.8960 | 128 | 0.8941 | 0.8994 | 0.8968 | 169 | 0.8333 | 0.7727 | 0.8019 | 110 | 0.6667 | 0.7719 | 0.7154 | 114 | 0.7011 | 0.7116 | 0.7063 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8698 | 0.9126 | 0.8907 | 183 | 0.8226 | 0.7846 | 0.8031 | 65 | 0.8036 | 0.8160 | 0.8098 | 0.9747 |
| 0.0001 | 47.0 | 37271 | 0.2774 | 0.9180 | 0.875 | 0.8960 | 128 | 0.8941 | 0.8994 | 0.8968 | 169 | 0.85 | 0.7727 | 0.8095 | 110 | 0.6667 | 0.7719 | 0.7154 | 114 | 0.7127 | 0.7154 | 0.7140 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.9061 | 0.8962 | 0.9011 | 183 | 0.8226 | 0.7846 | 0.8031 | 65 | 0.8141 | 0.8141 | 0.8141 | 0.9747 |
| 0.0002 | 48.0 | 38064 | 0.2768 | 0.9180 | 0.875 | 0.8960 | 128 | 0.8941 | 0.8994 | 0.8968 | 169 | 0.85 | 0.7727 | 0.8095 | 110 | 0.6718 | 0.7719 | 0.7184 | 114 | 0.7159 | 0.7266 | 0.7212 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8967 | 0.9016 | 0.8992 | 183 | 0.8254 | 0.8 | 0.8125 | 65 | 0.8142 | 0.8189 | 0.8165 | 0.9754 |
| 0.0001 | 49.0 | 38857 | 0.2778 | 0.9180 | 0.875 | 0.8960 | 128 | 0.8941 | 0.8994 | 0.8968 | 169 | 0.8416 | 0.7727 | 0.8057 | 110 | 0.6718 | 0.7719 | 0.7184 | 114 | 0.7159 | 0.7266 | 0.7212 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8967 | 0.9016 | 0.8992 | 183 | 0.8254 | 0.8 | 0.8125 | 65 | 0.8134 | 0.8189 | 0.8161 | 0.9753 |
| 0.0003 | 50.0 | 39650 | 0.2778 | 0.9180 | 0.875 | 0.8960 | 128 | 0.8941 | 0.8994 | 0.8968 | 169 | 0.8431 | 0.7818 | 0.8113 | 110 | 0.6718 | 0.7719 | 0.7184 | 114 | 0.7159 | 0.7266 | 0.7212 | 267 | 0.6667 | 1.0 | 0.8 | 2 | 0.8967 | 0.9016 | 0.8992 | 183 | 0.8254 | 0.8 | 0.8125 | 65 | 0.8136 | 0.8198 | 0.8167 | 0.9753 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.2.2
- Tokenizers 0.13.2
|
AnonymousSub/SR_rule_based_hier_triplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
tags:
- generated_from_keras_callback
model-index:
- name: Vit-mbert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Vit-mbert
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Tokenizers 0.13.2
|
AnonymousSub/SR_rule_based_only_classfn_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5_large_epoch_1_comve_triple
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_large_epoch_1_comve_triple
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5605
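A minimal generation sketch (the repo id is a placeholder, and the exact input format expected by the ComVE-triple fine-tune is an assumption):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

# Placeholder repo id; the input format below is an assumption, not confirmed by this card.
tokenizer = T5Tokenizer.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("<repo-id>/t5_large_epoch_1_comve_triple")

inputs = tokenizer("statement: He put an elephant into the fridge.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```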
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 4.1923 |
| No log | 2.0 | 8 | 3.5605 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.10.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/bbcnews/1672158882347/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529107486271225859/03qcVNIk_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">BBC News (UK)</div>
<div style="text-align: center; font-size: 14px;">@bbcnews</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from BBC News (UK).
| Data | BBC News (UK) |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 266 |
| Short tweets | 0 |
| Tweets kept | 2984 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3n0xwshy/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @bbcnews's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/139ervf3) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/139ervf3/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/bbcnews')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AnonymousSub/SR_rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language:
- ko
tags:
- text generation
- pytorch
- causal-lm
license: apache-2.0
datasets:
- oscar
- lcw99/wikipedia-korean-20221001
- heegyu/namuwiki-extracted
- cc100
---
# gpt-neo-1.3B Korean float16 version
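A minimal loading sketch (the repo id is a placeholder; `torch_dtype=torch.float16` matches the float16 weights this card describes):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; substitute the actual Hub location of this checkpoint.
tokenizer = AutoTokenizer.from_pretrained("<repo-id>")
model = AutoModelForCausalLM.from_pretrained("<repo-id>", torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("안녕하세요,", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```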
PPL on Oscar Korean text dataset = 46.0 |
AnonymousSub/SR_rule_based_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 198.19 +/- 17.93
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the repo id and filename are placeholders, not confirmed by this card):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename; substitute the actual Hub location of this model.
checkpoint = load_from_hub("<repo-id>/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
AnonymousSub/cline-s10-AR | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 31 | 2022-11-13T10:32:13Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetuning-hatespeech-model-sayak
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-hatespeech-model-sayak
This model is a fine-tuned version of [cross-encoder/ms-marco-electra-base](https://huggingface.co/cross-encoder/ms-marco-electra-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1963
- Accuracy: 0.9639
- F1: 0.2609
- Precision: 0.6
- Recall: 0.1667
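A minimal inference sketch (the repo id is a placeholder, and the label names depend on this fine-tune's config):
```python
from transformers import pipeline

# Placeholder repo id; label names depend on this fine-tune's config.
classifier = pipeline("text-classification", model="<repo-id>/finetuning-hatespeech-model-sayak")
print(classifier("Example input sentence to classify."))
```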
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AnonymousSub/cline | [
"pytorch",
"roberta",
"transformers"
]
| null | {
"architectures": [
"LecbertForPreTraining"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: imagefolder
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# 128-NORMAL
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
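Pending the author's snippet above, a sketch under assumptions: the pipeline class (`DDPMPipeline`, for unconditional image generation) is a guess, and the repo id is taken from the TensorBoard link below.
```python
from diffusers import DDPMPipeline

# Assumed pipeline class for an unconditional diffusion model;
# repo id taken from this card's TensorBoard link.
pipeline = DDPMPipeline.from_pretrained("Omerdor/128-NORMAL")
image = pipeline().images[0]
image.save("sample.png")
```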
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 24
- eval_batch_size: 4
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/Omerdor/128-NORMAL/tensorboard?#scalars)
|
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: phildav/testpyramidsrnd
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
---
### mia from [lost nova](https://store.steampowered.com/app/1603410) on Stable Diffusion via Dreambooth
#### model by no3
This is the Stable Diffusion model fine-tuned on the mia-sd-1.5-beta1 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **sks_mia**
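A minimal inference sketch with diffusers (the repo id is taken from the concept_images link at the bottom of this card; the prompt is just an example):
```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id from this card's concept_images link; the prompt is a hypothetical example.
pipe = StableDiffusionPipeline.from_pretrained("no3/mia-sd-1.5-beta1", torch_dtype=torch.float16).to("cuda")
image = pipe("a portrait of sks_mia").images[0]  # see the note below about doubling parentheses
image.save("mia.png")
```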
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts).
### note
If the output is a real girl or a woman instead of mia, just wrap the instance prompt in **two parentheses**, for example `((sks_mia))`.
If you want to convert the diffusers weights to a ".ckpt" checkpoint for use in a UI like [AUTOMATIC1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui) or any other UI that uses .ckpt files, use this [script](https://gist.github.com/Christopher-Hayes/636ba25e0ae2e7020722d5386ac2571b)
If you have issues or questions, feel free to visit the Community Tab and start a discussion about it.
Here are some images used for training this concept:






Images not included here may be removed in the next model to minimize learning confusion, but you can view them in the [concept_images](https://huggingface.co/no3/mia-sd-1.5-beta1/tree/main/concept_images) folder. |
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.86254900846639
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1370
- F1: 0.8625
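A minimal usage sketch (the bare model name from this card is used as the repo id; prefix the owner's namespace if loading from the Hub):
```python
from transformers import pipeline

# "simple" aggregation merges word pieces into full entity spans.
ner = pipeline("token-classification", model="xlm-roberta-base-finetuned-panx-de",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```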
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.26 | 1.0 | 525 | 0.1565 | 0.8218 |
| 0.1276 | 2.0 | 1050 | 0.1409 | 0.8486 |
| 0.0817 | 3.0 | 1575 | 0.1370 | 0.8625 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 15.00 +/- 11.07
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
AnonymousSub/rule_based_twostagetriplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: finetune_hate_speech_improved_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_hate_speech_improved_v1
This model is a fine-tuned version of [cross-encoder/ms-marco-electra-base](https://huggingface.co/cross-encoder/ms-marco-electra-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5548
- Accuracy: 0.8277
- F1: 0.8416
- Precision: 0.7883
- Recall: 0.9026
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
AntonClaesson/finetuning_test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-13T21:14:40Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 17.10 +/- 21.04
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
|
ArBert/bert-base-uncased-finetuned-ner-gmm | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 274 | 1.5943 |
| 0.9165 | 2.0 | 548 | 1.5836 |
| 0.9165 | 3.0 | 822 | 1.6090 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
ArJakusz/DialoGPT-small-starky | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/mitchtech/cardassian-diffusion/resolve/main/cardassian-grid1.png"
tags:
- stable-diffusion
- text-to-image
---
### Cardassian Diffusion
This is a fine-tuned Stable Diffusion model trained on screenshots of the Cardassian alien species from the Star Trek franchise. Use the token **_cardassian_** in your prompts to generate the effect.
[CKPT download link](https://huggingface.co/mitchtech/cardassian-diffusion/resolve/main/cardassian-diffusion-v1.ckpt)
### **Cardassians generated using this model**

Kim Cardassian
CardassELON Musk
CardassIAN McKellen
CardassiANNE Hathaway

This model was trained using the diffusers-based Dreambooth training script by ShivamShrirao.
---
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion pipeline documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
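For example, a minimal sketch (the prompt is illustrative; only the **_cardassian_** token is required):

```python
# Minimal sketch: generate an image from this model's weights with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "mitchtech/cardassian-diffusion", torch_dtype=torch.float16
).to("cuda")
image = pipe("portrait photo of a cardassian starfleet officer").images[0]
image.save("cardassian.png")
```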
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Araf/Ummah | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-multilingual-cased-finetuned-squad-finetuned-squadv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-multilingual-cased-finetuned-squad-finetuned-squadv2
This model is a fine-tuned version of [monakth/distilbert-base-multilingual-cased-finetuned-squad](https://huggingface.co/monakth/distilbert-base-multilingual-cased-finetuned-squad) on the squad_v2 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
|
ArashEsk95/bert-base-uncased-finetuned-sst2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1196519479364268034/5QpniWSP_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1229589315069628421/5Hy71tkj_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1591483220880678915/vDy4TSgn_400x400.png')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Humongous Ape MP & Sean Diamond & Jon Mao & dan & Pesky Splinter - Eternal Goatse Celebrant & Admiral Dan EX QC of the 3rd Antifa fleet! 💙 & Guybrush Tweetbad & Fesshole 🧻</div>
<div style="text-align: center; font-size: 14px;">@apesahoy-bierincognito-fesshole-jonmao___-meat__hook-ripeacsky-theseandiamond-unfetteredmind1</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Humongous Ape MP & Sean Diamond & Jon Mao & dan & Pesky Splinter - Eternal Goatse Celebrant & Admiral Dan EX QC of the 3rd Antifa fleet! 💙 & Guybrush Tweetbad & Fesshole 🧻.
| Data | Humongous Ape MP | Sean Diamond | Jon Mao | dan | Pesky Splinter - Eternal Goatse Celebrant | Admiral Dan EX QC of the 3rd Antifa fleet! 💙 | Guybrush Tweetbad | Fesshole 🧻 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Tweets downloaded | 3242 | 3220 | 662 | 2928 | 3107 | 3220 | 3127 | 3253 |
| Retweets | 176 | 2162 | 53 | 683 | 2406 | 444 | 450 | 17 |
| Short tweets | 577 | 239 | 119 | 305 | 136 | 1180 | 421 | 1 |
| Tweets kept | 2489 | 819 | 490 | 1940 | 565 | 1596 | 2256 | 3235 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/329ftz7y/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @apesahoy-bierincognito-fesshole-jonmao___-meat__hook-ripeacsky-theseandiamond-unfetteredmind1's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2b8bvjnq) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2b8bvjnq/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/apesahoy-bierincognito-fesshole-jonmao___-meat__hook-ripeacsky-theseandiamond-unfetteredmind1')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
ArashEsk95/bert-base-uncased-finetuned-stsb | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
datasets:
- drcd
tags:
- question-generation
widget:
- text: "[HL]伊隆·里夫·馬斯克[HL]是一名企業家和商業大亨"
---
# Transformer QG on DRCD
See https://github.com/p208p2002/Transformer-QG-on-DRCD for more details
The input of the model is constructed as follows:
```
we integrate C and A into a new C' in the following form.
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
> Proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Re-current BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)
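For illustration, here is a minimal Python sketch (not taken from the linked repository) of how such a highlighted context C' can be built from a context C and an answer A:

```python
# Minimal sketch: wrap the answer span A inside the context C with [HL] tokens.
def build_highlighted_context(context: str, answer: str, hl_token: str = "[HL]") -> str:
    start = context.find(answer)  # assumes the answer appears verbatim in C
    if start == -1:
        raise ValueError("answer not found in context")
    end = start + len(answer)
    return f"{context[:start]}{hl_token}{answer}{hl_token}{context[end:]}"

print(build_highlighted_context("伊隆·里夫·馬斯克是一名企業家和商業大亨", "伊隆·里夫·馬斯克"))
# -> [HL]伊隆·里夫·馬斯克[HL]是一名企業家和商業大亨
```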
## Features
- Full pipeline from fine-tuning to evaluation
- Supports most state-of-the-art models
- Fast deployment as an API server
## DRCD dataset
[台達閱讀理解資料集 Delta Reading Comprehension Dataset (DRCD)](https://github.com/DRCKnowledgeTeam/DRCD) is a general-domain Traditional Chinese machine reading comprehension dataset. It collects 10,014 paragraphs from 2,108 Wikipedia articles and annotates more than 30,000 questions over those paragraphs.
## Available models
- BART (base on **[uer/bart-base-chinese-cluecorpussmall](https://huggingface.co/uer/bart-base-chinese-cluecorpussmall)**)
## Experiments
Model |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
------------------|------|------|------|------|------|-------|
BART-HLSQG |34.25 |27.70 |22.43 |18.13 |23.58 |36.88 |
BART-HLSQG-v2 |39.30 |32.51 |26.72 |22.08 |24.94 |41.18 |
## Environment requirements
The whole development is based on the Ubuntu system
1. If you don't have PyTorch 1.6+, please install or update it first
> https://pytorch.org/get-started/locally/
2. Install packages `pip install -r requirements.txt`
3. Set up the scorer `python setup_scorer.py`
4. Download the dataset `python init_dataset.py`
## Training
### Seq2Seq LM
```
usage: train_seq2seq_lm.py [-h]
[--base_model {facebook/bart-base,facebook/bart-large,t5-small,t5-base,t5-large}]
[-d {squad,squad-nqg}] [--epoch EPOCH] [--lr LR]
[--dev DEV] [--server] [--run_test]
[-fc FROM_CHECKPOINT]
optional arguments:
-h, --help show this help message and exit
--base_model {facebook/bart-base,facebook/bart-large,t5-small,t5-base,t5-large}
-d {squad,squad-nqg}, --dataset {squad,squad-nqg}
--epoch EPOCH
--lr LR
--dev DEV
--server
--run_test
-fc FROM_CHECKPOINT, --from_checkpoint FROM_CHECKPOINT
```
## Deploy
### Start up
```
python train_seq2seq_lm.py --server --base_model YOUR_BASE_MODEL --from_checkpoint FROM_CHECKPOINT
```
### Request example
```
curl --location --request POST 'http://127.0.0.1:5000/' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'context=[HL]伊隆·里夫·馬斯克[HL]是一名企業家和商業大亨'
```
```json
{"predict": "哪一個人是一名企業家和商業大亨?"}
```
|
Archie/myProject | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
---
### blue lightsaber toy on Stable Diffusion via Dreambooth
#### model by ktingos
This is the Stable Diffusion model fine-tuned on the blue lightsaber toy concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks toy**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
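For instance, a minimal diffusers sketch (the repository id below is a placeholder, since the card does not state where the weights are hosted; only the `instance_prompt` is given):

```python
# Minimal sketch: generate the learned concept with diffusers.
# "sd-dreambooth-library/blue-lightsaber-toy" is a placeholder repo id.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/blue-lightsaber-toy", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks toy on a wooden table").images[0]
image.save("sks-toy.png")
```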
Here are the images used for training this concept:




|
ArenaGrenade/char-cnn | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | <div style='display: flex; flex-wrap: wrap; column-gap: 0.75rem;'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1668392910189-noauth.jpeg' width='400' height='400'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1668392910472-noauth.jpeg' width='400' height='400'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1668392910185-noauth.jpeg' width='400' height='400'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1668392910466-noauth.jpeg' width='400' height='400'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1668392910473-noauth.jpeg' width='400' height='400'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1668392910473-noauth.jpeg' width='400' height='400'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1668392910467-noauth.jpeg' width='400' height='400'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1668392910468-noauth.jpeg' width='400' height='400'>
<img src='https://s3.amazonaws.com/moonup/production/uploads/1668392909896-noauth.jpeg' width='400' height='400'>
</div>
|
AriakimTaiyo/DialoGPT-cultured-Kumiko | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/omershapira/1668392832122/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1890064313/oie_121542TV9Q0Cxb_400x400.gif')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Maybe: SIMD Crawford</div>
<div style="text-align: center; font-size: 14px;">@omershapira</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Maybe: SIMD Crawford.
| Data | Maybe: SIMD Crawford |
| --- | --- |
| Tweets downloaded | 3226 |
| Retweets | 257 |
| Short tweets | 266 |
| Tweets kept | 2703 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2lgxr3u0/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @omershapira's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/23oo80xz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/23oo80xz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/omershapira')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
AriakimTaiyo/DialoGPT-revised-Kumiko | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- zeroth_korean_asr
metrics:
- wer
model-index:
- name: hubert_zeroth_gpu
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: zeroth_korean_asr
type: zeroth_korean_asr
config: clean
split: train
args: clean
metrics:
- name: Wer
type: wer
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert_zeroth_gpu
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the zeroth_korean_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8302
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:---:|
| 26.5222 | 0.14 | 100 | 10.9084 | 1.0 |
| 6.6076 | 0.29 | 200 | 4.8783 | 1.0 |
| 4.8383 | 0.43 | 300 | 4.8768 | 1.0 |
| 4.8372 | 0.57 | 400 | 4.8608 | 1.0 |
| 4.8298 | 0.72 | 500 | 4.8625 | 1.0 |
| 4.8377 | 0.86 | 600 | 4.8646 | 1.0 |
| 4.829 | 1.01 | 700 | 4.8472 | 1.0 |
| 4.8282 | 1.15 | 800 | 4.8435 | 1.0 |
| 4.8282 | 1.29 | 900 | 4.8438 | 1.0 |
| 4.8299 | 1.44 | 1000 | 4.8540 | 1.0 |
| 4.8276 | 1.58 | 1100 | 4.8408 | 1.0 |
| 4.8306 | 1.72 | 1200 | 4.8390 | 1.0 |
| 4.8315 | 1.87 | 1300 | 4.8426 | 1.0 |
| 4.8296 | 2.01 | 1400 | 4.8418 | 1.0 |
| 4.829 | 2.16 | 1500 | 4.8475 | 1.0 |
| 4.8324 | 2.3 | 1600 | 4.8409 | 1.0 |
| 4.8299 | 2.44 | 1700 | 4.8360 | 1.0 |
| 4.8285 | 2.59 | 1800 | 4.8419 | 1.0 |
| 4.8267 | 2.73 | 1900 | 4.8355 | 1.0 |
| 4.8232 | 2.87 | 2000 | 4.8445 | 1.0 |
| 4.8179 | 3.02 | 2100 | 4.8390 | 1.0 |
| 4.8248 | 3.16 | 2200 | 4.8506 | 1.0 |
| 4.8184 | 3.3 | 2300 | 4.8392 | 1.0 |
| 4.8268 | 3.45 | 2400 | 4.8509 | 1.0 |
| 4.8315 | 3.59 | 2500 | 4.8469 | 1.0 |
| 4.8249 | 3.74 | 2600 | 4.8457 | 1.0 |
| 4.8244 | 3.88 | 2700 | 4.8414 | 1.0 |
| 4.8226 | 4.02 | 2800 | 4.8333 | 1.0 |
| 4.8275 | 4.17 | 2900 | 4.8344 | 1.0 |
| 4.8218 | 4.31 | 3000 | 4.8351 | 1.0 |
| 4.8199 | 4.45 | 3100 | 4.8386 | 1.0 |
| 4.825 | 4.6 | 3200 | 4.8344 | 1.0 |
| 4.828 | 4.74 | 3300 | 4.8372 | 1.0 |
| 4.8228 | 4.89 | 3400 | 4.8349 | 1.0 |
| 4.8264 | 5.03 | 3500 | 4.8344 | 1.0 |
| 4.8237 | 5.17 | 3600 | 4.8332 | 1.0 |
| 4.8269 | 5.32 | 3700 | 4.8376 | 1.0 |
| 4.833 | 5.46 | 3800 | 4.8380 | 1.0 |
| 4.8188 | 5.6 | 3900 | 4.8352 | 1.0 |
| 4.8208 | 5.75 | 4000 | 4.8354 | 1.0 |
| 4.8177 | 5.89 | 4100 | 4.8291 | 1.0 |
| 4.8208 | 6.03 | 4200 | 4.8500 | 1.0 |
| 4.8242 | 6.18 | 4300 | 4.8369 | 1.0 |
| 4.8222 | 6.32 | 4400 | 4.8366 | 1.0 |
| 4.8259 | 6.47 | 4500 | 4.8369 | 1.0 |
| 4.8231 | 6.61 | 4600 | 4.8319 | 1.0 |
| 4.825 | 6.75 | 4700 | 4.8363 | 1.0 |
| 4.8245 | 6.9 | 4800 | 4.8420 | 1.0 |
| 4.8139 | 7.04 | 4900 | 4.8427 | 1.0 |
| 4.8202 | 7.18 | 5000 | 4.8393 | 1.0 |
| 4.8196 | 7.33 | 5100 | 4.8380 | 1.0 |
| 4.8199 | 7.47 | 5200 | 4.8364 | 1.0 |
| 4.8264 | 7.61 | 5300 | 4.8414 | 1.0 |
| 4.8259 | 7.76 | 5400 | 4.8397 | 1.0 |
| 4.8215 | 7.9 | 5500 | 4.8376 | 1.0 |
| 4.8198 | 8.05 | 5600 | 4.8344 | 1.0 |
| 4.828 | 8.19 | 5700 | 4.8314 | 1.0 |
| 4.8246 | 8.33 | 5800 | 4.8361 | 1.0 |
| 4.8167 | 8.48 | 5900 | 4.8336 | 1.0 |
| 4.8174 | 8.62 | 6000 | 4.8345 | 1.0 |
| 4.8283 | 8.76 | 6100 | 4.8363 | 1.0 |
| 4.8231 | 8.91 | 6200 | 4.8345 | 1.0 |
| 4.8191 | 9.05 | 6300 | 4.8327 | 1.0 |
| 4.8144 | 9.2 | 6400 | 4.8299 | 1.0 |
| 4.8206 | 9.34 | 6500 | 4.8281 | 1.0 |
| 4.822 | 9.48 | 6600 | 4.8329 | 1.0 |
| 4.8228 | 9.63 | 6700 | 4.8309 | 1.0 |
| 4.8239 | 9.77 | 6800 | 4.8348 | 1.0 |
| 4.8245 | 9.91 | 6900 | 4.8309 | 1.0 |
| 4.8173 | 10.06 | 7000 | 4.8303 | 1.0 |
| 4.8188 | 10.2 | 7100 | 4.8335 | 1.0 |
| 4.8208 | 10.34 | 7200 | 4.8290 | 1.0 |
| 4.8228 | 10.49 | 7300 | 4.8316 | 1.0 |
| 4.8226 | 10.63 | 7400 | 4.8272 | 1.0 |
| 4.824 | 10.78 | 7500 | 4.8309 | 1.0 |
| 4.8175 | 10.92 | 7600 | 4.8317 | 1.0 |
| 4.8234 | 11.06 | 7700 | 4.8271 | 1.0 |
| 4.8188 | 11.21 | 7800 | 4.8291 | 1.0 |
| 4.8182 | 11.35 | 7900 | 4.8340 | 1.0 |
| 4.8224 | 11.49 | 8000 | 4.8309 | 1.0 |
| 4.8207 | 11.64 | 8100 | 4.8308 | 1.0 |
| 4.8207 | 11.78 | 8200 | 4.8301 | 1.0 |
| 4.822 | 11.93 | 8300 | 4.8281 | 1.0 |
| 4.8199 | 12.07 | 8400 | 4.8301 | 1.0 |
| 4.8198 | 12.21 | 8500 | 4.8337 | 1.0 |
| 4.8212 | 12.36 | 8600 | 4.8310 | 1.0 |
| 4.8211 | 12.5 | 8700 | 4.8304 | 1.0 |
| 4.8226 | 12.64 | 8800 | 4.8303 | 1.0 |
| 4.8224 | 12.79 | 8900 | 4.8312 | 1.0 |
| 4.8146 | 12.93 | 9000 | 4.8362 | 1.0 |
| 4.8173 | 13.07 | 9100 | 4.8321 | 1.0 |
| 4.816 | 13.22 | 9200 | 4.8347 | 1.0 |
| 4.8219 | 13.36 | 9300 | 4.8377 | 1.0 |
| 4.8251 | 13.51 | 9400 | 4.8403 | 1.0 |
| 4.8173 | 13.65 | 9500 | 4.8387 | 1.0 |
| 4.8226 | 13.79 | 9600 | 4.8375 | 1.0 |
| 4.8137 | 13.94 | 9700 | 4.8364 | 1.0 |
| 4.819 | 14.08 | 9800 | 4.8323 | 1.0 |
| 4.8258 | 14.22 | 9900 | 4.8329 | 1.0 |
| 4.8097 | 14.37 | 10000 | 4.8293 | 1.0 |
| 4.8247 | 14.51 | 10100 | 4.8311 | 1.0 |
| 4.8197 | 14.66 | 10200 | 4.8306 | 1.0 |
| 4.8201 | 14.8 | 10300 | 4.8308 | 1.0 |
| 4.8158 | 14.94 | 10400 | 4.8319 | 1.0 |
| 4.818 | 15.09 | 10500 | 4.8306 | 1.0 |
| 4.8216 | 15.23 | 10600 | 4.8343 | 1.0 |
| 4.8096 | 15.37 | 10700 | 4.8326 | 1.0 |
| 4.8248 | 15.52 | 10800 | 4.8323 | 1.0 |
| 4.8178 | 15.66 | 10900 | 4.8358 | 1.0 |
| 4.8191 | 15.8 | 11000 | 4.8338 | 1.0 |
| 4.8248 | 15.95 | 11100 | 4.8359 | 1.0 |
| 4.8095 | 16.09 | 11200 | 4.8392 | 1.0 |
| 4.8196 | 16.24 | 11300 | 4.8374 | 1.0 |
| 4.827 | 16.38 | 11400 | 4.8346 | 1.0 |
| 4.8165 | 16.52 | 11500 | 4.8365 | 1.0 |
| 4.8206 | 16.67 | 11600 | 4.8344 | 1.0 |
| 4.8169 | 16.81 | 11700 | 4.8344 | 1.0 |
| 4.8164 | 16.95 | 11800 | 4.8390 | 1.0 |
| 4.8159 | 17.1 | 11900 | 4.8367 | 1.0 |
| 4.8202 | 17.24 | 12000 | 4.8375 | 1.0 |
| 4.8156 | 17.39 | 12100 | 4.8362 | 1.0 |
| 4.8174 | 17.53 | 12200 | 4.8410 | 1.0 |
| 4.8188 | 17.67 | 12300 | 4.8323 | 1.0 |
| 4.8167 | 17.82 | 12400 | 4.8319 | 1.0 |
| 4.8229 | 17.96 | 12500 | 4.8347 | 1.0 |
| 4.8179 | 18.1 | 12600 | 4.8320 | 1.0 |
| 4.8182 | 18.25 | 12700 | 4.8384 | 1.0 |
| 4.8151 | 18.39 | 12800 | 4.8374 | 1.0 |
| 4.8212 | 18.53 | 12900 | 4.8346 | 1.0 |
| 4.8241 | 18.68 | 13000 | 4.8344 | 1.0 |
| 4.8184 | 18.82 | 13100 | 4.8352 | 1.0 |
| 4.8174 | 18.97 | 13200 | 4.8357 | 1.0 |
| 4.8092 | 19.11 | 13300 | 4.8332 | 1.0 |
| 4.8149 | 19.25 | 13400 | 4.8347 | 1.0 |
| 4.813 | 19.4 | 13500 | 4.8376 | 1.0 |
| 4.8226 | 19.54 | 13600 | 4.8343 | 1.0 |
| 4.8175 | 19.68 | 13700 | 4.8320 | 1.0 |
| 4.8203 | 19.83 | 13800 | 4.8339 | 1.0 |
| 4.8227 | 19.97 | 13900 | 4.8324 | 1.0 |
| 4.8177 | 20.11 | 14000 | 4.8356 | 1.0 |
| 4.824 | 20.26 | 14100 | 4.8339 | 1.0 |
| 4.815 | 20.4 | 14200 | 4.8342 | 1.0 |
| 4.8189 | 20.55 | 14300 | 4.8340 | 1.0 |
| 4.8115 | 20.69 | 14400 | 4.8319 | 1.0 |
| 4.8162 | 20.83 | 14500 | 4.8288 | 1.0 |
| 4.8183 | 20.98 | 14600 | 4.8321 | 1.0 |
| 4.8189 | 21.12 | 14700 | 4.8315 | 1.0 |
| 4.8123 | 21.26 | 14800 | 4.8311 | 1.0 |
| 4.8165 | 21.41 | 14900 | 4.8321 | 1.0 |
| 4.8247 | 21.55 | 15000 | 4.8309 | 1.0 |
| 4.8165 | 21.7 | 15100 | 4.8313 | 1.0 |
| 4.815 | 21.84 | 15200 | 4.8354 | 1.0 |
| 4.8234 | 21.98 | 15300 | 4.8300 | 1.0 |
| 4.8134 | 22.13 | 15400 | 4.8284 | 1.0 |
| 4.8178 | 22.27 | 15500 | 4.8298 | 1.0 |
| 4.8128 | 22.41 | 15600 | 4.8309 | 1.0 |
| 4.8185 | 22.56 | 15700 | 4.8291 | 1.0 |
| 4.8177 | 22.7 | 15800 | 4.8288 | 1.0 |
| 4.8208 | 22.84 | 15900 | 4.8306 | 1.0 |
| 4.8183 | 22.99 | 16000 | 4.8277 | 1.0 |
| 4.8135 | 23.13 | 16100 | 4.8286 | 1.0 |
| 4.8116 | 23.28 | 16200 | 4.8275 | 1.0 |
| 4.816 | 23.42 | 16300 | 4.8290 | 1.0 |
| 4.8203 | 23.56 | 16400 | 4.8292 | 1.0 |
| 4.8198 | 23.71 | 16500 | 4.8299 | 1.0 |
| 4.8203 | 23.85 | 16600 | 4.8294 | 1.0 |
| 4.8177 | 23.99 | 16700 | 4.8286 | 1.0 |
| 4.8153 | 24.14 | 16800 | 4.8275 | 1.0 |
| 4.8201 | 24.28 | 16900 | 4.8259 | 1.0 |
| 4.8189 | 24.43 | 17000 | 4.8289 | 1.0 |
| 4.8219 | 24.57 | 17100 | 4.8280 | 1.0 |
| 4.8148 | 24.71 | 17200 | 4.8284 | 1.0 |
| 4.8113 | 24.86 | 17300 | 4.8286 | 1.0 |
| 4.8133 | 25.0 | 17400 | 4.8293 | 1.0 |
| 4.8164 | 25.14 | 17500 | 4.8302 | 1.0 |
| 4.8231 | 25.29 | 17600 | 4.8278 | 1.0 |
| 4.8136 | 25.43 | 17700 | 4.8296 | 1.0 |
| 4.8118 | 25.57 | 17800 | 4.8288 | 1.0 |
| 4.8139 | 25.72 | 17900 | 4.8280 | 1.0 |
| 4.8144 | 25.86 | 18000 | 4.8282 | 1.0 |
| 4.8206 | 26.01 | 18100 | 4.8279 | 1.0 |
| 4.8096 | 26.15 | 18200 | 4.8281 | 1.0 |
| 4.8177 | 26.29 | 18300 | 4.8271 | 1.0 |
| 4.8222 | 26.44 | 18400 | 4.8289 | 1.0 |
| 4.8148 | 26.58 | 18500 | 4.8282 | 1.0 |
| 4.8148 | 26.72 | 18600 | 4.8277 | 1.0 |
| 4.819 | 26.87 | 18700 | 4.8283 | 1.0 |
| 4.8138 | 27.01 | 18800 | 4.8290 | 1.0 |
| 4.8094 | 27.16 | 18900 | 4.8292 | 1.0 |
| 4.8236 | 27.3 | 19000 | 4.8282 | 1.0 |
| 4.8208 | 27.44 | 19100 | 4.8293 | 1.0 |
| 4.816 | 27.59 | 19200 | 4.8281 | 1.0 |
| 4.8103 | 27.73 | 19300 | 4.8294 | 1.0 |
| 4.8152 | 27.87 | 19400 | 4.8297 | 1.0 |
| 4.8158 | 28.02 | 19500 | 4.8305 | 1.0 |
| 4.8121 | 28.16 | 19600 | 4.8294 | 1.0 |
| 4.8199 | 28.3 | 19700 | 4.8292 | 1.0 |
| 4.8185 | 28.45 | 19800 | 4.8288 | 1.0 |
| 4.8199 | 28.59 | 19900 | 4.8288 | 1.0 |
| 4.8102 | 28.74 | 20000 | 4.8292 | 1.0 |
| 4.8168 | 28.88 | 20100 | 4.8291 | 1.0 |
| 4.8117 | 29.02 | 20200 | 4.8304 | 1.0 |
| 4.8156 | 29.17 | 20300 | 4.8295 | 1.0 |
| 4.8126 | 29.31 | 20400 | 4.8296 | 1.0 |
| 4.8193 | 29.45 | 20500 | 4.8302 | 1.0 |
| 4.8175 | 29.6 | 20600 | 4.8301 | 1.0 |
| 4.8167 | 29.74 | 20700 | 4.8301 | 1.0 |
| 4.8137 | 29.89 | 20800 | 4.8302 | 1.0 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.0.0
- Tokenizers 0.13.2
|
Arkadiusz/Test-model | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- safetensors
inference: false
---
Dreambooth model for Klonoa from the videogame series of the same name. Trained for a bro, because none of the models can actually Klonoa.
814 pictures, 10k steps
Prompt is klonoa
Includes an additional hypernetwork trained ages ago which might help or might not.
I claim no ownership over this, all rights belong to their respective owners. I also don't claim any responsibility or maintenance of the model, it is what it is.
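A minimal inference sketch (the repository id is a placeholder; the card only specifies the prompt token):

```python
# Minimal sketch: prompt the Dreambooth model with the "klonoa" token.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "your-username/klonoa-diffusion", torch_dtype=torch.float16  # placeholder id
).to("cuda")
image = pipe("klonoa riding a surfboard, detailed illustration").images[0]
image.save("klonoa.png")
```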





Can do Klonoa cosplay too apparently
 |
Arnold/wav2vec2-hausa-demo-colab | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-large-unlabeled-gab-semeval2023-task10-45000sample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-unlabeled-gab-semeval2023-task10-45000sample
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8859
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1552 | 1.0 | 1407 | 1.9502 |
| 1.9918 | 2.0 | 2814 | 1.8859 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.10.3
|
ArpanZS/search_model | [
"joblib"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
---
README
**U**niversal **I**nformation **E**xtraction for Medical NER
Model details: https://github.com/PaddlePaddle/PaddleNLP/tree/develop/model_zoo/uie
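A minimal PaddleNLP sketch (the schema entries and checkpoint path are assumptions; see the linked repository for authoritative usage):

```python
# Minimal sketch: medical NER with PaddleNLP's Taskflow and a UIE checkpoint.
from paddlenlp import Taskflow

schema = ["疾病", "药物", "症状"]  # assumed entity types: disease, drug, symptom
ie = Taskflow(
    "information_extraction",
    schema=schema,
    task_path="./uie_medical_checkpoint",  # assumed local path to these weights
)
print(ie("患者三天前出现发热症状,服用布洛芬后缓解。"))
```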
|
Atiqah/Atiqah | [
"license:artistic-2.0"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language:
- zh
tags:
- bert
license: "apache-2.0"
---
# Please use BERT-related functions to load this model!
## Chinese small pre-trained model MiniRBT
In order to further promote research and development in Chinese information processing, we have released MiniRBT, a small Chinese pre-trained model built with our self-developed knowledge distillation tool TextBrewer, combined with Whole Word Masking and knowledge distillation techniques.
This repository is developed based on: https://github.com/iflytek/MiniRBT
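As a minimal loading sketch (the repository id below is a placeholder for this model's actual path):

```python
# Minimal sketch: load MiniRBT with BERT classes, as the card requests.
from transformers import BertTokenizer, BertModel

repo_id = "hfl/minirbt-h256"  # placeholder; substitute this model's repo id
tokenizer = BertTokenizer.from_pretrained(repo_id)
model = BertModel.from_pretrained(repo_id)
outputs = model(**tokenizer("今天天气真好", return_tensors="pt"))
print(outputs.last_hidden_state.shape)
```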
You may also be interested in:
- Chinese LERT: https://github.com/ymcui/LERT
- Chinese PERT: https://github.com/ymcui/PERT
- Chinese MacBERT: https://github.com/ymcui/MacBERT
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer
More resources by HFL: https://github.com/iflytek/HFL-Anthology |
Augustvember/WokkaBot2 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vikram15/t5-small-finetuned-newsSummary
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vikram15/t5-small-finetuned-newsSummary
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0476
- Validation Loss: 1.7854
- Train Rouge1: 47.4977
- Train Rouge2: 24.4278
- Train Rougel: 42.2516
- Train Rougelsum: 42.4756
- Train Gen Len: 16.305
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch |
|:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:|
| 2.0476 | 1.7854 | 47.4977 | 24.4278 | 42.2516 | 42.4756 | 16.305 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Augustvember/wokka5 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: ChiefTheLord/codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ChiefTheLord/codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.7143
- Validation Loss: 2.2348
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1378398, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.7143 | 2.2348 | 0 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Augustvember/wokkabottest2 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bart-finetuned-idl-new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-finetuned-idl-new
This model is a fine-tuned version of [rohitsan/bart-finetuned-idl-new](https://huggingface.co/rohitsan/bart-finetuned-idl-new) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2981
- eval_bleu: 18.5188
- eval_gen_len: 19.3843
- eval_runtime: 257.315
- eval_samples_per_second: 24.464
- eval_steps_per_second: 3.059
- epoch: 8.0
- step: 56648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 35
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Ayham/albert_bert_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9825925925925926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0454
- Accuracy: 0.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2137 | 1.0 | 190 | 0.0981 | 0.9681 |
| 0.1487 | 2.0 | 380 | 0.0517 | 0.9830 |
| 0.1398 | 3.0 | 570 | 0.0454 | 0.9826 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Azaghast/DistilBERT-SCP-Class-Classification | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42 | 2022-11-14T13:04:38Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: test-sentiment-model-imdb-3000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.86
- name: F1
type: f1
value: 0.8618421052631579
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-sentiment-model-imdb-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3296
- Accuracy: 0.86
- F1: 0.8618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.14.0.dev20221113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Bagus/wav2vec2-xlsr-japanese-speech-emotion-recognition | [
"pytorch",
"wav2vec2",
"audio-classification",
"ja",
"dataset:jtes",
"transformers",
"audio",
"speech",
"speech-emotion-recognition",
"has_space"
]
| audio-classification | {
"architectures": [
"HubertForSequenceClassification"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 26 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: RoniXZONE/distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# RoniXZONE/distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.9645
- Train End Logits Accuracy: 0.7308
- Train Start Logits Accuracy: 0.6936
- Validation Loss: 1.1246
- Validation End Logits Accuracy: 0.7006
- Validation Start Logits Accuracy: 0.6612
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11064, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.5015 | 0.6068 | 0.5715 | 1.1471 | 0.6864 | 0.6508 | 0 |
| 0.9645 | 0.7308 | 0.6936 | 1.1246 | 0.7006 | 0.6612 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.6.1
- Tokenizers 0.13.2
|
Bakkes/BakkesModWiki | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- autotrain
- token-classification
language:
- pt
widget:
- text: "I love AutoTrain 🤗"
datasets:
- famube/autotrain-data-documentos-oficiais
co2_eq_emissions:
emissions: 6.461431564881563
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 2092367351
- CO2 Emissions (in grams): 6.4614
## Validation Metrics
- Loss: 0.059
- Accuracy: 0.986
- Precision: 0.000
- Recall: 0.000
- F1: 0.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/famube/autotrain-documentos-oficiais-2092367351
```
Or Python API:
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("famube/autotrain-documentos-oficiais-2092367351", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("famube/autotrain-documentos-oficiais-2092367351", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |