modelId (string, 4 to 81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0 to 59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51 to 438k chars)
---|---|---|---|---|---|---
DeepPavlov/distilrubert-tiny-cased-conversational-v1 | [
"pytorch",
"distilbert",
"ru",
"arxiv:2205.02340",
"transformers"
]
| null | {
"architectures": null,
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9,141 | 2022-11-16T10:20:11Z | ---
license: openrail
library_name: diffusers
tags:
- TPU
- JAX
- Flax
- stable-diffusion
- text-to-image
language:
- en
---
|
DeltaHub/adapter_t5-3b_qnli | [
"pytorch",
"transformers"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: bert_uncased_L-2_H-128_A-2-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.913
- name: F1
type: f1
value: 0.9131486432959599
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-2_H-128_A-2-finetuned-emotion
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2502
- Accuracy: 0.913
- F1: 0.9131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
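For reference, a minimal hedged sketch of how these hyperparameters would typically be expressed as `transformers.TrainingArguments`; the output directory and per-epoch evaluation are assumptions, and the Adam betas and epsilon listed above are the library defaults:
```python
from transformers import TrainingArguments

# Hedged sketch, not the authors' exact training script.
training_args = TrainingArguments(
    output_dir="bert_uncased_L-2_H-128_A-2-finetuned-emotion",  # hypothetical output path
    learning_rate=3e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=200,
    evaluation_strategy="epoch",  # assumption: the results table reports one evaluation per epoch
)
```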
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.5953 | 1.0 | 250 | 1.4759 | 0.5055 | 0.3899 |
| 1.3208 | 2.0 | 500 | 1.1113 | 0.628 | 0.5554 |
| 1.0064 | 3.0 | 750 | 0.8224 | 0.79 | 0.7802 |
| 0.7535 | 4.0 | 1000 | 0.6185 | 0.8455 | 0.8425 |
| 0.5891 | 5.0 | 1250 | 0.5004 | 0.877 | 0.8758 |
| 0.4783 | 6.0 | 1500 | 0.4260 | 0.8865 | 0.8862 |
| 0.4078 | 7.0 | 1750 | 0.3787 | 0.8905 | 0.8903 |
| 0.3554 | 8.0 | 2000 | 0.3432 | 0.891 | 0.8909 |
| 0.3146 | 9.0 | 2250 | 0.3181 | 0.8925 | 0.8924 |
| 0.2808 | 10.0 | 2500 | 0.2986 | 0.8965 | 0.8970 |
| 0.2659 | 11.0 | 2750 | 0.2881 | 0.9 | 0.8999 |
| 0.2487 | 12.0 | 3000 | 0.2740 | 0.907 | 0.9072 |
| 0.2253 | 13.0 | 3250 | 0.2683 | 0.9045 | 0.9047 |
| 0.2103 | 14.0 | 3500 | 0.2650 | 0.9095 | 0.9099 |
| 0.1995 | 15.0 | 3750 | 0.2551 | 0.9105 | 0.9108 |
| 0.1894 | 16.0 | 4000 | 0.2534 | 0.9085 | 0.9088 |
| 0.1791 | 17.0 | 4250 | 0.2473 | 0.91 | 0.9102 |
| 0.168 | 18.0 | 4500 | 0.2441 | 0.913 | 0.9134 |
| 0.1563 | 19.0 | 4750 | 0.2459 | 0.9105 | 0.9107 |
| 0.1511 | 20.0 | 5000 | 0.2497 | 0.9075 | 0.9076 |
| 0.1363 | 21.0 | 5250 | 0.2502 | 0.913 | 0.9131 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.12.1+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
DemangeJeremy/4-sentiments-with-flaubert | [
"pytorch",
"flaubert",
"text-classification",
"fr",
"transformers",
"sentiments",
"french",
"flaubert-large"
]
| text-classification | {
"architectures": [
"FlaubertForSequenceClassification"
],
"model_type": "flaubert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 226 | null | Access to model AlexKozachuk/Kitchen is restricted and you are not in the authorized list. Visit https://huggingface.co/AlexKozachuk/Kitchen to ask for access. |
Deniskin/emailer_medium_300 | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | # This is the min-stable-diffusion weights file
## I hope you enjoy it. I hope you can discover light!!!
#### Weight file notes
1) wd-1-3-penultimate-ucg-cont.pt is waifu-diffusion-v1-4 weight
2) mdjrny-v4.pt is midjourney-v4-diffusion weight
3) stable_diffusion_v1_4.pt is CompVis/stable-diffusion-v1-4
4) stable_diffusion_v1_5.pt is runwayml/stable-diffusion-v1-5
5) animev3.pt is https://huggingface.co/Linaqruf/personal_backup/tree/main/animev3ckpt
6) Anything-V3.0.pt is https://huggingface.co/Linaqruf/anything-v3.0
#### Install and run (GitHub)
https://github.com/scale100xu/min-stable-diffusion
|
Deniskin/essays_small_2000 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-large-patch32-384-finetuned-melanoma
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8272727272727273
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-large-patch32-384-finetuned-melanoma
This model is a fine-tuned version of [google/vit-large-patch32-384](https://huggingface.co/google/vit-large-patch32-384) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0767
- Accuracy: 0.8273
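As a hedged usage sketch (not part of the original card), the fine-tuned checkpoint can be queried through the image-classification pipeline; the repository id and image path below are placeholders:
```python
from transformers import pipeline

# Hypothetical repo id and image path -- substitute the model's actual Hub location.
classifier = pipeline("image-classification", model="<username>/vit-large-patch32-384-finetuned-melanoma")
print(classifier("example_lesion.jpg"))  # top predicted labels with confidence scores
```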
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.0081 | 1.0 | 550 | 0.7650 | 0.68 |
| 0.7527 | 2.0 | 1100 | 0.6693 | 0.7364 |
| 0.6234 | 3.0 | 1650 | 0.6127 | 0.7709 |
| 2.6284 | 4.0 | 2200 | 0.6788 | 0.7655 |
| 0.1406 | 5.0 | 2750 | 0.6657 | 0.7836 |
| 0.317 | 6.0 | 3300 | 0.6936 | 0.78 |
| 2.5358 | 7.0 | 3850 | 0.7104 | 0.7909 |
| 1.5802 | 8.0 | 4400 | 0.6928 | 0.8 |
| 0.088 | 9.0 | 4950 | 0.8060 | 0.7982 |
| 0.0183 | 10.0 | 5500 | 0.7811 | 0.8091 |
| 0.0074 | 11.0 | 6050 | 0.7185 | 0.7945 |
| 0.0448 | 12.0 | 6600 | 0.8780 | 0.7909 |
| 0.4288 | 13.0 | 7150 | 0.8229 | 0.82 |
| 0.017 | 14.0 | 7700 | 0.7516 | 0.8182 |
| 0.0057 | 15.0 | 8250 | 0.7974 | 0.7964 |
| 1.7571 | 16.0 | 8800 | 0.7866 | 0.8218 |
| 1.3159 | 17.0 | 9350 | 0.8491 | 0.8073 |
| 1.649 | 18.0 | 9900 | 0.8432 | 0.7891 |
| 0.0014 | 19.0 | 10450 | 0.8870 | 0.82 |
| 0.002 | 20.0 | 11000 | 0.9460 | 0.8236 |
| 0.3717 | 21.0 | 11550 | 0.8866 | 0.8327 |
| 0.0025 | 22.0 | 12100 | 1.0287 | 0.8073 |
| 0.0094 | 23.0 | 12650 | 0.9696 | 0.8091 |
| 0.002 | 24.0 | 13200 | 0.9659 | 0.8018 |
| 0.1001 | 25.0 | 13750 | 0.9712 | 0.8327 |
| 0.2953 | 26.0 | 14300 | 1.0512 | 0.8236 |
| 0.0141 | 27.0 | 14850 | 1.0503 | 0.82 |
| 0.612 | 28.0 | 15400 | 1.2020 | 0.8109 |
| 0.0792 | 29.0 | 15950 | 1.0498 | 0.8364 |
| 0.0117 | 30.0 | 16500 | 1.0079 | 0.8327 |
| 0.0568 | 31.0 | 17050 | 1.0199 | 0.8255 |
| 0.0001 | 32.0 | 17600 | 1.0319 | 0.8291 |
| 0.075 | 33.0 | 18150 | 1.0427 | 0.8382 |
| 0.001 | 34.0 | 18700 | 1.1289 | 0.8382 |
| 0.0001 | 35.0 | 19250 | 1.0589 | 0.8364 |
| 0.0006 | 36.0 | 19800 | 1.0349 | 0.8236 |
| 0.0023 | 37.0 | 20350 | 1.1192 | 0.8273 |
| 0.0002 | 38.0 | 20900 | 1.0863 | 0.8273 |
| 0.2031 | 39.0 | 21450 | 1.0604 | 0.8255 |
| 0.0006 | 40.0 | 22000 | 1.0767 | 0.8273 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
DeskDown/MarianMixFT_en-ms | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1294 with parameters:
```
{'batch_size': 3, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 1294,
"warmup_steps": 130,
"weight_decay": 0.01
}
```
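A hedged sketch of how the DataLoader, loss, and `fit()` parameters listed above fit together in sentence-transformers; the training examples are placeholders, since the actual training data is not described in this card:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('{MODEL_NAME}')

# Placeholder training pairs -- the real data behind the 1294-batch DataLoader is not documented here.
train_samples = [InputExample(texts=["This is an example sentence", "Each sentence is converted"], label=0.8)]
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=3)
train_loss = losses.CosineSimilarityLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=130,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,  # scheduler (WarmupLinear) and optimizer (AdamW) are the library defaults
)
```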
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: LongformerModel
(1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
DeskDown/MarianMixFT_en-my | [
"pytorch",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---
Example result:
===============
# Using whitemanedb_step_3500.ckpt

# Using dbwhitemane.ckpt


%2C%20best%20quality%2C%20(masterpiece_1.3)%2C%20(red%20eyes_1.2)%2C%20blush%2C%20embarrassed.png)
%2C%20lush%2C%20%20blond.png)





Clip skip comparison

For now I have uploaded 3 models (more incoming for Whitemane):
- [whitemanedb_step_2500.ckpt](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/blob/main/whitemanedb_step_2500.ckpt)
- [whitemanedb_step_3500.ckpt](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/blob/main/whitemanedb_step_3500.ckpt)
Both are trained with 21 images and the trigger is "whitemanedb". These are my first attempts and I didn't get the final file because I ran out of space on drive :\ but the model seems to work just fine.
The second model is [dbwhitemane.ckpt](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/blob/main/dbwhitemane.ckpt)
This one has a total of 39 images used for training that you can find [here](https://huggingface.co/sd-dreambooth-library/sally-whitemanev/tree/main/dataset)
**The model is based on AnythingV3 FP16 [38c1ebe3], so I would recommend using a VAE from NAI, Anything or WaifuDiffusion.**
**Also, setting clip skip to 2 will help because it is based on the NAI model.**
# Prompt examples
This one is for the comparison on top
> whitemanedb , 8k, 4k, (highres:1.1), best quality, (masterpiece:1.3), (red eyes:1.2), blush, embarrassed
> Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, deformed face, (poorly drawn face)),((buckteeth)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), 1boy,
> Steps: 45, Sampler: Euler a, CFG scale: 7, Seed: 772493513, Size: 512x512, Model hash: 313ad056, Eta: 0.07, Clip skip: 2
> whitemanedb taking a bath, 8k, 4k, (highres:1.1), best quality, (masterpiece:1.3), (red eyes:1.2), nsfw, nude, blush, nipples,
> Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, deformed face, (poorly drawn face)),((buckteeth)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), 1boy,
> Steps: 45, Sampler: Euler a, CFG scale: 7, Seed: 3450621385, Size: 512x512, Model hash: 313ad056, Eta: 0.07, Clip skip: 2
> whitemanedb in a forest
> Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, deformed face
> Steps: 35, Sampler: Euler a, CFG scale: 10.0, Seed: 2547952708, Size: 512x512, Model hash: 313ad056, Eta: 0.07, Clip skip: 2
> lying in the ground , princess, 1girl, solo, sbwhitemane in forest , leather armor, red eyes, lush
> Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, deformed face, (poorly drawn face)),((buckteeth)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), 1boy,
> Steps: 58, Sampler: Euler a, CFG scale: 7, Seed: 1390776440, Size: 512x512, Model hash: 8b1a4378, Clip skip: 2
> sbwhitemane leaning forward, princess, 1girl, solo,elf in forest , leather armor, large eyes, (ice green eyes:1.1), lush, blonde hair, realistic photo
> Negative prompt: lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts,signature, watermark, username, blurry, artist name, bad face, deformed face, (poorly drawn face)),((buckteeth)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))), out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck))), 1boy,
> Steps: 45, Sampler: Euler a, CFG scale: 7, Seed: 1501953711, Size: 512x512, Model hash: 8b1a4378, Clip skip: 2
Enjoy! Any recommendations or help are welcome; this is my first model and a lot of things can probably be improved! |
DeskDown/MarianMix_en-zh-10 | [
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"MarianMTModel"
],
"model_type": "marian",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2313
- Accuracy: 0.92
- F1: 0.9200
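As a hedged usage sketch (not part of the generated card), the classifier can be called through the text-classification pipeline; the repository id is hypothetical:
```python
from transformers import pipeline

# Hypothetical repo id -- substitute the model's actual Hub path.
classifier = pipeline("text-classification", model="<username>/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm so happy you came to visit!"))  # predicted emotion label with score
```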
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8717 | 1.0 | 250 | 0.3385 | 0.9015 | 0.8976 |
| 0.2633 | 2.0 | 500 | 0.2313 | 0.92 | 0.9200 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.2
|
Dhritam/Zova-bot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
model-index:
- name: unifiedqa-v2-t5-base-1363200-finetuned-causalqa-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# unifiedqa-v2-t5-base-1363200-finetuned-causalqa-squad
This model is a fine-tuned version of [allenai/unifiedqa-v2-t5-base-1363200](https://huggingface.co/allenai/unifiedqa-v2-t5-base-1363200) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2574
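A hedged inference sketch for this causal-QA fine-tune; the repository id is hypothetical, and the "question \n context" input layout follows the usual UnifiedQA convention rather than anything stated in this card:
```python
from transformers import pipeline

# Hypothetical repo id; input format assumes the UnifiedQA "question \n context" convention.
qa = pipeline("text2text-generation", model="<username>/unifiedqa-v2-t5-base-1363200-finetuned-causalqa-squad")
prompt = "why did the crops fail? \n The crops failed because the river flooded the fields in early spring."
print(qa(prompt, max_length=64))
```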
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7378 | 0.05 | 73 | 1.1837 |
| 0.6984 | 0.1 | 146 | 0.8918 |
| 0.4511 | 0.15 | 219 | 0.8342 |
| 0.4696 | 0.2 | 292 | 0.7642 |
| 0.295 | 0.25 | 365 | 0.7996 |
| 0.266 | 0.3 | 438 | 0.7773 |
| 0.2372 | 0.35 | 511 | 0.8592 |
| 0.2881 | 0.39 | 584 | 0.8440 |
| 0.2578 | 0.44 | 657 | 0.8306 |
| 0.2733 | 0.49 | 730 | 0.8228 |
| 0.2073 | 0.54 | 803 | 0.8419 |
| 0.2683 | 0.59 | 876 | 0.8241 |
| 0.2693 | 0.64 | 949 | 0.8573 |
| 0.355 | 0.69 | 1022 | 0.8204 |
| 0.2246 | 0.74 | 1095 | 0.8530 |
| 0.2468 | 0.79 | 1168 | 0.8410 |
| 0.3102 | 0.84 | 1241 | 0.8035 |
| 0.2115 | 0.89 | 1314 | 0.8262 |
| 0.1855 | 0.94 | 1387 | 0.8560 |
| 0.1772 | 0.99 | 1460 | 0.8747 |
| 0.1509 | 1.04 | 1533 | 0.9132 |
| 0.1871 | 1.09 | 1606 | 0.8920 |
| 0.1624 | 1.14 | 1679 | 0.9085 |
| 0.1404 | 1.18 | 1752 | 0.9460 |
| 0.1639 | 1.23 | 1825 | 0.9812 |
| 0.0983 | 1.28 | 1898 | 0.9790 |
| 0.1395 | 1.33 | 1971 | 0.9843 |
| 0.1439 | 1.38 | 2044 | 0.9877 |
| 0.1397 | 1.43 | 2117 | 1.0338 |
| 0.1095 | 1.48 | 2190 | 1.0589 |
| 0.1228 | 1.53 | 2263 | 1.0498 |
| 0.1246 | 1.58 | 2336 | 1.0923 |
| 0.1438 | 1.63 | 2409 | 1.0995 |
| 0.1305 | 1.68 | 2482 | 1.0867 |
| 0.1077 | 1.73 | 2555 | 1.1013 |
| 0.2104 | 1.78 | 2628 | 1.0765 |
| 0.1633 | 1.83 | 2701 | 1.0796 |
| 0.1658 | 1.88 | 2774 | 1.0314 |
| 0.1358 | 1.92 | 2847 | 0.9823 |
| 0.1571 | 1.97 | 2920 | 0.9826 |
| 0.1127 | 2.02 | 2993 | 1.0324 |
| 0.0927 | 2.07 | 3066 | 1.0679 |
| 0.0549 | 2.12 | 3139 | 1.1069 |
| 0.0683 | 2.17 | 3212 | 1.1624 |
| 0.0677 | 2.22 | 3285 | 1.1174 |
| 0.0615 | 2.27 | 3358 | 1.1431 |
| 0.0881 | 2.32 | 3431 | 1.1721 |
| 0.0807 | 2.37 | 3504 | 1.1885 |
| 0.0955 | 2.42 | 3577 | 1.1991 |
| 0.0779 | 2.47 | 3650 | 1.1999 |
| 0.11 | 2.52 | 3723 | 1.1774 |
| 0.0852 | 2.57 | 3796 | 1.2095 |
| 0.0616 | 2.62 | 3869 | 1.1824 |
| 0.072 | 2.67 | 3942 | 1.2397 |
| 0.1055 | 2.71 | 4015 | 1.2181 |
| 0.0806 | 2.76 | 4088 | 1.2159 |
| 0.0684 | 2.81 | 4161 | 1.1864 |
| 0.0869 | 2.86 | 4234 | 1.1816 |
| 0.1023 | 2.91 | 4307 | 1.1717 |
| 0.0583 | 2.96 | 4380 | 1.1477 |
| 0.0684 | 3.01 | 4453 | 1.1662 |
| 0.0319 | 3.06 | 4526 | 1.2174 |
| 0.0609 | 3.11 | 4599 | 1.1947 |
| 0.0435 | 3.16 | 4672 | 1.1821 |
| 0.0417 | 3.21 | 4745 | 1.1964 |
| 0.0502 | 3.26 | 4818 | 1.2140 |
| 0.0844 | 3.31 | 4891 | 1.2028 |
| 0.0692 | 3.36 | 4964 | 1.2215 |
| 0.0366 | 3.41 | 5037 | 1.2136 |
| 0.0615 | 3.46 | 5110 | 1.2224 |
| 0.0656 | 3.5 | 5183 | 1.2468 |
| 0.0469 | 3.55 | 5256 | 1.2554 |
| 0.0475 | 3.6 | 5329 | 1.2804 |
| 0.0998 | 3.65 | 5402 | 1.2035 |
| 0.0505 | 3.7 | 5475 | 1.2095 |
| 0.0459 | 3.75 | 5548 | 1.2064 |
| 0.0256 | 3.8 | 5621 | 1.2164 |
| 0.0831 | 3.85 | 5694 | 1.2154 |
| 0.0397 | 3.9 | 5767 | 1.2126 |
| 0.0449 | 3.95 | 5840 | 1.2174 |
| 0.0322 | 4.0 | 5913 | 1.2288 |
| 0.059 | 4.05 | 5986 | 1.2274 |
| 0.0382 | 4.1 | 6059 | 1.2228 |
| 0.0202 | 4.15 | 6132 | 1.2177 |
| 0.0328 | 4.2 | 6205 | 1.2305 |
| 0.0407 | 4.24 | 6278 | 1.2342 |
| 0.0356 | 4.29 | 6351 | 1.2448 |
| 0.0414 | 4.34 | 6424 | 1.2537 |
| 0.0448 | 4.39 | 6497 | 1.2540 |
| 0.0545 | 4.44 | 6570 | 1.2552 |
| 0.0492 | 4.49 | 6643 | 1.2570 |
| 0.0293 | 4.54 | 6716 | 1.2594 |
| 0.0498 | 4.59 | 6789 | 1.2562 |
| 0.0349 | 4.64 | 6862 | 1.2567 |
| 0.0497 | 4.69 | 6935 | 1.2550 |
| 0.0194 | 4.74 | 7008 | 1.2605 |
| 0.0255 | 4.79 | 7081 | 1.2590 |
| 0.0212 | 4.84 | 7154 | 1.2571 |
| 0.0231 | 4.89 | 7227 | 1.2583 |
| 0.0399 | 4.94 | 7300 | 1.2580 |
| 0.0719 | 4.99 | 7373 | 1.2574 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Dibyaranjan/nl_image_search | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-nc-4.0
tags:
- galactica
widget:
- text: "The Transformer architecture [START_REF]"
- text: "The Schwarzschild radius is defined as: \\["
- text: "A force of 0.6N is applied to an object, which accelerates at 3m/s. What is its mass? <work>"
- text: "Lecture 1: The Ising Model\n\n"
- text: "[START_I_SMILES]"
- text: "[START_AMINO]GHMQSITAGQKVISKHKNGRFYQCEVVRLTTETFYEVNFDDGSFSDNLYPEDIVSQDCLQFGPPAEGEVVQVRWTDGQVYGAKFVASHPIQMYQVEFEDGSQLVVKRDDVYTLDEELP[END_AMINO] ## Keywords"
inference: false
---

# GALACTICA 1.3B (base)
Model card from the original [repo](https://github.com/paperswithcode/galai/blob/main/docs/model_card.md)
Following [Mitchell et al. (2018)](https://arxiv.org/abs/1810.03993), this model card provides information about the GALACTICA model, how it was trained, and the intended use cases. Full details about how the model was trained and evaluated can be found in the [release paper](https://galactica.org/paper.pdf).
## Model Details
The GALACTICA models are trained on a large-scale scientific corpus. The models are designed to perform scientific tasks, including but not limited to citation prediction, scientific QA, mathematical reasoning, summarization, document generation, molecular property prediction and entity extraction. The models were developed by the Papers with Code team at Meta AI to study the use of language models for the automatic organization of science. We train models with sizes ranging from 125M to 120B parameters. Below is a summary of the released models:
| Size | Parameters |
|:-----------:|:-----------:|
| `mini` | 125 M |
| `base` | 1.3 B |
| `standard` | 6.7 B |
| `large` | 30 B |
| `huge` | 120 B |
## Release Date
November 2022
## Model Type
Transformer based architecture in a decoder-only setup with a few modifications (see paper for more details).
## Paper & Demo
[Paper](https://galactica.org/paper.pdf) / [Demo](https://galactica.org)
## Model Use
The primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain. We also anticipate the model will be useful for developers who wish to build scientific tooling. However, we caution against production use without safeguards given the potential of language models to hallucinate.
The models are made available under a non-commercial CC BY-NC 4.0 license. More information about how to use the model can be found in the README.md of this repository.
## Training Data
The GALACTICA models are trained on 106 billion tokens of open-access scientific text and data. This includes papers, textbooks, scientific websites, encyclopedias, reference material, knowledge bases, and more. We tokenize different modalities to provide a natural language interface for different tasks. See the README.md for more information. See the paper for full information on the training data.
## How to use
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b", device_map="auto")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b", device_map="auto", torch_dtype=torch.float16)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-1.3b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-1.3b", device_map="auto", load_in_8bit=True)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
## Performance and Limitations
The model outperforms several existing language models on a range of knowledge probes, reasoning, and knowledge-intensive scientific tasks. This also extends to general NLP tasks, where GALACTICA outperforms other open source general language models. That being said, we note a number of limitations in this section.
As with other language models, GALACTICA is often prone to hallucination - and training on a high-quality academic corpus does not prevent this, especially for less popular and less cited scientific concepts. There are no guarantees of truthful output when generating from the model. This extends to specific modalities such as citation prediction. While GALACTICA's citation behaviour approaches the ground truth citation behaviour with scale, the model continues to exhibit a popularity bias at larger scales.
In addition, we evaluated the model on several types of benchmarks related to stereotypes and toxicity. Overall, the model exhibits substantially lower toxicity rates compared to other large language models. That being said, the model continues to exhibit bias on certain measures (see the paper for details). So we recommend care when using the model for generations.
## Broader Implications
GALACTICA can potentially be used as a new way to discover academic literature. We also expect a lot of downstream use for application to particular domains, such as mathematics, biology, and chemistry. In the paper, we demonstrated several examples of the model acting as alternative to standard search tools. We expect a new generation of scientific tools to be built upon large language models such as GALACTICA.
We encourage researchers to investigate beneficial and new use cases for these models. That being said, it is important to be aware of the current limitations of large language models. Researchers should pay attention to common issues such as hallucination and biases that could emerge from using these models.
## Citation
```bibtex
@inproceedings{GALACTICA,
title={GALACTICA: A Large Language Model for Science},
author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
year={2022}
}
``` |
Digakive/Hsgshs | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
language:
- uk
- sv
tags:
- generated_from_trainer
- translation
model-index:
- name: mt-uk-sv-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt-uk-sv-finetuned
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-uk-sv](https://huggingface.co/Helsinki-NLP/opus-mt-uk-sv) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4210
- eval_bleu: 40.6634
- eval_runtime: 966.5303
- eval_samples_per_second: 18.744
- eval_steps_per_second: 4.687
- epoch: 6.0
- step: 40764
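A hedged usage sketch for this Ukrainian-to-Swedish fine-tune; the repository id is hypothetical:
```python
from transformers import pipeline

# Hypothetical repo id -- substitute the model's actual Hub path.
translator = pipeline("translation", model="<username>/mt-uk-sv-finetuned")
print(translator("Доброго ранку, як справи?"))  # "Good morning, how are you?" in Ukrainian
```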
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 24
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Dilmk2/DialoGPT-small-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
library_name: stable-baselines3
tags:
- seals/MountainCar-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: -100.60 +/- 5.75
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: seals/MountainCar-v0
type: seals/MountainCar-v0
---
# **PPO** Agent playing **seals/MountainCar-v0**
This is a trained model of a **PPO** agent playing **seals/MountainCar-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env seals/MountainCar-v0 -orga ernestumorga -f logs/
python enjoy.py --algo ppo --env seals/MountainCar-v0 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env seals/MountainCar-v0 -orga ernestumorga -f logs/
rl_zoo3 enjoy --algo ppo --env seals/MountainCar-v0 -f logs/
```
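For programmatic use outside the CLI, here is a hedged sketch of loading the checkpoint directly. The repository id and filename follow the usual RL Zoo naming convention and are assumptions, so check the repository's file list; also note that the custom policy class listed under Hyperparameters below means the `imitation` package must be installed for unpickling:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed repo id and filename (RL Zoo naming convention) -- verify against the actual repository.
checkpoint = load_from_hub(
    repo_id="ernestumorga/ppo-seals-MountainCar-v0",
    filename="ppo-seals-MountainCar-v0.zip",
)
model = PPO.load(checkpoint)  # requires `imitation` installed, since the saved policy class comes from it
print(model.policy)
```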
## Training (with the RL Zoo)
```
python train.py --algo ppo --env seals/MountainCar-v0 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env seals/MountainCar-v0 -f logs/ -orga ernestumorga
```
## Hyperparameters
```python
OrderedDict([('batch_size', 512),
('clip_range', 0.2),
('ent_coef', 6.4940755116195606e-06),
('gae_lambda', 0.98),
('gamma', 0.99),
('learning_rate', 0.0004476103728105138),
('max_grad_norm', 1),
('n_envs', 16),
('n_epochs', 20),
('n_steps', 256),
('n_timesteps', 1000000.0),
('normalize', 'dict(norm_obs=False, norm_reward=True)'),
('policy',
'imitation.policies.base.MlpPolicyWithNormalizeFeaturesExtractor'),
('policy_kwargs',
'dict(activation_fn=nn.Tanh, net_arch=[dict(pi=[64, 64], vf=[64, '
'64])])'),
('vf_coef', 0.25988158989488963),
('normalize_kwargs', {'norm_obs': False, 'norm_reward': False})])
```
|
DingleyMaillotUrgell/homer-bot | [
"pytorch",
"gpt2",
"text-generation",
"en",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language: id
license: mit
datasets:
- oscar
- wikipedia
- id_newspapers_2018
widget:
- text: Saya [MASK] makan nasi goreng.
- text: Kucing itu sedang bermain dengan [MASK].
pipeline_tag: fill-mask
---
# Indonesian small BigBird model
## Source Code
Source code to create this model is available at [https://github.com/ilos-vigil/bigbird-small-indonesian](https://github.com/ilos-vigil/bigbird-small-indonesian).
## Downstream Task
* NLI/ZSC: [ilos-vigil/bigbird-small-indonesian-nli](https://huggingface.co/ilos-vigil/bigbird-small-indonesian-nli)
## Model Description
This **cased** model has been pretrained with a masked LM objective. It has ~30M parameters and was pretrained for 8 epochs/51474 steps, reaching 2.078 eval loss (7.988 perplexity). The architecture of this model is shown in the configuration snippet below. The tokenizer was trained on the whole dataset with a 30K vocabulary size.
```py
from transformers import BigBirdConfig
config = BigBirdConfig(
vocab_size = 30_000,
hidden_size = 512,
num_hidden_layers = 4,
num_attention_heads = 8,
intermediate_size = 2048,
max_position_embeddings = 4096,
is_encoder_decoder=False,
attention_type='block_sparse'
)
```
## How to use
> Inference with Transformers pipeline (one MASK token)
```py
>>> from transformers import pipeline
>>> pipe = pipeline(task='fill-mask', model='ilos-vigil/bigbird-small-indonesian')
>>> pipe('Saya sedang bermain [MASK] teman saya.')
[{'score': 0.7199566960334778,
'token': 14,
'token_str':'dengan',
'sequence': 'Saya sedang bermain dengan teman saya.'},
{'score': 0.12370546162128448,
'token': 17,
'token_str': 'untuk',
'sequence': 'Saya sedang bermain untuk teman saya.'},
{'score': 0.0385284349322319,
'token': 331,
'token_str': 'bersama',
'sequence': 'Saya sedang bermain bersama teman saya.'},
{'score': 0.012146958149969578,
'token': 28,
'token_str': 'oleh',
'sequence': 'Saya sedang bermain oleh teman saya.'},
{'score': 0.009499032981693745,
'token': 25,
'token_str': 'sebagai',
'sequence': 'Saya sedang bermain sebagai teman saya.'}]
```
> Inference with PyTorch (one or multiple MASK token)
```py
import torch
from transformers import BigBirdTokenizerFast, BigBirdForMaskedLM
from pprint import pprint
tokenizer = BigBirdTokenizerFast.from_pretrained('ilos-vigil/bigbird-small-indonesian')
model = BigBirdForMaskedLM.from_pretrained('ilos-vigil/bigbird-small-indonesian')
topk = 5
text = 'Saya [MASK] bermain [MASK] teman saya.'
tokenized_text = tokenizer(text, return_tensors='pt')
raw_output = model(**tokenized_text)
tokenized_output = torch.topk(raw_output.logits, topk, dim=2).indices
score_output = torch.softmax(raw_output.logits, dim=2)
result = []
for position_idx in range(tokenized_text['input_ids'][0].shape[0]):
if tokenized_text['input_ids'][0][position_idx] == tokenizer.mask_token_id:
outputs = []
for token_idx in tokenized_output[0, position_idx]:
output = {}
output['score'] = score_output[0, position_idx, token_idx].item()
output['token'] = token_idx.item()
output['token_str'] = tokenizer.decode(output['token'])
outputs.append(output)
result.append(outputs)
pprint(result)
```
```py
[[{'score': 0.22353802621364594, 'token': 36, 'token_str': 'dapat'},
{'score': 0.13962049782276154, 'token': 24, 'token_str': 'tidak'},
{'score': 0.13610956072807312, 'token': 32, 'token_str': 'juga'},
{'score': 0.0725034773349762, 'token': 584, 'token_str': 'bermain'},
{'score': 0.033740025013685226, 'token': 38, 'token_str': 'akan'}],
[{'score': 0.7111291885375977, 'token': 14, 'token_str': 'dengan'},
{'score': 0.10754624754190445, 'token': 17, 'token_str': 'untuk'},
{'score': 0.022657711058855057, 'token': 331, 'token_str': 'bersama'},
{'score': 0.020862115547060966, 'token': 25, 'token_str': 'sebagai'},
{'score': 0.013086902908980846, 'token': 11, 'token_str': 'di'}]]
```
## Limitations and bias
Due to its low parameter count and case-sensitive tokenizer/model, this model is expected to have low performance on certain fine-tuned tasks. Just like any language model, the model reflects biases from the training dataset, which comes from various sources. Here's an example of how the model can produce biased predictions:
```py
>>> pipe('Memasak dirumah adalah kewajiban seorang [MASK].')
[{'score': 0.16381049156188965,
'sequence': 'Memasak dirumah adalah kewajiban seorang budak.',
'token': 4910,
'token_str': 'budak'},
{'score': 0.1334381103515625,
'sequence': 'Memasak dirumah adalah kewajiban seorang wanita.',
'token': 649,
'token_str': 'wanita'},
{'score': 0.11588197946548462,
'sequence': 'Memasak dirumah adalah kewajiban seorang lelaki.',
'token': 6368,
'token_str': 'lelaki'},
{'score': 0.061377108097076416,
'sequence': 'Memasak dirumah adalah kewajiban seorang diri.',
'token': 258,
'token_str': 'diri'},
{'score': 0.04679233580827713,
'sequence': 'Memasak dirumah adalah kewajiban seorang gadis.',
'token': 6845,
'token_str': 'gadis'}]
```
## Training and evaluation data
This model was pretrained with [Indonesian Wikipedia](https://huggingface.co/datasets/wikipedia) with a dump file from 2022-10-20, [OSCAR](https://huggingface.co/datasets/oscar) on subset `unshuffled_deduplicated_id` and [Indonesian Newspaper 2018](https://huggingface.co/datasets/id_newspapers_2018). Preprocessing is done using the function from [task guides - language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling#preprocess) with a 4096 block size. Each dataset is split using [`train_test_split`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.train_test_split) with a 5% allocation as evaluation data.
## Training Procedure
The model was pretrained on a single RTX 3060 for 8 epochs/51474 steps with an accumulated batch size of 128. The sequence length was limited to 4096 tokens. The optimizer used is AdamW with LR 1e-4, weight decay 0.01, learning rate warmup for the first 6% of steps (~3090 steps) and linear decay of the learning rate afterwards. However, due to an early configuration mistake, the first 2 epochs used LR 1e-3 instead. Additional information can be seen in the Tensorboard training logs.
## Evaluation
The model achieves the following results during training evaluation.
| Epoch | Steps | Eval. loss | Eval. perplexity |
| ----- | ----- | ---------- | ---------------- |
| 1 | 6249 | 2.466 | 11.775 |
| 2 | 12858 | 2.265 | 9.631 |
| 3 | 19329 | 2.127 | 8.390 |
| 4 | 25758 | 2.116 | 8.298 |
| 5 | 32187 | 2.097 | 8.141 |
| 6 | 38616 | 2.087 | 8.061 |
| 7 | 45045 | 2.081 | 8.012 |
| 8 | 51474 | 2.078 | 7.988 | |
DivyanshuSheth/T5-Seq2Seq-Final | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
---
## Prompt Trigger Keywords
Use `darkprincess638 person` to trigger the character; this works best at the start of the prompt.
## Examples
Here are some random generations to show the flexibility of the model with different prompts.
 |
Dizoid/Lll | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-nc-4.0
tags:
- galactica
widget:
- text: "The Transformer architecture [START_REF]"
- text: "The Schwarzschild radius is defined as: \\["
- text: "A force of 0.6N is applied to an object, which accelerates at 3m/s. What is its mass? <work>"
- text: "Lecture 1: The Ising Model\n\n"
- text: "[START_I_SMILES]"
- text: "[START_AMINO]GHMQSITAGQKVISKHKNGRFYQCEVVRLTTETFYEVNFDDGSFSDNLYPEDIVSQDCLQFGPPAEGEVVQVRWTDGQVYGAKFVASHPIQMYQVEFEDGSQLVVKRDDVYTLDEELP[END_AMINO] ## Keywords"
inference: false
---

# GALACTICA 6.7B (standard)
Model card from the original [repo](https://github.com/paperswithcode/galai/blob/main/docs/model_card.md)
Following [Mitchell et al. (2018)](https://arxiv.org/abs/1810.03993), this model card provides information about the GALACTICA model, how it was trained, and the intended use cases. Full details about how the model was trained and evaluated can be found in the [release paper](https://galactica.org/paper.pdf).
## Model Details
The GALACTICA models are trained on a large-scale scientific corpus. The models are designed to perform scientific tasks, including but not limited to citation prediction, scientific QA, mathematical reasoning, summarization, document generation, molecular property prediction and entity extraction. The models were developed by the Papers with Code team at Meta AI to study the use of language models for the automatic organization of science. We train models with sizes ranging from 125M to 120B parameters. Below is a summary of the released models:
| Size | Parameters |
|:-----------:|:-----------:|
| `mini` | 125 M |
| `base` | 1.3 B |
| `standard` | 6.7 B |
| `large` | 30 B |
| `huge` | 120 B |
## Release Date
November 2022
## Model Type
Transformer based architecture in a decoder-only setup with a few modifications (see paper for more details).
## Paper & Demo
[Paper](https://galactica.org/paper.pdf) / [Demo](https://galactica.org)
## Model Use
The primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain. We also anticipate the model will be useful for developers who wish to build scientific tooling. However, we caution against production use without safeguards given the potential of language models to hallucinate.
The models are made available under a non-commercial CC BY-NC 4.0 license. More information about how to use the model can be found in the README.md of this repository.
## Training Data
The GALACTICA models are trained on 106 billion tokens of open-access scientific text and data. This includes papers, textbooks, scientific websites, encyclopedias, reference material, knowledge bases, and more. We tokenize different modalities to provide a natural language interface for different tasks. See the README.md for more information. See the paper for full information on the training data.
## How to use
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-6.7b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-6.7b")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-6.7b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-6.7b", device_map="auto")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-6.7b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-6.7b", device_map="auto", torch_dtype=torch.float16)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-6.7b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-6.7b", device_map="auto", load_in_8bit=True)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
## Performance and Limitations
The model outperforms several existing language models on a range of knowledge probes, reasoning, and knowledge-intensive scientific tasks. This also extends to general NLP tasks, where GALACTICA outperforms other open source general language models. That being said, we note a number of limitations in this section.
As with other language models, GALACTICA is often prone to hallucination - and training on a high-quality academic corpus does not prevent this, especially for less popular and less cited scientific concepts. There are no guarantees of truthful output when generating from the model. This extends to specific modalities such as citation prediction. While GALACTICA's citation behaviour approaches the ground truth citation behaviour with scale, the model continues to exhibit a popularity bias at larger scales.
In addition, we evaluated the model on several types of benchmarks related to stereotypes and toxicity. Overall, the model exhibits substantially lower toxicity rates compared to other large language models. That being said, the model continues to exhibit bias on certain measures (see the paper for details). So we recommend care when using the model for generations.
## Broader Implications
GALACTICA can potentially be used as a new way to discover academic literature. We also expect a lot of downstream use in particular domains, such as mathematics, biology, and chemistry. In the paper, we demonstrated several examples of the model acting as an alternative to standard search tools. We expect a new generation of scientific tools to be built upon large language models such as GALACTICA.
We encourage researchers to investigate beneficial and new use cases for these models. That being said, it is important to be aware of the current limitations of large language models. Researchers should pay attention to common issues such as hallucination and biases that could emerge from using these models.
## Citation
```bibtex
@inproceedings{GALACTICA,
title={GALACTICA: A Large Language Model for Science},
author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
year={2022}
}
``` |
Dkwkk/Da | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
language: en
tags:
- newspapers
- library
- historic
- glam
- mdma
license: mit
metrics:
- pseudo-perplexity
widget:
- text: "1820 [DATE] We received a letter from [MASK] Majesty."
- text: "1850 [DATE] We received a letter from [MASK] Majesty."
- text: "[MASK] [DATE] The Franco-Prussian war is a matter of great concern."
- text: "[MASK] [DATE] The Schleswig war is a matter of great concern."
---
**MODEL CARD UNDER CONSTRUCTION, ETA END OF NOVEMBER**
<img src="https://upload.wikimedia.org/wikipedia/commons/5/5b/NCI_peas_in_pod.jpg" alt="erwt" width="200" >
# ERWT-year
🌺ERWT\* a language model that (🤭 maybe 🤫) knows more about history than you...🌺
ERWT is a fine-tuned [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model trained on historical newspapers from the [Heritage Made Digital collection](https://huggingface.co/datasets/davanstrien/hmd-erwt-training).
We trained a model based on a combination of text and **temporal metadata** (i.e. year information).
ERWT performs [**time-sensitive masked language modelling**](#historical-language-change-herhis-majesty-%F0%9F%91%91) or [**date prediction**](#date-prediction-pub-quiz-with-lms-%F0%9F%8D%BB).
This model is served by [Kaspar von Beelen](https://huggingface.co/Kaspar) and [Daniel van Strien](https://huggingface.co/davanstrien), *"Improving AI, one pea at a time"*.
If these models happen to be useful, please cite our working paper.
```
@misc{https://doi.org/10.48550/arxiv.2211.10086,
doi = {10.48550/ARXIV.2211.10086},
url = {https://arxiv.org/abs/2211.10086},
author = {Beelen, Kaspar and van Strien, Daniel},
keywords = {Computation and Language (cs.CL), Digital Libraries (cs.DL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Metadata Might Make Language Models Better},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}}
```
\*ERWT is dutch for PEA.
# Overview
- [Introduction: Repent Now 😇](#introductory-note-repent-now-%F0%9F%98%87)
- [Background: MDMA to the rescue 🙂](#background-mdma-to-the-rescue-%F0%9F%99%82)
- [Intended Use: LMs as History Machines 🚂](#intended-use-lms-as-history-machines)
- [Historical Language Change: Her/His Majesty? 👑](#historical-language-change-herhis-majesty-%F0%9F%91%91)
- [Date Prediction: Pub Quiz with LMs 🍻](#date-prediction-pub-quiz-with-lms-%F0%9F%8D%BB)
- [Limitations: Not all is well 😮](#limitations-not-all-is-well-%F0%9F%98%AE)
- [Training Data](#training-data)
- [Training Routine](#training-routine)
- [Data Description](#data-description)
- [Evaluation: 🤓 In case you care to count 🤓](#evaluation-%F0%9F%A4%93-in-case-you-care-to-count-%F0%9F%A4%93)
## Introductory Note: Repent Now. 😇
The ERWT models are trained for **experimental purposes**.
Please consult the [**limitations**](#limitations-not-all-is-well-%F0%9F%98%AE) section before using the models. (Seriously, read this section, **we don't repent in public just for fun**.)
If you can't get enough of these neural peas and crave some more, you can consult our working paper ["Metadata Might Make Language Models Better"](https://arxiv.org/abs/2211.10086) for more background information and nerdy evaluation stuff (work in progress, handle with care and kindness).
## Background: MDMA to the rescue. 🙂
ERWT was created using a **M**eta**D**ata **M**asking **A**pproach (or **MDMA** 💊), a scenario in which we train a Masked Language Model (MLM) on text and metadata simultaneously. Our intuition was that incorporating metadata (information that *describes* a text but is not part of the content) may make language models "better", or at least make them more **sensitive** to historical, political and geographical aspects of language use. We mainly use temporal, political and geographical metadata.
ERWT is a [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model, fine-tuned on a random subsample taken from the [Heritage Made Digital newspaper collection](https://huggingface.co/datasets/davanstrien/hmd-erwt-training). The training data comprises around half a billion words.
To unleash the power of MDMA, we adapted the training routine, mainly by fidgeting with the input data.
When preprocessing the text, we prepended each segment of a hundred words with a time stamp (year of publication) and a special `[DATE]` token.
The snippet below, taken from the [Londonderry Sentinel](https://www.britishnewspaperarchive.co.uk/viewer/bl/0001480/18700722/014/0002)...
```
Every scrap of intelligence relative to the war between France and Prussia is now read with interest.
```
... would be formatted as:
```python
"1870 [DATE] Every scrap of intelligence relative to the war between France and Prussia is now read with interest."
```
These text chunks are then forwarded to the data collator and eventually the language model.
Exposed to the tokens and (temporal) metadata, the model learns a relation between text and time. When a text token is hidden, the prepended `year` field influences the prediction of the masked words. Vice versa, when the prepended metadata is hidden, the model predicts the year of publication based on the content.
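A rough sketch of this preprocessing step (a paraphrase of the description above, not the exact training script):

```python
def add_temporal_metadata(text: str, year: int, chunk_size: int = 100) -> list[str]:
    """Prepend the publication year and a [DATE] token to each chunk of words.

    Sketch of the preprocessing described above; not the original training code.
    """
    words = text.split()
    chunks = [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]
    return [f"{year} [DATE] {chunk}" for chunk in chunks]


print(add_temporal_metadata(
    "Every scrap of intelligence relative to the war between France and Prussia "
    "is now read with interest.",
    year=1870,
)[0])
# 1870 [DATE] Every scrap of intelligence relative to the war between France and Prussia is now read with interest.
```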
## Intended Use: LMs as History Machines.
Exposing the model to temporal metadata allows us to investigate **historical language change** and perform **date prediction**.
### Historical Language Change: Her/His Majesty? 👑
Let's show how ERWT works with a very concrete example.
The ERWT models are trained on a handful of British newspapers published between 1800 and 1870, so they can be used to monitor historical change in this specific context.
Imagine you are confronted with the following snippet: "We received a letter from [MASK] Majesty" and want to predict the correct pronoun for the masked token (again assuming a British context).
👩🏫 **History Intermezzo** Please remember, for most of the nineteenth century, Queen Victoria ruled Britain, from 1837 to 1901 to be precise. Her nineteenth-century predecessors (George III, George IV and William IV) were all male.
While a standard language model will provide you with a single, general prediction (based on what it has observed during training), ERWT allows you to manipulate the prediction by anchoring the text in a specific year.
Doing this requires just a few lines of code:
```python
from transformers import pipeline
mask_filler = pipeline("fill-mask",
model='Livingwithmachines/erwt-year')
mask_filler(f"1820 [DATE] We received a letter from [MASK] Majesty.")
```
This returns "his" as the most likely filler:
```python
{'score': 0.8527863025665283,
'token': 2010,
'token_str': 'his',
'sequence': '1820 we received a letter from his majesty.'}
```
However, if we change the date at the start of the sentence to 1850:
```python
mask_filler(f"1850 [DATE] We received a letter from [MASK] Majesty.")
```
ERWT puts most of the probability mass on the token "her" and only a little bit on "his".
```python
{'score': 0.8168327212333679,
'token': 2014,
'token_str': 'her',
'sequence': '1850 we received a letter from her majesty.'}
```
You can repeat this experiment for yourself using the example sentences in the **Hosted inference API** at the top right.
Okay, but why is this **interesting**?
Firstly, eyeballing some toy examples (but also using more rigorous metrics such as [perplexity](#evaluation-%F0%9F%A4%93-in-case-you-care-to-count-%F0%9F%A4%93)) shows that MLMs yield more accurate predictions when they have access to temporal metadata.
In other words, **ERWT models are better at capturing historical context.**
Secondly, MDMA may **reduce biases** that arise from imbalanced training data (or at least give us more of a handle on this problem). Admittedly, we have to prove this more formally, but some experiments at least hint in this direction.
### Date Prediction: Pub Quiz with LMs 🍻
Another feature of ERWT is **date prediction**. Remember that during training the temporal metadata token is regularly masked. In this case, the model effectively learns to situate documents in time based on the tokens in a text.
By masking the year token at the beginning of the text string, ERWT guesses the document's year of publication.
👩🏫 **History Intermezzo** To unite the German states (there used to be [plenty](https://www.britannica.com/topic/German-Confederation)!), Prussia fought a number of wars with its neighbours in the second half of the nineteenth century. It invaded Denmark in 1864 (the second of the Schleswig Wars) and France in 1870 (the Franco-Prussian war).
Reusing the code above, we can time-stamp documents by masking the year. For example, the line of Python code below:
```python
mask_filler("[MASK] [DATE] The Schleswig war is a matter of great concern.")
```
outputs the following as the most likely filler:
```python
{'score': 0.48822104930877686,
'token': 6717,
'token_str': '1864',
'sequence': '1864 the schleswig war is a matter of great concern.'}
```
The prediction "1864" makes sense; this was indeed the year of Prussian troops (with some help of their Austrian friends) crossed the border into Schleswig, then part of the Kingdom of Denmark.
A few years later, in 1870, Prussia aimed its artillery and bayonets southwards and invaded France.
```python
mask_filler("[MASK] [DATE] The Franco-Prussian war is a matter of great concern.")
```
ERWT clearly learned a lot about the history of German unification by ploughing through a plethora of nineteenth-century newspaper articles: it correctly returns "1870" as the predicted year for the Franco-Prussian war!
Again, we have to ask: Who cares? Wikipedia can tell us pretty much the same. More importantly, don't we already have timestamps for newspaper data?
In both cases, our answer is "yes, but...". ERWT's time-stamping powers have little instrumental use and won't make us rich (but donations are welcome of course 🤑). Nonetheless, we believe date prediction has value for research purposes. We can use ERWT for "fictitious" prediction, i.e. as a diagnostic tool.
Firstly, we used date prediction for evaluation purposes, to measure which training routine produces models that best capture the year of publication from a set of tokens.
Secondly, we could use date prediction as an analytical or research tool, and study, for example, temporal variation **within** text documents; or scrutinise which features drive the time prediction (it goes without saying that the same applies to other metadata fields, like political orientation).
## Limitations: Not all is well 😮.
The ERWT series were trained for evaluation purposes and therefore carry some critical limitations.
### Training Data
Many of the limitations are a direct result of the training data. ERWT models are trained on a rather small subsample of nineteenth-century **British newspapers**, and their predictions have to be understood in this context (remember, "Her Majesty?"). The corpus has a strong **Metropolitan and liberal bias** (see the section on Data Description for more information).
The training data spans from **1800 to 1870**. If your research interest is outside of this period, it's unlikely that ERWT will be of much use. Don't ask the poor model to predict when the Second World War happened. ERWT can be smart (at times) but it doesn't have the power of fortune-telling. At least not yet...
Furthermore, historical models tend to reflect past (and present?) stereotypes and prejudices. We strongly advise against using these models outside of a research context. The predictions are likely to exhibit harmful biases, they should be investigated critically and understood within the context of nineteenth-century British cultural history.
One way of evaluating a model's bias is to gauge the impact of changing a prompt on the predicted [MASK] token. Often a comparison is made between the predictions given for 'The **man** worked as a [MASK]' and 'The **woman** worked as a [MASK]'.
An example of the output for this model:
```
1810 [DATE] The man worked as a [MASK].
```
Produces the following three top predicted mask tokens
```python
[
{
"score": 0.17358914017677307,
"token": 10533,
"token_str": "carpenter",
},
{
"score": 0.08387620747089386,
"token": 22701,
"token_str": "tailor",
},
{
"score": 0.068501777946949,
"token": 6243,
"token_str": "baker",
}
]
```
```
1810 [DATE] The woman worked as a [MASK].
```
Produces the following three top predicted mask tokens
```python
[
{
"score": 0.148710235953331,
"token": 7947,
"token_str": "servant",
},
{
"score": 0.07184035331010818,
"token": 6243,
"token_str": "baker",
},
{
"score": 0.0675836056470871,
"token": 6821,
"token_str": "nurse",
},
]
```
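The outputs above can be reproduced with the same `fill-mask` pipeline used earlier; passing `top_k=3` returns the three most likely fillers:

```python
from transformers import pipeline

# Reproduces the man/woman prompt comparison shown above.
mask_filler = pipeline("fill-mask", model="Livingwithmachines/erwt-year")

for prompt in [
    "1810 [DATE] The man worked as a [MASK].",
    "1810 [DATE] The woman worked as a [MASK].",
]:
    print(prompt)
    for prediction in mask_filler(prompt, top_k=3):
        print(f"  {prediction['token_str']}: {prediction['score']:.3f}")
```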
Mostly, prompt evaluation is done to assess the bias in *contemporary* language models. In the case of historic language models, the bias exhibited by a model *may* be a valuable research tool in assessing (at scale) language use over time, and the stereotypes and prejudices encoded in text corpora.
For this particular prompt, the 'bias' exhibited by the language model (and the underlying data) may be a relatively accurate reflection of employment patterns during the 19th century. A possible area of exploration is to see how these predictions change when the model is prompted with different dates. With a dataset covering a more extended time period, we may expect to see a decline in the [MASK] `servant` toward the end of the 19th Century and particularly following the start of the First World War when the number of domestic servants employed in the United Kingdom fell rapidly.
### Training Routine
We created various ERWT models as part of a wider experiment that aimed to establish best practices and guidelines for training models with metadata. An overview of all the models is available on our [GitHub](https://github.com/Living-with-machines/ERWT/) page.
To reduce training time, we based our experiments on a random subsample of the HMD corpus, consisting of half a billion tokens.
Furthermore, we only trained the models for one epoch, which implies they are most likely **undertrained** at the moment.
We were mainly interested in the **relative** performance of the different ERWT models. We did, however, compare ERWT with [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) in our evaluation experiments. And, of course, our tiny LM peas
did much better. 🎉🥳
Want to know the details—Oh, critical reader!—then consult and cite [our working paper](https://arxiv.org/abs/2211.10086)!
## Data Description
The ERWT models are trained on an openly accessible newspaper corpus created by the [Heritage Made Digital (HMD) newspaper digitisation project](https://blogs.bl.uk/thenewsroom/2019/01/heritage-made-digital-the-newspapers.html).
The HMD newspapers comprise around 2 billion words in total, but the bulk of the articles originate from the (then) liberal paper *The Sun*.
Geographically, most papers are metropolitan (i.e. based in London). The inclusion of *The Northern Daily Times* and *Liverpool Standard* adds some geographical diversity to this corpus. The political classification is based on historical newspaper press directories; please read [our paper](https://academic.oup.com/dsh/advance-article/doi/10.1093/llc/fqac037/6644524?searchresult=1) on bias in newspaper collections for more information.
The table below contains a more detailed overview of the corpus.
| NLP | Title | Politics | Location | Tokens |
|------|--------------------------|--------------|-----------|---------------|
| 2083 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 14,094,212 |
| 2084 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 34,450,366 |
| 2085 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 16,166,627 |
| 2088 | The Liverpool Standard | CONSERVATIVE | LIVERPOOL | 149,204,800 |
| 2090 | The Liverpool Standard | CONSERVATIVE | LIVERPOOL | 6,417,320 |
| 2194 | The Sun | LIBERAL | LONDON | 1,155,791,480 |
| 2244 | Colored News | NONE | LONDON | 53,634 |
| 2642 | The Express | LIBERAL | LONDON | 236,240,555 |
| 2644 | National Register | CONSERVATIVE | LONDON | 23,409,733 |
| 2645 | The Press | CONSERVATIVE | LONDON | 15,702,276 |
| 2646 | The Star | NONE | LONDON | 163,072,742 |
| 2647 | The Statesman | RADICAL | LONDON | 61,225,215 |
Table 1: Overview of Newspapers included in the Heritage Made Digital newspaper corpus
Temporally, most of the articles date from the second half of the nineteenth century. The figure below gives an overview of the number of articles by year.

## Evaluation: 🤓 In case you care to count 🤓
Our article ["Metadata Might Make Language Models Better"](https://arxiv.org/abs/2211.10086) comprises an extensive evaluation of all the MDMA-infused language models.
The table below shows the [pseudo-perplexity](https://arxiv.org/abs/1910.14659) scores for different models based on text documents of 64 and 128 tokens.
In general, [ERWT-year-masked-25](https://huggingface.co/Livingwithmachines/erwt-year-masked-25) turned out to yield the most competitive scores across different tasks and we generally recommend you use this model.
| model | mean (64 tokens) | sd (64 tokens) | mean (128 tokens) | sd (128 tokens) |
|------------------|------------------|----------------|-------------------|-----------------|
| DistilBERT | 354.40 | 376.32 | 229.19 | 294.70 |
| HMDistilBERT | 32.94 | 64.78 | 25.72 | 45.99 |
| ERWT-year | 31.49 | 61.85 | 24.97 | 44.58 |
| ERWT-st | 31.69 | 62.42 | 25.03 | 44.74 |
| ERWT-year-masked-25 | **30.97** | 61.50 | **24.59** | 44.36 |
| ERWT-year-masked-75 | 31.02 | 61.41 | 24.63 | 44.40 |
| PEA | 31.63 | 62.09 | 25.58 | 44.99 |
| PEA-st | 31.65 | 62.19 | 25.59 | 44.99 |
Table 2: Means and standard deviations of pseudo-perplexity scores computed on 1,000 fragments of 64 and 128 tokens, respectively
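As a rough guide, pseudo-perplexity masks each token in turn, scores it with the MLM, and exponentiates the mean negative log-likelihood ([Salazar et al.](https://arxiv.org/abs/1910.14659)). The snippet below is a simplified sketch of that procedure, not the exact evaluation script used for the paper:

```python
import math

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Simplified pseudo-perplexity sketch; the paper's evaluation script may differ
# in details such as batching and token filtering.
tokenizer = AutoTokenizer.from_pretrained("Livingwithmachines/erwt-year")
model = AutoModelForMaskedLM.from_pretrained("Livingwithmachines/erwt-year")
model.eval()


def pseudo_perplexity(text: str) -> float:
    input_ids = tokenizer(text, return_tensors="pt").input_ids[0]
    nlls = []
    for i in range(1, len(input_ids) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        nlls.append(-log_probs[input_ids[i]].item())
    return math.exp(sum(nlls) / len(nlls))


print(pseudo_perplexity("1870 [DATE] Every scrap of intelligence is now read with interest."))
```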
## Questions?
Questions? Feedback? Please leave a message!
|
Dmitry12/sber | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: nvidia-segformer-b0-finetuned-ade-512-512-finetuned-ISIC17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nvidia-segformer-b0-finetuned-ade-512-512-finetuned-ISIC17
This model is a fine-tuned version of [nvidia/segformer-b0-finetuned-ade-512-512](https://huggingface.co/nvidia/segformer-b0-finetuned-ade-512-512) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1948
- Mean Iou: 0.8064
- Mean Accuracy: 0.8726
- Overall Accuracy: 0.9381
- Per Category Iou: [0.6841604127643356, 0.9285439643646547]
- Per Category Accuracy: [0.7721651141608432, 0.9729809595315688]
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
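As a rough guide, the hyperparameters above map onto `transformers.TrainingArguments` as sketched below; the `output_dir` is illustrative and the original training script is not included in this card:

```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above; treat as a sketch,
# not the original training configuration file.
training_args = TrainingArguments(
    output_dir="nvidia-segformer-b0-finetuned-ade-512-512-finetuned-ISIC17",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```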
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------------------------------:|:-----------------------------------------:|
| 0.481 | 0.16 | 10 | 0.4235 | 0.6191 | 0.6970 | 0.8761 | [0.3719409076673884, 0.8662862424406493] | [0.42270204900152314, 0.9713331864930521] |
| 0.4147 | 0.32 | 20 | 0.3894 | 0.7067 | 0.8502 | 0.8853 | [0.5464942438498753, 0.8668431573745645] | [0.7965579529885418, 0.9038859083170013] |
| 0.356 | 0.48 | 30 | 0.3148 | 0.7467 | 0.8513 | 0.9107 | [0.5963581593534901, 0.897077797385972] | [0.7603709174964982, 0.9422313184595918] |
| 0.3039 | 0.63 | 40 | 0.3024 | 0.7620 | 0.8671 | 0.9162 | [0.6211722830632663, 0.9028139512386881] | [0.7918407335685692, 0.9422883932404167] |
| 0.2545 | 0.79 | 50 | 0.2849 | 0.7766 | 0.8898 | 0.9201 | [0.6468577863419183, 0.9063792530493855] | [0.8432862096150755, 0.9362151542385662] |
| 0.2635 | 0.95 | 60 | 0.2504 | 0.7828 | 0.8644 | 0.9279 | [0.6487213857926865, 0.9168129696986418] | [0.7671470887645524, 0.9616549114054705] |
| 0.2175 | 1.11 | 70 | 0.2497 | 0.7849 | 0.8682 | 0.9283 | [0.6526705030304356, 0.9171225024239068] | [0.7762677096648272, 0.9602225755678137] |
| 0.2025 | 1.27 | 80 | 0.2400 | 0.7840 | 0.8632 | 0.9288 | [0.6501844204669202, 0.9178944798865282] | [0.7627291445016801, 0.9636411137781736] |
| 0.2035 | 1.43 | 90 | 0.2288 | 0.7931 | 0.8749 | 0.9313 | [0.6657367286733036, 0.9203778068784213] | [0.7885027822639286, 0.9612655167036179] |
| 0.2488 | 1.59 | 100 | 0.2110 | 0.7978 | 0.8719 | 0.9341 | [0.6717638717220313, 0.923859975121704] | [0.7766611302038285, 0.9672003292652145] |
| 0.1954 | 1.75 | 110 | 0.2067 | 0.7962 | 0.8597 | 0.9354 | [0.666599427783381, 0.9258672754383861] | [0.7436428904928473, 0.9757231213956472] |
| 0.1806 | 1.9 | 120 | 0.2047 | 0.7926 | 0.8525 | 0.9349 | [0.6596059897565958, 0.925563006736469] | [0.726197674685608, 0.9787940661520825] |
| 0.161 | 2.06 | 130 | 0.2047 | 0.7903 | 0.8505 | 0.9342 | [0.6558737849234609, 0.9247714617107691] | [0.7223974159771602, 0.9786951901233297] |
| 0.1736 | 2.22 | 140 | 0.2023 | 0.7948 | 0.8588 | 0.9349 | [0.6643652721485811, 0.9252950591002775] | [0.742124317828686, 0.9754152391272543] |
| 0.1947 | 2.38 | 150 | 0.2077 | 0.7985 | 0.8656 | 0.9355 | [0.6712414223331253, 0.9257326708494226] | [0.7585178608332249, 0.9726888331181641] |
| 0.1464 | 2.54 | 160 | 0.1960 | 0.8030 | 0.8680 | 0.9373 | [0.678274892507806, 0.9276935390097538] | [0.7620104248788739, 0.9740685958478499] |
| 0.1644 | 2.7 | 170 | 0.1964 | 0.8064 | 0.8751 | 0.9377 | [0.6847175060674714, 0.9279857318627613] | [0.7791196258677832, 0.9710404169835255] |
| 0.1803 | 2.86 | 180 | 0.1948 | 0.8064 | 0.8726 | 0.9381 | [0.6841604127643356, 0.9285439643646547] | [0.7721651141608432, 0.9729809595315688] |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.0+cu116
- Datasets 2.7.0
- Tokenizers 0.12.1
|
Doiman/DialoGPT-medium-harrypotter | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: aw-gpt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# aw-gpt
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.2452
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- num_epochs: 60
- mixed_precision_training: Native AMP
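As a rough guide, these settings map onto `transformers.TrainingArguments` as sketched below; note that gradient accumulation of 8 over a per-device batch of 8 yields the effective batch size of 64 listed above (the `output_dir` is illustrative and the original training script is not included in this card):

```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above; treat as a sketch,
# not the original training configuration file.
training_args = TrainingArguments(
    output_dir="aw-gpt",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,  # 8 x 8 = effective batch size of 64
    lr_scheduler_type="cosine",
    warmup_steps=200,
    num_train_epochs=60,
    fp16=True,  # "Native AMP" mixed precision
)
```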
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu116
- Datasets 2.6.1
- Tokenizers 0.11.6
|
DongHai/DialoGPT-small-rick | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 5 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 5,
"warmup_steps": 1,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
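Because the architecture above is a standard mean-pooling setup, the model can also be used with plain `transformers`; the sketch below re-implements the `Pooling` and `Normalize` modules by hand, keeping the `{MODEL_NAME}` placeholder used elsewhere in this card:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Mean-pool the token embeddings (ignoring padding) and L2-normalise,
# mirroring the Pooling and Normalize modules listed above.
tokenizer = AutoTokenizer.from_pretrained("{MODEL_NAME}")
model = AutoModel.from_pretrained("{MODEL_NAME}")

sentences = ["This is an example sentence", "Each sentence is converted"]
encoded = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state

mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # (2, 768)
```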
## Citing & Authors
<!--- Describe where people can find more information --> |
DongHyoungLee/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
license: agpl-3.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# sentence-t5-base-nlpl-code_search_net
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
It has been trained on the [code_search_net](https://huggingface.co/datasets/code_search_net) dataset
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
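Since the model was trained on code_search_net, a natural application is ranking code snippets against a natural-language query. The sketch below uses `sentence_transformers.util.cos_sim` with a made-up query and snippets, keeping the `{MODEL_NAME}` placeholder used above:

```python
from sentence_transformers import SentenceTransformer, util

# Toy example: rank code snippets against a docstring-style query.
model = SentenceTransformer("{MODEL_NAME}")

query = "read a json file and return its contents as a dict"
snippets = [
    "def load_json(path):\n    with open(path) as f:\n        return json.load(f)",
    "def add(a, b):\n    return a + b",
]

query_emb = model.encode(query, convert_to_tensor=True)
snippet_embs = model.encode(snippets, convert_to_tensor=True)
scores = util.cos_sim(query_emb, snippet_embs)[0]

for snippet, score in sorted(zip(snippets, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}\t{snippet.splitlines()[0]}")
```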
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 58777 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 100,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: T5EncoderModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Donghyun/L2_BERT | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-nc-4.0
tags:
- galactica
widget:
- text: "The Transformer architecture [START_REF]"
- text: "The Schwarzschild radius is defined as: \\["
- text: "A force of 0.6N is applied to an object, which accelerates at 3m/s. What is its mass? <work>"
- text: "Lecture 1: The Ising Model\n\n"
- text: "[START_I_SMILES]"
- text: "[START_AMINO]GHMQSITAGQKVISKHKNGRFYQCEVVRLTTETFYEVNFDDGSFSDNLYPEDIVSQDCLQFGPPAEGEVVQVRWTDGQVYGAKFVASHPIQMYQVEFEDGSQLVVKRDDVYTLDEELP[END_AMINO] ## Keywords"
inference: false
---

# GALACTICA 120 B (huge)
Model card from the original [repo](https://github.com/paperswithcode/galai/blob/main/docs/model_card.md)
Following [Mitchell et al. (2018)](https://arxiv.org/abs/1810.03993), this model card provides information about the GALACTICA model, how it was trained, and the intended use cases. Full details about how the model was trained and evaluated can be found in the [release paper](https://galactica.org/paper.pdf).
**This model checkpoint was integrated into the Hub by [Manuel Romero](https://huggingface.co/mrm8488)**
## Model Details
The GALACTICA models are trained on a large-scale scientific corpus. The models are designed to perform scientific tasks, including but not limited to citation prediction, scientific QA, mathematical reasoning, summarization, document generation, molecular property prediction and entity extraction. The models were developed by the Papers with Code team at Meta AI to study the use of language models for the automatic organization of science. We train models with sizes ranging from 125M to 120B parameters. Below is a summary of the released models:
| Size | Parameters |
|:-----------:|:-----------:|
| `mini` | 125 M |
| `base` | 1.3 B |
| `standard` | 6.7 B |
| `large` | 30 B |
| `huge` | 120 B |
## Release Date
November 2022
## Model Type
Transformer based architecture in a decoder-only setup with a few modifications (see paper for more details).
## Paper & Demo
[Paper](https://galactica.org/paper.pdf) / [Demo](https://galactica.org)
## Model Use
The primary intended users of the GALACTICA models are researchers studying language models applied to the scientific domain. We also anticipate the model will be useful for developers who wish to build scientific tooling. However, we caution against production use without safeguards given the potential of language models to hallucinate.
The models are made available under a non-commercial CC BY-NC 4.0 license. More information about how to use the model can be found in the README.md of this repository.
## Training Data
The GALACTICA models are trained on 106 billion tokens of open-access scientific text and data. This includes papers, textbooks, scientific websites, encyclopedias, reference material, knowledge bases, and more. We tokenize different modalities to provide a natural language interface for different tasks. See the README.md for more information. See the paper for full information on the training data.
## How to use
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
### Running the model on a CPU
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-120b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-120b")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-120b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-120b", device_map="auto")
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
### Running the model on a GPU using different precisions
#### FP16
<details>
<summary> Click to expand </summary>
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-120b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-120b", device_map="auto", torch_dtype=torch.float16)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
#### INT8
<details>
<summary> Click to expand </summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, OPTForCausalLM
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-120b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-120b", device_map="auto", load_in_8bit=True)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
</details>
## Performance and Limitations
The model outperforms several existing language models on a range of knowledge probes, reasoning, and knowledge-intensive scientific tasks. This also extends to general NLP tasks, where GALACTICA outperforms other open source general language models. That being said, we note a number of limitations in this section.
As with other language models, GALACTICA is often prone to hallucination - and training on a high-quality academic corpus does not prevent this, especially for less popular and less cited scientific concepts. There are no guarantees of truthful output when generating from the model. This extends to specific modalities such as citation prediction. While GALACTICA's citation behaviour approaches the ground truth citation behaviour with scale, the model continues to exhibit a popularity bias at larger scales.
In addition, we evaluated the model on several types of benchmarks related to stereotypes and toxicity. Overall, the model exhibits substantially lower toxicity rates compared to other large language models. That being said, the model continues to exhibit bias on certain measures (see the paper for details). So we recommend care when using the model for generations.
## Broader Implications
GALACTICA can potentially be used as a new way to discover academic literature. We also expect a lot of downstream use in particular domains, such as mathematics, biology, and chemistry. In the paper, we demonstrated several examples of the model acting as an alternative to standard search tools. We expect a new generation of scientific tools to be built upon large language models such as GALACTICA.
We encourage researchers to investigate beneficial and new use cases for these models. That being said, it is important to be aware of the current limitations of large language models. Researchers should pay attention to common issues such as hallucination and biases that could emerge from using these models.
## Citation
```bibtex
@inproceedings{GALACTICA,
title={GALACTICA: A Large Language Model for Science},
author={Ross Taylor and Marcin Kardas and Guillem Cucurull and Thomas Scialom and Anthony Hartshorn and Elvis Saravia and Andrew Poulton and Viktor Kerkez and Robert Stojnic},
year={2022}
}
``` |
Dongjae/mrc2reader | [
"pytorch",
"xlm-roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language: en
tags:
- newspapers
- library
- historic
- glam
- mdma
license: mit
metrics:
- pseudo-perplexity
widget:
- text: "[1820] [SEP] We received a letter from [MASK] Majesty."
- text: "[1850] [SEP] We received a letter from [MASK] Majesty."
- text: "[MASK] [SEP] The Franco-Prussian war is a matter of great concern."
- text: "[MASK] [SEP] The Schleswig war is a matter of great concern."
---
**MODEL CARD UNDER CONSTRUCTION, ETA END OF NOVEMBER**
<img src="https://upload.wikimedia.org/wikipedia/commons/5/5b/NCI_peas_in_pod.jpg" alt="erwt" width="200" >
# ERWT-year-st
🌺ERWT\* a language model that (🤭 maybe 🤫) knows more about history than you...🌺
ERWT is a fine-tuned [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model trained on historical newspapers from the [Heritage Made Digital collection](https://huggingface.co/datasets/davanstrien/hmd-erwt-training).
We trained a model based on a combination of text and **temporal metadata** (i.e. year information).
ERWT performs [**time-sensitive masked language modelling**](#historical-language-change-herhis-majesty-%F0%9F%91%91) or [**date prediction**](#date-prediction-pub-quiz-with-lms-%F0%9F%8D%BB).
This model is served by [Kaspar von Beelen](https://huggingface.co/Kaspar) and [Daniel van Strien](https://huggingface.co/davanstrien), *"Improving AI, one pea at a time"*.
If these models happen to be useful, please cite our working paper.
```
@misc{https://doi.org/10.48550/arxiv.2211.10086,
doi = {10.48550/ARXIV.2211.10086},
url = {https://arxiv.org/abs/2211.10086},
author = {Beelen, Kaspar and van Strien, Daniel},
keywords = {Computation and Language (cs.CL), Digital Libraries (cs.DL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Metadata Might Make Language Models Better},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}}
```
\*ERWT is dutch for PEA.
# Overview
- [Introduction: Repent Now 😇](#introductory-note-repent-now-%F0%9F%98%87)
- [Background: MDMA to the rescue 🙂](#background-mdma-to-the-rescue-%F0%9F%99%82)
- [Intended Use: LMs as History Machines 🚂](#intended-use-lms-as-history-machines)
- [Historical Language Change: Her/His Majesty? 👑](#historical-language-change-herhis-majesty-%F0%9F%91%91)
- [Date Prediction: Pub Quiz with LMs 🍻](#date-prediction-pub-quiz-with-lms-%F0%9F%8D%BB)
- [Limitations: Not all is well 😮](#limitations-not-all-is-well-%F0%9F%98%AE)
- [Training Data](#training-data)
- [Training Routine](#training-routine)
- [Data Description](#data-description)
- [Evaluation: 🤓 In case you care to count 🤓](#evaluation-%F0%9F%A4%93-in-case-you-care-to-count-%F0%9F%A4%93)
## Introductory Note: Repent Now. 😇
The ERWT models are trained for **experimental purposes**.
Please consult the [**limitations**](#limitations-not-all-is-well-%F0%9F%98%AE) section before using the models. (Seriously, read this section, **we don't repent in public just for fun**.)
If you can't get enough of these neural peas and crave some more, you can consult our working paper ["Metadata Might Make Language Models Better"](https://arxiv.org/abs/2211.10086) for more background information and nerdy evaluation stuff (work in progress, handle with care and kindness).
## Background: MDMA to the rescue. 🙂
ERWT was created using a **M**eta**D**ata **M**asking **A**pproach (or **MDMA** 💊), a scenario in which we train a Masked Language Model (MLM) on text and metadata simultaneously. Our intuition was that incorporating metadata (information that *describes* a text but is not part of the content) may make language models "better", or at least make them more **sensitive** to historical, political and geographical aspects of language use. We mainly use temporal, political and geographical metadata.
ERWT is a [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model, fine-tuned on a random subsample taken from the [Heritage Made Digital newspaper collection](https://huggingface.co/datasets/davanstrien/hmd-erwt-training). The training data comprises around half a billion words.
To unleash the power of MDMA, we adapted the training routine, mainly by fidgeting with the input data.
When preprocessing the text, we prepended each segment of a hundred words with a time stamp (year of publication) and a special `[DATE]` token.
The snippet below, taken from the [Londonderry Sentinel](https://www.britishnewspaperarchive.co.uk/viewer/bl/0001480/18700722/014/0002)...
```
Every scrap of intelligence relative to the war between France and Prussia is now read with interest.
```
... would be formatted as:
```python
"1870 [DATE] Every scrap of intelligence relative to the war between France and Prussia is now read with interest."
```
These text chunks are then forwarded to the data collator and eventually the language model.
Exposed to the tokens and (temporal) metadata, the model learns a relation between text and time. When a text token is hidden, the prepended `year` field influences the prediction of the masked words. Vice versa, when the prepended metadata is hidden, the model predicts the year of publication based on the content.
## Intended Use: LMs as History Machines.
Exposing the model to temporal metadata allows us to investigate **historical language change** and perform **date prediction**.
### Historical Language Change: Her/His Majesty? 👑
Let's show how ERWT works with a very concrete example.
The ERWT models are trained on a handful of British newspapers published between 1800 and 1870, so they can be used to monitor historical change in this specific context.
Imagine you are confronted with the following snippet: "We received a letter from [MASK] Majesty" and want to predict the correct pronoun for the masked token (again assuming a British context).
👩🏫 **History Intermezzo** Please remember, for most of the nineteenth century, Queen Victoria ruled Britain, from 1837 to 1901 to be precise. Her nineteenth-century predecessors (George III, George IV and William IV) were all male.
While a standard language model will provide you with a single, general prediction (based on what it has observed during training), ERWT allows you to manipulate the prediction by anchoring the text in a specific year.
Doing this requires just a few lines of code:
```python
from transformers import pipeline
mask_filler = pipeline("fill-mask",
model='Livingwithmachines/erwt-year-st')
mask_filler(f"1820 [DATE] We received a letter from [MASK] Majesty.")
```
This returns "his" as the most likely filler:
```python
{'score': 0.8527863025665283,
'token': 2010,
'token_str': 'his',
'sequence': '1820 we received a letter from his majesty.'}
```
However, if we change the date at the start of the sentence to 1850:
```python
mask_filler(f"1850 [DATE] We received a letter from [MASK] Majesty.")
```
ERWT puts most of the probability mass on the token "her" and only a little bit on "his".
```python
{'score': 0.8168327212333679,
'token': 2014,
'token_str': 'her',
'sequence': '1850 we received a letter from her majesty.'}
```
You can repeat this experiment for yourself using the example sentences in the **Hosted inference API** at the top right.
Okay, but why is this **interesting**?
Firstly, eyeballing some toy examples (but also using more rigorous metrics such as [perplexity](#evaluation-%F0%9F%A4%93-in-case-you-care-to-count-%F0%9F%A4%93)) shows that MLMs yield more accurate predictions when they have access to temporal metadata.
In other words, **ERWT models are better at capturing historical context.**
Secondly, MDMA may **reduce biases** that arise from imbalanced training data (or at least give us more of a handle on this problem). Admittedly, we have to prove this more formally, but some experiments at least hint in this direction.
### Date Prediction: Pub Quiz with LMs 🍻
Another feature of ERWT is **date prediction**. Remember that during training the temporal metadata token is regularly masked. In this case, the model effectively learns to situate documents in time based on the tokens in a text.
By masking the year token at the beginning of the text string, ERWT guesses the document's year of publication.
👩🏫 **History Intermezzo** To unite the German states (there used to be [plenty](https://www.britannica.com/topic/German-Confederation)!), Prussia fought a number of wars with its neighbours in the second half of the nineteenth century. It invaded Denmark in 1864 (the second of the Schleswig Wars) and France in 1870 (the Franco-Prussian war).
Reusing the code above, we can time-stamp documents by masking the year. For example, the line of Python code below:
```python
mask_filler("[MASK] [DATE] The Schleswig war is a matter of great concern.")
```
outputs the following as the most likely filler:
```python
{'score': 0.48822104930877686,
'token': 6717,
'token_str': '1864',
'sequence': '1864 the schleswig war is a matter of great concern.'}
```
The prediction "1864" makes sense; this was indeed the year of Prussian troops (with some help of their Austrian friends) crossed the border into Schleswig, then part of the Kingdom of Denmark.
A few years later, in 1870, Prussia aimed its artillery and bayonets southwards and invaded France.
```python
mask_filler("[MASK] [DATE] The Franco-Prussian war is a matter of great concern.")
```
ERWT clearly learned a lot about the history of German unification by ploughing through a plethora of nineteenth-century newspaper articles: it correctly returns "1870" as the predicted year for the Franco-Prussian war!
Again, we have to ask: Who cares? Wikipedia can tell us pretty much the same. More importantly, don't we already have timestamps for newspaper data?
In both cases, our answer is "yes, but...". ERWT's time-stamping powers have little instrumental use and won't make us rich (but donations are welcome of course 🤑). Nonetheless, we believe date prediction has value for research purposes. We can use ERWT for "fictitious" prediction, i.e. as a diagnostic tool.
Firstly, we used date prediction for evaluation purposes, to measure which training routine produces models that best capture the year of publication from a set of tokens.
Secondly, we could use date prediction as an analytical or research tool, and study, for example, temporal variation **within** text documents; or scrutinise which features drive the time prediction (it goes without saying that the same applies to other metadata fields, like political orientation).
## Limitations: Not all is well 😮.
The ERWT series were trained for evaluation purposes and therefore carry some critical limitations.
### Training Data
Many of the limitations are a direct result of the training data. ERWT models are trained on a rather small subsample of nineteenth-century **British newspapers**, and their predictions have to be understood in this context (remember, "Her Majesty?"). The corpus has a strong **Metropolitan and liberal bias** (see the section on Data Description for more information).
The training data spans from **1800 to 1870**. If your research interest is outside of this period, it's unlikely that ERWT will be of much use. Don't ask the poor model to predict when the Second World War happened. ERWT can be smart (at times) but it doesn't have the power of fortune-telling. At least not yet...
Furthermore, historical models tend to reflect past (and present?) stereotypes and prejudices. We strongly advise against using these models outside of a research context. The predictions are likely to exhibit harmful biases, they should be investigated critically and understood within the context of nineteenth-century British cultural history.
One way of evaluating a model's bias is to gauge the impact of changing a prompt on the predicted [MASK] token. Often a comparison is made between the predictions given for 'The **man** worked as a [MASK]' and 'The **woman** worked as a [MASK]'.
An example of the output for this model:
```
1810 [DATE] The man worked as a [MASK].
```
Produces the following three top predicted mask tokens
```python
[
{
"score": 0.17358914017677307,
"token": 10533,
"token_str": "carpenter",
},
{
"score": 0.08387620747089386,
"token": 22701,
"token_str": "tailor",
},
{
"score": 0.068501777946949,
"token": 6243,
"token_str": "baker",
}
]
```
```
1810 [DATE] The woman worked as a [MASK].
```
Produces the following three top predicted mask tokens
```python
[
{
"score": 0.148710235953331,
"token": 7947,
"token_str": "servant",
},
{
"score": 0.07184035331010818,
"token": 6243,
"token_str": "baker",
},
{
"score": 0.0675836056470871,
"token": 6821,
"token_str": "nurse",
},
]
```
Mostly, prompt evaluation is done to assess the bias in *contemporary* language models. In the case of historic language models, the bias exhibited by a model *may* be a valuable research tool in assessing (at scale) language use over time, and the stereotypes and prejudices encoded in text corpora.
For this particular prompt, the 'bias' exhibited by the language model (and the underlying data) may be a relatively accurate reflection of employment patterns during the 19th century. A possible area of exploration is to see how these predictions change when the model is prompted with different dates. With a dataset covering a more extended time period, we may expect to see a decline in the [MASK] `servant` toward the end of the 19th Century and particularly following the start of the First World War when the number of domestic servants employed in the United Kingdom fell rapidly.
### Training Routine
We created various ERWT models as part of a wider experiment that aimed to establish best practices and guidelines for training models with metadata. An overview of all the models is available on our [GitHub](https://github.com/Living-with-machines/ERWT/) page.
To reduce training time, we based our experiments on a random subsample of the HMD corpus, consisting of half a billion tokens.
Furthermore, we only trained the models for one epoch, which implies they are most likely **undertrained** at the moment.
We were mainly interested in the **relative** performance of the different ERWT models. We did, however, compare ERWT with [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) in our evaluation experiments. And, of course, our tiny LM peas did much better. 🎉🥳
Want to know the details, oh critical reader? Then consult and cite [our working paper](https://arxiv.org/abs/2211.10086)!
## Data Description
The ERWT models are trained on an openly accessible newspaper corpus created by the [Heritage Made Digital (HMD) newspaper digitisation project](https://blogs.bl.uk/thenewsroom/2019/01/heritage-made-digital-the-newspapers.html).
The HMD newspapers comprise around 2 billion words in total, but the bulk of the articles originate from the (then) liberal paper *The Sun*.
Geographically, most papers are metropolitan (i.e. based in London). The inclusion of *The Northern Daily Times* and the *Liverpool Standard* adds some geographical diversity to this corpus. The political classification is based on historical newspaper press directories; please read [our paper](https://academic.oup.com/dsh/advance-article/doi/10.1093/llc/fqac037/6644524?searchresult=1) on bias in newspaper collections for more information.
The table below contains a more detailed overview of the corpus.
| NLP | Title | Politics | Location | Tokens |
|------|--------------------------|--------------|-----------|---------------|
| 2083 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 14,094,212 |
| 2084 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 34,450,366 |
| 2085 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 16,166,627 |
| 2088 | The Liverpool Standard | CONSERVATIVE | LIVERPOOL | 149,204,800 |
| 2090 | The Liverpool Standard | CONSERVATIVE | LIVERPOOL | 6,417,320 |
| 2194 | The Sun | LIBERAL | LONDON | 1,155,791,480 |
| 2244 | Colored News | NONE | LONDON | 53,634 |
| 2642 | The Express | LIBERAL | LONDON | 236,240,555 |
| 2644 | National Register | CONSERVATIVE | LONDON | 23,409,733 |
| 2645 | The Press | CONSERVATIVE | LONDON | 15,702,276 |
| 2646 | The Star | NONE | LONDON | 163,072,742 |
| 2647 | The Statesman | RADICAL | LONDON | 61,225,215 |
Temporally, most of the articles date from the second half of the nineteenth century. The figure below gives an overview of the number of articles by year.

## Evaluation: 🤓 In case you care to count 🤓
Our article ["Metadata Might Make Language Models Better"](https://arxiv.org/abs/2211.10086) comprises an extensive evaluation of all the MDMA-infused language models.
The table below shows the [pseudo-perplexity](https://arxiv.org/abs/1910.14659) scores for different models based on text documents of 64 and 128 tokens.
In general, [ERWT-year-masked-25](https://huggingface.co/Livingwithmachines/erwt-year-masked-25) turned out to yield the most competitive scores across different tasks and we generally recommend you use this model.
| text length | 64 | | 128 | |
|------------------|----------------|--------|----------------|--------|
| model | mean | sd | mean | sd |
| DistilBERT | 354.40 | 376.32 | 229.19 | 294.70 |
| HMDistilBERT | 32.94 | 64.78 | 25.72 | 45.99 |
| ERWT-year | 31.49 | 61.85 | 24.97 | 44.58 |
| ERWT-st | 31.69 | 62.42 | 25.03 | 44.74 |
| ERWT-year-masked-25 | **30.97** | 61.50 | **24.59** | 44.36 |
| ERWT-year-masked-75 | 31.02 | 61.41 | 24.63 | 44.40 |
| PEA | 31.63 | 62.09 | 25.58 | 44.99 |
| PEA-st | 31.65 | 62.19 | 25.59 | 44.99 |
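As a point of reference, pseudo-perplexity can be approximated by masking one token at a time and averaging the negative log-likelihood of the true tokens. The sketch below illustrates the metric (it is not the exact evaluation code used in the paper):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "Livingwithmachines/erwt-year-masked-25"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def pseudo_perplexity(text: str) -> float:
    """Mask each token in turn and exponentiate the mean negative log-likelihood."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    nlls = []
    for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        nlls.append(-log_probs[input_ids[i]].item())
    return float(torch.exp(torch.tensor(sum(nlls) / len(nlls))))

print(pseudo_perplexity("1870 [DATE] Every scrap of intelligence relative to the war is read with interest."))
```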
## Questions?
Questions? Feedback? Please leave a message!
|
Dongmin/testmodel | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 11 | null | ---
language: en
tags:
- newspapers
- library
- historic
- glam
- mdma
license: mit
metrics:
- pseudo-perplexity
widget:
- text: "1820 [DATE] We received a letter from [MASK] Majesty."
- text: "1850 [DATE] We received a letter from [MASK] Majesty."
- text: "[MASK] [DATE] The Franco-Prussian war is a matter of great concern."
- text: "[MASK] [DATE] The Schleswig war is a matter of great concern."
---
**MODEL CARD UNDER CONSTRUCTION, ETA END OF NOVEMBER**
<img src="https://upload.wikimedia.org/wikipedia/commons/5/5b/NCI_peas_in_pod.jpg" alt="erwt" width="200" >
# ERWT-year-masked-25
🌺ERWT\* a language model that (🤭 maybe 🤫) knows more about history than you...🌺
ERWT is a fine-tuned [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model trained on historical newspapers from the [Heritage Made Digital collection](https://huggingface.co/datasets/davanstrien/hmd-erwt-training).
We trained a model based on a combination of text and **temporal metadata** (i.e. year information).
ERWT performs [**time-sensitive masked language modelling**](#historical-language-change-herhis-majesty-%F0%9F%91%91) or [**date prediction**](#date-prediction-pub-quiz-with-lms-%F0%9F%8D%BB).
This model is served by [Kaspar von Beelen](https://huggingface.co/Kaspar) and [Daniel van Strien](https://huggingface.co/davanstrien), *"Improving AI, one pea at a time"*.
If these models happen to be useful, please cite our working paper.
```
@misc{https://doi.org/10.48550/arxiv.2211.10086,
doi = {10.48550/ARXIV.2211.10086},
url = {https://arxiv.org/abs/2211.10086},
author = {Beelen, Kaspar and van Strien, Daniel},
keywords = {Computation and Language (cs.CL), Digital Libraries (cs.DL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Metadata Might Make Language Models Better},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}}
```
\*ERWT is dutch for PEA.
# Overview
- [Introduction: Repent Now 😇](#introductory-note-repent-now-%F0%9F%98%87)
- [Background: MDMA to the rescue 🙂](#background-mdma-to-the-rescue-%F0%9F%99%82)
- [Intended Use: LMs as History Machines 🚂](#intended-use-lms-as-history-machines)
- [Historical Language Change: Her/His Majesty? 👑](#historical-language-change-herhis-majesty-%F0%9F%91%91)
- [Date Prediction: Pub Quiz with LMs 🍻](#date-prediction-pub-quiz-with-lms-%F0%9F%8D%BB)
- [Limitations: Not all is well 😮](#limitations-not-all-is-well-%F0%9F%98%AE)
- [Training Data](#training-data)
- [Training Routine](#training-routine)
- [Data Description](#data-description)
- [Evaluation: 🤓 In case you care to count 🤓](#evaluation-%F0%9F%A4%93-in-case-you-care-to-count-%F0%9F%A4%93)
## Introductory Note: Repent Now. 😇
The ERWT models are trained for **experimental purposes**.
Please consult the [**limitations**](#limitations-not-all-is-well-%F0%9F%98%AE) section before using the models. (Seriously, read this section, **we don't repent in public just for fun**.)
If you can't get enough of these neural peas and crave some more, you can consult our working paper ["Metadata Might Make Language Models Better"](https://arxiv.org/abs/2211.10086) for more background information and nerdy evaluation stuff (work in progress, handle with care and kindness).
## Background: MDMA to the rescue. 🙂
ERWT was created using a **M**eta**D**ata **M**asking **A**pproach (or **MDMA** 💊), a scenario in which we train a Masked Language Model (MLM) on text and metadata simultaneously. Our intuition was that incorporating metadata (information that *describes* a text but is not part of the content) may make language models "better", or at least make them more **sensitive** to historical, political and geographical aspects of language use. We mainly use temporal, political and geographical metadata.
ERWT is a [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) model, fine-tuned on a random subsample taken from the [Heritage Made Digital newspaper collection](https://huggingface.co/datasets/davanstrien/hmd-erwt-training). The training data comprises around half a billion words.
To unleash the power of MDMA, we adapted the training routine, mainly by fiddling with the input data.
When preprocessing the text, we prepended each segment of a hundred words with a time stamp (year of publication) and a special `[DATE]` token.
The snippet below, taken from the [Londonderry Sentinel](https://www.britishnewspaperarchive.co.uk/viewer/bl/0001480/18700722/014/0002)...
```
Every scrap of intelligence relative to the war between France and Prussia is now read with interest.
```
... would be formatted as:
```python
"1870 [DATE] Every scrap of intelligence relative to the war between France and Prussia is now read with interest."
```
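In code, this chunk-and-prepend step might look roughly like the sketch below (the exact chunking logic is an assumption for illustration; in the actual setup `[DATE]` is registered as a special token in the tokenizer):
```python
def time_stamp_chunks(article_text: str, year: int, chunk_size: int = 100):
    """Split an article into chunks of up to `chunk_size` words and prepend the year and [DATE]."""
    words = article_text.split()
    for start in range(0, len(words), chunk_size):
        chunk = " ".join(words[start:start + chunk_size])
        yield f"{year} [DATE] {chunk}"

for segment in time_stamp_chunks(
    "Every scrap of intelligence relative to the war between France and Prussia is now read with interest.",
    year=1870,
):
    print(segment)
```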
These text chunks are then forwarded to the data collator, where we mask the year token 25% of the time (hence the '-masked-25' suffix).
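A minimal sketch of what such a collator could look like, piggybacking on the standard MLM collator (the position of the year token and the exact masking schedule are assumptions for illustration, not the authors' exact code):
```python
import torch
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
mlm_collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

def collate_with_year_masking(examples, year_mask_prob: float = 0.25):
    # With the "1870 [DATE] ..." formatting, the year sits at position 1, right after [CLS].
    original_years = torch.tensor([example["input_ids"][1] for example in examples])
    batch = mlm_collator(examples)
    mask_year = torch.rand(len(examples)) < year_mask_prob
    batch["input_ids"][mask_year, 1] = tokenizer.mask_token_id
    batch["labels"][mask_year, 1] = original_years[mask_year]
    return batch
```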
Exposed to the tokens and (temporal) metadata, the model learns a relation between text and time. When a text token is hidden, the prepended `year` field influences the prediction of the masked words. Vice versa, when the prepended metadata is hidden, the model predicts the year of publication based on the content.
## Intended Use: LMs as History Machines.
Exposing the model to temporal metadata allows us to investigate **historical language change** and perform **date prediction**.
### Historical Language Change: Her/His Majesty? 👑
Let's show how ERWT works with a very concrete example.
The ERWT models are trained on a handful of British newspapers published between 1800 and 1870. It can be used to monitor historical change in this specific context.
Imagine you are confronted with the following snippet: "We received a letter from [MASK] Majesty" and want to predict the correct pronoun for the masked token (again assuming a British context).
👩🏫 **History Intermezzo** Please remember, for most of the nineteenth century, Queen Victoria ruled Britain, from 1837 to 1901 to be precise. Her nineteenth-century predecessors (George III, George IV and William IV) were all male.
While a standard language model will provide you with one general prediction (based on what it has observed during training), ERWT allows you to manipulate the prediction by anchoring the text in a specific year.
Doing this requires just a few lines of code:
```python
from transformers import pipeline
mask_filler = pipeline(
    "fill-mask", model="Livingwithmachines/erwt-year-masked-25"
)
mask_filler("1820 [DATE] We received a letter from [MASK] Majesty.")
```
This returns "his" as the most likely filler:
```python
{'score': 0.8096420168876648,
'token': 2010,
'token_str': 'his',
'sequence': '1820 we received a letter from his majesty.'}
```
However, if we change the date at the start of the sentence to 1850:
```python
mask_filler(f"1850 [DATE] We received a letter from [MASK] Majesty.")
```
ERWT puts most of the probability mass on the token "her" and only a little bit on "his".
```python
{'score': 0.7587488293647766,
'token': 2014,
'token_str': 'her',
 'sequence': '1850 we received a letter from her majesty.'}
```
You can repeat this experiment for yourself using the example sentences in the **Hosted inference API** at the top right.
Okay, but why is this **interesting**?
Firstly, eyeballing some toy examples (but also using more rigorous metrics such as [perplexity](#evaluation-%F0%9F%A4%93-in-case-you-care-to-count-%F0%9F%A4%93)) shows that MLMs yield more accurate predictions when they have access to temporal metadata.
In other words, **ERWT models are better at capturing historical context.**
Secondly, MDMA may **reduce biases** that arise from imbalanced training data (or at least give us more of a handle on this problem). Admittedly, we have to prove this more formally, but some experiments at least hint in this direction.
### Date Prediction: Pub Quiz with LMs 🍻
Another feature of ERWT is **date prediction**. Remember that during training the temporal metadata token is regularly masked. In this case, the model effectively learns to situate documents in time based on the tokens in a text.
By masking the year token at the beginning of the text string, ERWT guesses the document's year of publication.
👩🏫 **History Intermezzo** To unite the German states (there used to be [plenty](https://www.britannica.com/topic/German-Confederation)!), Prussia fought a number of wars with its neighbours in the second half of the nineteenth century. It invaded Denmark in 1864 (the second of the Schleswig Wars) and France in 1870 (the Franco-Prussian war).
Reusing the code above, we can time-stamp documents by masking the year. For example, the line of Python code below:
```python
mask_filler("[MASK] [DATE] The Schleswig war is a matter of great concern.")
```
Outputs as most likely filler:
```python
{'score': 0.48822104930877686,
'token': 6717,
'token_str': '1864',
'sequence': '1864 the schleswig war is a matter of great concern.'}
```
The prediction "1864" makes sense; this was indeed the year of Prussian troops (with some help of their Austrian friends) crossed the border into Schleswig, then part of the Kingdom of Denmark.
A few years later, in 1870, Prussia aimed its artillery and bayonets southwards and invaded France.
```python
mask_filler("[MASK] [DATE] The Franco-Prussian war is a matter of great concern.")
```
ERWT clearly learned a lot about the history of German unification by ploughing through a plethora of nineteenth-century newspaper articles: it correctly returns "1870" as the predicted year for the Franco-Prussian war!
Again, we have to ask: Who cares? Wikipedia can tell us pretty much the same. More importantly, don't we already have timestamps for newspaper data?
In both cases, our answer is "yes, but...". ERWT's time-stamping powers have little instrumental use and won't make us rich (but donations are of course welcome 🤑). Nonetheless, we believe date prediction has value for research purposes. We can use ERWT for "fictitious" prediction, i.e. as a diagnostic tool.
Firstly, we used date prediction for evaluation purposes, to measure which training routine produces models that best capture the year of publication from a set of tokens.
Secondly, we could use date prediction as an analytical or research tool, and study, for example, temporal variation **within** text documents; or scrutinise which features drive the time prediction (it goes without saying that the same applies to other metadata fields, like political orientation).
## Limitations: Not all is well 😮.
The ERWT series were trained for evaluation purposes and therefore carry some critical limitations.
### Training Data
Many of the limitations are a direct result of the training data. ERWT models are trained on a rather small subsample of nineteenth-century **British newspapers**, and their predictions have to be understood in this context (remember "Her Majesty"?). The corpus has a strong **metropolitan and liberal bias** (see the section on Data Description for more information).
The training data spans from **1800 to 1870**. If your research interest is outside of this period, it's unlikely that ERWT will be of much use. Don't ask the poor model to predict when the Second World War happened. ERWT can be smart (at times) but it doesn't have the power of fortune-telling. At least not yet...
Furthermore, historical models tend to reflect past (and present?) stereotypes and prejudices. We strongly advise against using these models outside of a research context. The predictions are likely to exhibit harmful biases; they should be investigated critically and understood within the context of nineteenth-century British cultural history.
One way of evaluating a model's bias is to gauge the impact of changing a prompt on the predicted [MASK] token. Often a comparison is made between the predictions given for 'The **man** worked as a [MASK]' and 'The **woman** worked as a [MASK]'.
An example of the output for this model:
```
1810 [DATE] The man worked as a [MASK].
```
Produces the following three top predicted mask tokens
```python
[
{
'score': 0.15719665586948395,
'token': 10533,
'token_str': 'carpenter',
},
{
'score': 0.09576332569122314,
'token': 6243,
'token_str': 'baker',
},
{
'score': 0.08851779252290726,
'token': 22701,
'token_str': 'tailor',
}
]
```
```
1810 [DATE] The woman worked as a [MASK].
```
Produces the following three top predicted mask tokens
```python
[
{
'score': 0.1492135375738144,
'token': 7947,
'token_str': 'servant',
},
{
'score': 0.09587471932172775,
'token': 6243,
'token_str': 'baker',
},
{
'score': 0.06408561021089554,
'token': 10533,
'token_str': 'carpenter',
}
]
```
Mostly, prompt evaluation is done to assess the bias in *contemporary* language models. In the case of historic language models, the bias exhibited by a model *may* be a valuable research tool in assessing (at scale) language use over time, and the stereotypes and prejudices encoded in text corpora.
For this particular prompt, the 'bias' exhibited by the language model (and the underlying data) may be a relatively accurate reflection of employment patterns during the 19th century. A possible area of exploration is to see how these predictions change when the model is prompted with different dates. With a dataset covering a more extended time period, we may expect to see a decline in `servant` as the predicted [MASK] token toward the end of the 19th century, particularly following the start of the First World War, when the number of domestic servants employed in the United Kingdom fell rapidly.
### Training Routine
We created various ERWT models as part of a wider experiment that aimed to establish best practices and guidelines for training models with metadata. An overview of all the models is available on our [GitHub](https://github.com/Living-with-machines/ERWT/) page.
To reduce training time, we based our experiments on a random subsample of the HMD corpus, consisting of half a billion tokens.
Furthermore, we only trained the models for one epoch, which implies they are most likely **undertrained** at the moment.
We were mainly interested in the **relative** performance of the different ERWT models. We did, however, compare ERWT with [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased) in our evaluation experiments. And, of course, our tiny LM peas did much better. 🎉🥳
Want to know the details, oh critical reader? Then consult and cite [our working paper](https://arxiv.org/abs/2211.10086)!
## Data Description
The ERWT models are trained on an openly accessible newspaper corpus created by the [Heritage Made Digital (HMD) newspaper digitisation project](https://blogs.bl.uk/thenewsroom/2019/01/heritage-made-digital-the-newspapers.html).
The HMD newspapers comprise around 2 billion words in total, but the bulk of the articles originate from the (then) liberal paper *The Sun*.
Geographically, most papers are metropolitan (i.e. based in London). The inclusion of *The Northern Daily Times* and the *Liverpool Standard* adds some geographical diversity to this corpus. The political classification is based on historical newspaper press directories; please read [our paper](https://academic.oup.com/dsh/advance-article/doi/10.1093/llc/fqac037/6644524?searchresult=1) on bias in newspaper collections for more information.
The table below contains a more detailed overview of the corpus.
| NLP | Title | Politics | Location | Tokens |
|------|--------------------------|--------------|-----------|---------------|
| 2083 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 14,094,212 |
| 2084 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 34,450,366 |
| 2085 | The Northern Daily Times | NEUTRAL | LIVERPOOL | 16,166,627 |
| 2088 | The Liverpool Standard | CONSERVATIVE | LIVERPOOL | 149,204,800 |
| 2090 | The Liverpool Standard | CONSERVATIVE | LIVERPOOL | 6,417,320 |
| 2194 | The Sun | LIBERAL | LONDON | 1,155,791,480 |
| 2244 | Colored News | NONE | LONDON | 53,634 |
| 2642 | The Express | LIBERAL | LONDON | 236,240,555 |
| 2644 | National Register | CONSERVATIVE | LONDON | 23,409,733 |
| 2645 | The Press | CONSERVATIVE | LONDON | 15,702,276 |
| 2646 | The Star | NONE | LONDON | 163,072,742 |
| 2647 | The Statesman | RADICAL | LONDON | 61,225,215 |
Temporally, most of the articles date from the second half of the nineteenth century. The figure below gives an overview of the number of articles by year.

## Evaluation: 🤓 In case you care to count 🤓
Our article ["Metadata Might Make Language Models Better"](https://arxiv.org/abs/2211.10086) comprises an extensive evaluation of all the MDMA-infused language models.
The table below shows the [pseudo-perplexity](https://arxiv.org/abs/1910.14659) scores for different models based on text documents of 64 and 128 tokens.
In general, this model, [ERWT-year-masked-25](https://huggingface.co/Livingwithmachines/erwt-year-masked-25), turned out to yield the most competitive scores across different tasks (yay!) and we generally recommend you use this model.
| text length | 64 | | 128 | |
|------------------|----------------|--------|----------------|--------|
| model | mean | sd | mean | sd |
| DistilBERT | 354.40 | 376.32 | 229.19 | 294.70 |
| HMDistilBERT | 32.94 | 64.78 | 25.72 | 45.99 |
| ERWT-year | 31.49 | 61.85 | 24.97 | 44.58 |
| ERWT-st | 31.69 | 62.42 | 25.03 | 44.74 |
| ERWT-year-masked-25 | **30.97** | 61.50 | **24.59** | 44.36 |
| ERWT-year-masked-75 | 31.02 | 61.41 | 24.63 | 44.40 |
| PEA | 31.63 | 62.09 | 25.58 | 44.99 |
| PEA-st | 31.65 | 62.19 | 25.59 | 44.99 |
## Questions?
Questions? Feedback? Please leave a message!
|
Waynehillsdev/Waynehills_summary_tensorflow | [
"tf",
"t5",
"text2text-generation",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bert-base-german-cased-issues-128-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-german-cased-issues-128-finetuned
This model is a fine-tuned version of [ogimgio/bert-base-german-cased-issues-128](https://huggingface.co/ogimgio/bert-base-german-cased-issues-128) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3858
- Micro f1: 0.6157
- Macro f1: 0.5597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Micro f1 | Macro f1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| 0.4741 | 1.0 | 102 | 0.4254 | 0.5535 | 0.4051 |
| 0.3799 | 2.0 | 204 | 0.3858 | 0.6157 | 0.5597 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Doohae/q_encoder | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
tags:
- generated_from_trainer
datasets:
- rvl_cdip
metrics:
- accuracy
model-index:
- name: invoicevsadvertisement
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: rvl_cdip
type: rvl_cdip
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9892257579553997
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# invoicevsadvertisement
This model is a fine-tuned version of [microsoft/dit-base-finetuned-rvlcdip](https://huggingface.co/microsoft/dit-base-finetuned-rvlcdip) on the rvl_cdip dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0292
- Accuracy: 0.9892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 192
- eval_batch_size: 192
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 768
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
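For reference, the settings above map roughly onto the Hugging Face `TrainingArguments` shown below (a sketch; `output_dir` and the rest of the training script are placeholders, not the original code):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="invoicevsadvertisement",
    learning_rate=5e-5,
    per_device_train_batch_size=192,
    per_device_eval_batch_size=192,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=5,
)
```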
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4353 | 0.98 | 41 | 0.0758 | 0.9837 |
| 0.0542 | 1.98 | 82 | 0.0359 | 0.9860 |
| 0.0349 | 2.98 | 123 | 0.0336 | 0.9867 |
| 0.0323 | 3.98 | 164 | 0.0304 | 0.9876 |
| 0.0288 | 4.98 | 205 | 0.0292 | 0.9892 |
### Framework versions
- Transformers 4.21.3
- Pytorch 1.12.1
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Doohae/roberta | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- en
license: creativeml-openrail-m
tags:
- stable-diffusion
- text-to-image
---
# finetuned on dark, moody, "victorian" imagery (ノ◕ヮ◕)ノ*:・゚✧
[<img src="https://colab.research.google.com/assets/colab-badge.svg">](https://colab.research.google.com/drive/13E3i6_Z1BWd3e6f71-TNd5bk8eGqaeZf?usp=sharing)

v1 was trained on SD 1.4, v2 on SD 1.5. check the pdf for examples with different prompts & settings. comparisons.zip has steps vs cfg scale x/y plots for euler_a and lms.
use the tokens "darkvictorian artstyle" in your prompt to use the style.
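a rough `diffusers` sketch of using the token, assuming a diffusers-compatible copy of the weights is available under this repo id (the release may only ship checkpoint/LoRA files, in which case they need converting first):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "proxima/darkvictorian_artstyle", torch_dtype=torch.float16
).to("cuda")

image = pipe("a candlelit study, gloomy interior, darkvictorian artstyle").images[0]
image.save("darkvictorian_sample.png")
```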
## random samples:

---
## update: added a LoRA version
→ [darkvictorian.pt](https://huggingface.co/proxima/darkvictorian_artstyle/blob/main/darkvictorian.pt)
check this [blog entry](https://proximacentaurib.xyz/lora/darkvictorian-lora/) for sample images & comparisons
---
<a href='https://ko-fi.com/S6S6FUYKY' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi3.png?v=3' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
albert-base-v2 | [
"pytorch",
"tf",
"jax",
"rust",
"safetensors",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,785,283 | 2022-11-16T16:27:33Z | ---
license: mit
---
### abstract_patterns_in_nature on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### model by apurik-parv
This is the Stable Diffusion model fine-tuned on the abstract_patterns_in_nature concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **abnapa**
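A minimal inference sketch with `diffusers`; the repo id below is a hypothetical placeholder, since this card does not state where the fine-tuned weights are hosted:
```python
import torch
from diffusers import StableDiffusionPipeline

model_id = "sd-dreambooth-library/abstract-patterns-in-nature"  # hypothetical repo id
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

image = pipe("a close-up photograph of frost forming on a window, abnapa").images[0]
image.save("abnapa_sample.png")
```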
#### This is an attempt to teach symmetry and its scales to the Stable Diffusion model. This first version was trained on abstract patterns from nature, and it produces markedly different images from the original model: sometimes better, sometimes not so much better, or even worse. At times it even seems to correct the lighting and shadows. Users, please give your comments after usage so that we can really understand what this model does.
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) |
albert-large-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 687 | 2022-11-16T16:28:58Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: bert_uncased_L-2_H-128_A-2-finetuned-emotion-finetuned-tweet
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.87168
- name: F1
type: f1
value: 0.8716747437975058
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-2_H-128_A-2-finetuned-emotion-finetuned-tweet
This model is a fine-tuned version of [muhtasham/bert_uncased_L-2_H-128_A-2-finetuned-emotion](https://huggingface.co/muhtasham/bert_uncased_L-2_H-128_A-2-finetuned-emotion) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4004
- Accuracy: 0.8717
- F1: 0.8717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4751 | 1.28 | 500 | 0.3880 | 0.828 | 0.8277 |
| 0.3453 | 2.56 | 1000 | 0.3282 | 0.8608 | 0.8607 |
| 0.2973 | 3.84 | 1500 | 0.3140 | 0.8695 | 0.8695 |
| 0.26 | 5.12 | 2000 | 0.3154 | 0.8736 | 0.8735 |
| 0.2218 | 6.39 | 2500 | 0.3144 | 0.8756 | 0.8756 |
| 0.1977 | 7.67 | 3000 | 0.3197 | 0.876 | 0.8760 |
| 0.1656 | 8.95 | 3500 | 0.3526 | 0.8737 | 0.8735 |
| 0.1404 | 10.23 | 4000 | 0.3865 | 0.8691 | 0.8689 |
| 0.121 | 11.51 | 4500 | 0.4004 | 0.8717 | 0.8717 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
albert-xlarge-v1 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 341 | 2022-11-16T16:41:41Z | data: https://github.com/BigSalmon2/InformalToFormalDataset
Text Generation Informal Formal
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln90Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln90Paraphrase")
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
max_length=10 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
do_sample=True,
num_return_sequences=5,
early_stopping=True)
for i in range(5):
print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput
myinput= myinput.to(device)
logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
print(best_words)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
Infill / Infilling / Masking / Phrase Masking (Works pretty decently actually, especially when you use logprobs code from above):
```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
the joy of sport is that no two games are alike. for every exhilarating experience, however, there is an interminable one. the national pastime, unfortunately, has a penchant for the latter. what begins as a summer evening at the ballpark can quickly devolve into a game of tedium. the primary culprit is the [blank] of play. from batters readjusting their gloves to fielders spitting on their mitts, the action is [blank] unnecessary interruptions. the sport's future is [blank] if these tendencies are not addressed [sep] plodding pace [answer] riddled with [answer] bleak [answer]
***
microsoft word's [blank] pricing [blank] competition [sep] unconscionable [answer] invites [answer]
***
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
Backwards
```
Essay Intro (National Parks):
text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ).
***
Essay Intro (D.C. Statehood):
washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ).
```
```
topic: the Golden State Warriors.
characterization 1: the reigning kings of the NBA.
characterization 2: possessed of a remarkable cohesion.
characterization 3: helmed by superstar Stephen Curry.
characterization 4: perched atop the league’s hierarchy.
characterization 5: boasting a litany of hall-of-famers.
***
topic: emojis.
characterization 1: shorthand for a digital generation.
characterization 2: more versatile than words.
characterization 3: the latest frontier in language.
characterization 4: a form of self-expression.
characterization 5: quintessentially millennial.
characterization 6: reflective of a tech-centric world.
***
topic:
```
```
regular: illinois went against the census' population-loss prediction by getting more residents.
VBG: defying the census' prediction of population loss, illinois experienced growth.
***
regular: microsoft word’s high pricing increases the likelihood of competition.
VBG: extortionately priced, microsoft word is inviting competition.
***
regular:
```
```
source: badminton should be more popular in the US.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more
text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing.
***
source: movies in theaters should be free.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money
text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay.
***
source:
```
```
in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure.
***
the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule.
***
the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement.
***
```
```
it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise.
question: what does “do likewise” mean in the above context?
(a) make the same journey
(b) share in the promise of the american dream
(c) start anew in the land of opportunity
(d) make landfall on the united states
***
in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure.
question: what does “this orientation” mean in the above context?
(a) visible business practices
(b) candor with the public
(c) open, honest communication
(d) culture of accountability
```
```
example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot.
text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities.
***
example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear.
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student.
```
```
<Prefix> the atlanta hawks may attribute <Prefix> <Suffix> trae young <Suffix> <Middle> their robust season to <Middle>
***
<Prefix> the nobel prize in literature <Prefix> <Suffix> honor <Suffix> <Middle> is a singularly prestigious <Middle>
```
```
accustomed to having its name uttered ______, harvard university is weathering a rare spell of reputational tumult
(a) in reverential tones
(b) with great affection
(c) in adulatory fashion
(d) in glowing terms
```
```
clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ).
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
*Note* Of all the masking techniques, this one works the best.
```
<Prefix> the atlanta hawks may attribute <Prefix> <Suffix> trae young <Suffix> <Middle> their robust season to <Middle>
***
<Prefix> the nobel prize in literature <Prefix> <Suffix> honor <Suffix> <Middle> is a singularly prestigious <Middle>
```
```
essence: when someone's views are keeping within reasonable.
refine: the senator's voting record is ( moderate / centrist / pragmatic / balanced / fair-minded / even-handed ).
***
essence: when things are worked through in a petty way.
refine: the propensity of the u.s. congress to settle every dispute by way of ( mudslinging / bickering / demagoguery / name-calling / finger-pointing / vilification ) is appalling.
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
```
music before bedtime [makes for being able to relax] -> is a recipe for relaxation.
```
```
[people wanting entertainment love traveling new york city] -> travelers flock to new york city in droves, drawn to its iconic entertainment scene. [cannot blame them] -> one cannot fault them [broadway so fun] -> when it is home to such thrilling fare as Broadway.
```
```
in their ( ‖ when you are rushing because you want to get there on time ‖ / haste to arrive punctually / mad dash to be timely ), morning commuters are too rushed to whip up their own meal.
***
politicians prefer to author vague plans rather than ( ‖ when you can make a plan without many unknowns ‖ / actionable policies / concrete solutions ).
```
```
Q: What is whistleblower protection?
A: Whistleblower protection is a form of legal immunity granted to employees who expose the unethical practices of their employer.
Q: Why are whistleblower protections important?
A: Absent whistleblower protections, employees would be deterred from exposing their employer’s wrongdoing for fear of retribution.
Q: Why would an employer engage in retribution?
A: An employer who has acted unethically stands to suffer severe financial and reputational damage were their transgressions to become public. To safeguard themselves from these consequences, they might seek to dissuade employees from exposing their wrongdoing.
```
```
original: the meritocratic nature of crowdfunding [MASK] into their vision's viability.
infill: the meritocratic nature of crowdfunding [gives investors idea of how successful] -> ( offers entrepreneurs a window ) into their vision's viability.
```
```
Leadership | Lecture 17: Worker Morale
What Workers Look for in Companies:
• Benefits
o Tuition reimbursement
o Paid parental leave
o 401K matching
o Profit sharing
o Pension plans
o Free meals
• Social responsibility
o Environmental stewardship
o Charitable contributions
o Diversity
• Work-life balance
o Telecommuting
o Paid holidays and vacation
o Casual dress
• Growth opportunities
• Job security
• Competitive compensation
• Recognition
o Open-door policies
o Whistleblower protection
o Employee-of-the-month awards
o Positive performance reviews
o Bonuses
```
```
description: business
keywords: for-profit, fiduciary duty, monopolistic, bottom line, return on investment, short-term thinking, capital-intensive, self-interested, risk-taking, fiduciary duty, merger, speculation, profiteering, oversight, capitalism, diversification
```
```
3. In this task, you are given a company name and you need to find its industry.
McDonalds -- Restaurant
Facebook -- Social Network
IKEA -- Furniture
American Express -- Credit Services
Nokia -- Telecom
Nintendo -- Entertainment
4. In this task, you are given a Month and you need to convert it to its corresponding season
April -- Spring
December -- Winter
July -- Summer
October -- Fall
February -- Winter
5. In this task, you are given a sentence with a missing word and you need to predict the correct word.
Managers should set an _____ for their employees. -- example
Some people spend more than four _____ in the gym. -- hours
The police were on the _____ of arresting the suspect. -- verge
They were looking for _____ on how to solve the problem. -- guidance
What is the _____ of the coffee? -- price
6. In this task, you are given a paragraph and you need to reorder it to make it logical.
It was first proposed in 1987. The total length of the bridge is 1,828 meters. The idea of a bridge connects Hong Kong to Macau. -- The idea of bridge connecting Hong Kong and Macau was first proposed in 1987. The total length of the bridge is 1,828 meters.
It is a movie about a brave and noble policeman. The film was produced by Americans. They were Kevin Lima and Chris Buck. They are directors. The movie is called Tarzan. -- Produced by Americans Kevin Lima and Chris Buck, Tarzan is a movie about a brave and noble policeman.
It was first discovered in the mountains of India. The active ingredients in this plant can stimulate hair growth. The plant is called "Hair Plus." -- First discovered in the mountains of India, Hair Plus is a plant whose active ingredients can stimulate hair growth.
```
```
trivia: What is the population of South Korea?
response: 51 million.
***
trivia: What is the minimum voting age in the US?
response: 18.
***
trivia: What are the first ten amendments of the US constitution called?
response: Bill of Rights.
``` |
albert-xlarge-v2 | [
"pytorch",
"tf",
"albert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1909.11942",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"AlbertForMaskedLM"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,973 | 2022-11-16T16:43:22Z | ---
license: mit
---
## MODEL BY ShadoWxShinigamI
Use the token mdjrny-shttr at the beginning of your prompt [prompt engineering not required]. If an object doesn't come through, increase its prompt weight to 1.6.
Training - 2500 steps, Batch size 2, 512x512, v1-5 Base, 26 images (52 Flipped)
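As a rough usage sketch (not part of the original instructions), loading this checkpoint with the `diffusers` library might look like the following — the repo id here is an assumption, not taken from the card:

```python
# Illustrative only: the repo id below is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ShadoWxShinigamI/mdjrny-shttr",  # assumed repo id
    torch_dtype=torch.float16,
).to("cuda")

# The trigger token goes at the beginning of the prompt, per the card.
prompt = "mdjrny-shttr portrait of a lion, intricate shattered glass art"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("lion.png")
```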
Examples:-
Lion

Batman

Medusa Head

Deer

Emma Watson

Son Goku

(Mansion:1.6) - Needs the extra weight for the model to actually produce buildings for some reason

|
bert-base-cased-finetuned-mrpc | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11,644 | 2022-11-16T16:55:38Z |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qag_tweetqa
pipeline_tag: text2text-generation
tags:
- questions and answers generation
widget:
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Questions & Answers Generation Example 1"
model-index:
- name: research-backup/t5-small-tweetqa-qag-np
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qag_tweetqa
type: default
args: default
metrics:
- name: BLEU4 (Question & Answer Generation)
type: bleu4_question_answer_generation
value: 10.71
- name: ROUGE-L (Question & Answer Generation)
type: rouge_l_question_answer_generation
value: 34.77
- name: METEOR (Question & Answer Generation)
type: meteor_question_answer_generation
value: 27.8
- name: BERTScore (Question & Answer Generation)
type: bertscore_question_answer_generation
value: 89.48
- name: MoverScore (Question & Answer Generation)
type: moverscore_question_answer_generation
value: 60.53
- name: QAAlignedF1Score-BERTScore (Question & Answer Generation)
type: qa_aligned_f1_score_bertscore_question_answer_generation
value: 90.7
- name: QAAlignedRecall-BERTScore (Question & Answer Generation)
type: qa_aligned_recall_bertscore_question_answer_generation
value: 90.23
- name: QAAlignedPrecision-BERTScore (Question & Answer Generation)
type: qa_aligned_precision_bertscore_question_answer_generation
value: 91.19
- name: QAAlignedF1Score-MoverScore (Question & Answer Generation)
type: qa_aligned_f1_score_moverscore_question_answer_generation
value: 62.94
- name: QAAlignedRecall-MoverScore (Question & Answer Generation)
type: qa_aligned_recall_moverscore_question_answer_generation
value: 61.9
- name: QAAlignedPrecision-MoverScore (Question & Answer Generation)
type: qa_aligned_precision_moverscore_question_answer_generation
value: 64.1
---
# Model Card of `research-backup/t5-small-tweetqa-qag-np`
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) for the question & answer pair generation task on the [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without a task prefix.
### Overview
- **Language model:** [t5-small](https://huggingface.co/t5-small)
- **Language:** en
- **Training data:** [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-small-tweetqa-qag-np")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-small-tweetqa-qag-np")
output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-small-tweetqa-qag-np/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------------|
| BERTScore | 89.48 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_1 | 35.61 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_2 | 23.38 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_3 | 15.73 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_4 | 10.71 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| METEOR | 27.8 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| MoverScore | 60.53 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedF1Score (BERTScore) | 90.7 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedF1Score (MoverScore) | 62.94 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedPrecision (BERTScore) | 91.19 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedPrecision (MoverScore) | 64.1 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedRecall (BERTScore) | 90.23 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedRecall (MoverScore) | 61.9 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| ROUGE_L | 34.77 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_tweetqa
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: None
- model: t5-small
- max_length: 256
- max_length_output: 128
- epoch: 16
- batch: 64
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-small-tweetqa-qag-np/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
bert-base-chinese | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"zh",
"arxiv:1810.04805",
"transformers",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3,377,486 | 2022-11-16T16:57:47Z | ---
license: apache-2.0
language:
- bs
- sv
tags:
- translation
model-index:
- name: mt-bs-sv-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt-bs-sv-finetuned
This model is a fine-tuned version of [oskarandrsson/mt-hr-sv-finetuned](https://huggingface.co/oskarandrsson/mt-hr-sv-finetuned) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.8217
- eval_bleu: 53.9611
- eval_runtime: 601.8995
- eval_samples_per_second: 15.971
- eval_steps_per_second: 3.994
- epoch: 4.0
- step: 14420
## Model description
More information needed
## Intended uses & limitations
More information needed
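A minimal, hypothetical usage sketch with the `transformers` pipeline — the repo id is inferred from the model name and may differ:

```python
from transformers import pipeline

# Assumed hub id for this checkpoint.
translator = pipeline("translation", model="oskarandrsson/mt-bs-sv-finetuned")

result = translator("Dobro jutro, kako ste danas?", max_length=128)
print(result[0]["translation_text"])
```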
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 24
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
bert-base-german-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"exbert",
"license:mit",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 175,983 | 2022-11-16T17:02:45Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: BERiT_2000_custom_architecture_40_epochs_ls_.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERiT_2000_custom_architecture_40_epochs_ls_.1
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
- label_smoothing_factor: 0.1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 15.8251 | 0.19 | 500 | 8.3567 |
| 7.8217 | 0.39 | 1000 | 7.2693 |
| 7.2486 | 0.58 | 1500 | 7.0533 |
| 7.0209 | 0.77 | 2000 | 6.9330 |
| 6.9572 | 0.97 | 2500 | 6.9266 |
| 6.897 | 1.16 | 3000 | 6.7520 |
| 6.8273 | 1.36 | 3500 | 6.6686 |
| 6.7692 | 1.55 | 4000 | 6.5613 |
| 6.7269 | 1.74 | 4500 | 6.5789 |
| 6.6783 | 1.94 | 5000 | 6.5036 |
| 6.6792 | 2.13 | 5500 | 6.5714 |
| 6.6638 | 2.32 | 6000 | 6.4733 |
| 6.5913 | 2.52 | 6500 | 6.4967 |
| 6.634 | 2.71 | 7000 | 6.4530 |
| 6.5831 | 2.9 | 7500 | 6.4763 |
| 6.556 | 3.1 | 8000 | 6.4616 |
| 6.5676 | 3.29 | 8500 | 6.4449 |
| 6.5447 | 3.49 | 9000 | 6.4795 |
| 6.5531 | 3.68 | 9500 | 6.4253 |
| 6.5356 | 3.87 | 10000 | 6.4508 |
| 6.5677 | 4.07 | 10500 | 6.4002 |
| 6.5035 | 4.26 | 11000 | 6.3985 |
| 6.5112 | 4.45 | 11500 | 6.4798 |
| 6.5038 | 4.65 | 12000 | 6.4138 |
| 6.504 | 4.84 | 12500 | 6.4381 |
| 6.4993 | 5.03 | 13000 | 6.4241 |
| 6.4853 | 5.23 | 13500 | 6.4275 |
| 6.4881 | 5.42 | 14000 | 6.3979 |
| 6.4882 | 5.62 | 14500 | 6.4468 |
| 6.4728 | 5.81 | 15000 | 6.4319 |
| 6.4853 | 6.0 | 15500 | 6.4124 |
| 6.4764 | 6.2 | 16000 | 6.4013 |
| 6.4638 | 6.39 | 16500 | 6.3955 |
| 6.4727 | 6.58 | 17000 | 6.4140 |
| 6.4595 | 6.78 | 17500 | 6.4229 |
| 6.4273 | 6.97 | 18000 | 6.3758 |
| 6.4594 | 7.16 | 18500 | 6.3889 |
| 6.4457 | 7.36 | 19000 | 6.3175 |
| 6.4538 | 7.55 | 19500 | 6.3748 |
| 6.453 | 7.75 | 20000 | 6.3782 |
| 6.4662 | 7.94 | 20500 | 6.3953 |
| 6.437 | 8.13 | 21000 | 6.4125 |
| 6.4342 | 8.33 | 21500 | 6.3641 |
| 6.4424 | 8.52 | 22000 | 6.3911 |
| 6.4035 | 8.71 | 22500 | 6.4061 |
| 6.44 | 8.91 | 23000 | 6.3751 |
| 6.4206 | 9.1 | 23500 | 6.4066 |
| 6.4297 | 9.3 | 24000 | 6.3342 |
| 6.408 | 9.49 | 24500 | 6.3508 |
| 6.4026 | 9.68 | 25000 | 6.3609 |
| 6.4262 | 9.88 | 25500 | 6.4123 |
| 6.4311 | 10.07 | 26000 | 6.3867 |
| 6.3788 | 10.26 | 26500 | 6.3724 |
| 6.3943 | 10.46 | 27000 | 6.4175 |
| 6.3867 | 10.65 | 27500 | 6.3834 |
| 6.3868 | 10.84 | 28000 | 6.4073 |
| 6.4119 | 11.04 | 28500 | 6.3267 |
| 6.387 | 11.23 | 29000 | 6.4038 |
| 6.4092 | 11.43 | 29500 | 6.3426 |
| 6.3994 | 11.62 | 30000 | 6.3827 |
| 6.3868 | 11.81 | 30500 | 6.3681 |
| 6.3779 | 12.01 | 31000 | 6.3419 |
| 6.3427 | 12.2 | 31500 | 6.2775 |
| 6.3763 | 12.39 | 32000 | 6.4021 |
| 6.349 | 12.59 | 32500 | 6.3800 |
| 6.3538 | 12.78 | 33000 | 6.3369 |
| 6.3734 | 12.97 | 33500 | 6.3495 |
| 6.3857 | 13.17 | 34000 | 6.3715 |
| 6.3678 | 13.36 | 34500 | 6.3540 |
| 6.3345 | 13.56 | 35000 | 6.3504 |
| 6.3399 | 13.75 | 35500 | 6.3641 |
| 6.3383 | 13.94 | 36000 | 6.3556 |
| 6.3637 | 14.14 | 36500 | 6.2868 |
| 6.3478 | 14.33 | 37000 | 6.3285 |
| 6.3416 | 14.52 | 37500 | 6.3001 |
| 6.3513 | 14.72 | 38000 | 6.3484 |
| 6.3381 | 14.91 | 38500 | 6.3030 |
| 6.3244 | 15.1 | 39000 | 6.3705 |
| 6.3285 | 15.3 | 39500 | 6.3402 |
| 6.3448 | 15.49 | 40000 | 6.2991 |
| 6.3199 | 15.69 | 40500 | 6.3100 |
| 6.3195 | 15.88 | 41000 | 6.3203 |
| 6.3191 | 16.07 | 41500 | 6.3647 |
| 6.3228 | 16.27 | 42000 | 6.3181 |
| 6.3055 | 16.46 | 42500 | 6.2765 |
| 6.2817 | 16.65 | 43000 | 6.3116 |
| 6.3412 | 16.85 | 43500 | 6.3228 |
| 6.3002 | 17.04 | 44000 | 6.3100 |
| 6.289 | 17.23 | 44500 | 6.3308 |
| 6.3087 | 17.43 | 45000 | 6.3143 |
| 6.2945 | 17.62 | 45500 | 6.3081 |
| 6.3012 | 17.82 | 46000 | 6.3268 |
| 6.3008 | 18.01 | 46500 | 6.2792 |
| 6.2712 | 18.2 | 47000 | 6.2773 |
| 6.2488 | 18.4 | 47500 | 6.3212 |
| 6.2927 | 18.59 | 48000 | 6.3141 |
| 6.2865 | 18.78 | 48500 | 6.2275 |
| 6.2472 | 18.98 | 49000 | 6.2689 |
| 6.2617 | 19.17 | 49500 | 6.2390 |
| 6.2627 | 19.36 | 50000 | 6.2496 |
| 6.2916 | 19.56 | 50500 | 6.2473 |
| 6.2514 | 19.75 | 51000 | 6.2867 |
| 6.2501 | 19.95 | 51500 | 6.2353 |
| 6.2242 | 20.14 | 52000 | 6.2676 |
| 6.2465 | 20.33 | 52500 | 6.2274 |
| 6.2677 | 20.53 | 53000 | 6.2365 |
| 6.2534 | 20.72 | 53500 | 6.2161 |
| 6.2185 | 20.91 | 54000 | 6.2284 |
| 6.2171 | 21.11 | 54500 | 6.2475 |
| 6.2322 | 21.3 | 55000 | 6.2339 |
| 6.2359 | 21.49 | 55500 | 6.2272 |
| 6.1961 | 21.69 | 56000 | 6.2844 |
| 6.2254 | 21.88 | 56500 | 6.1721 |
| 6.242 | 22.08 | 57000 | 6.2173 |
| 6.2136 | 22.27 | 57500 | 6.2512 |
| 6.2053 | 22.46 | 58000 | 6.1929 |
| 6.2052 | 22.66 | 58500 | 6.2275 |
| 6.2022 | 22.85 | 59000 | 6.1908 |
| 6.2127 | 23.04 | 59500 | 6.1896 |
| 6.2163 | 23.24 | 60000 | 6.1702 |
| 6.187 | 23.43 | 60500 | 6.2002 |
| 6.2149 | 23.63 | 61000 | 6.2151 |
| 6.1867 | 23.82 | 61500 | 6.1795 |
| 6.1901 | 24.01 | 62000 | 6.1942 |
| 6.1901 | 24.21 | 62500 | 6.1266 |
| 6.1959 | 24.4 | 63000 | 6.1754 |
| 6.2138 | 24.59 | 63500 | 6.1405 |
| 6.1917 | 24.79 | 64000 | 6.1818 |
| 6.204 | 24.98 | 64500 | 6.2095 |
| 6.1947 | 25.17 | 65000 | 6.1928 |
| 6.1793 | 25.37 | 65500 | 6.1076 |
| 6.1571 | 25.56 | 66000 | 6.1632 |
| 6.1595 | 25.76 | 66500 | 6.1803 |
| 6.1632 | 25.95 | 67000 | 6.1188 |
| 6.1756 | 26.14 | 67500 | 6.1569 |
| 6.1947 | 26.34 | 68000 | 6.1281 |
| 6.1451 | 26.53 | 68500 | 6.1972 |
| 6.1418 | 26.72 | 69000 | 6.1418 |
| 6.1657 | 26.92 | 69500 | 6.1514 |
| 6.1751 | 27.11 | 70000 | 6.1513 |
| 6.1529 | 27.3 | 70500 | 6.1319 |
| 6.155 | 27.5 | 71000 | 6.1515 |
| 6.1537 | 27.69 | 71500 | 6.1271 |
| 6.1372 | 27.89 | 72000 | 6.0880 |
| 6.1475 | 28.08 | 72500 | 6.1339 |
| 6.1254 | 28.27 | 73000 | 6.1503 |
| 6.1125 | 28.47 | 73500 | 6.1292 |
| 6.1359 | 28.66 | 74000 | 6.1513 |
| 6.1346 | 28.85 | 74500 | 6.1083 |
| 6.1411 | 29.05 | 75000 | 6.1005 |
| 6.147 | 29.24 | 75500 | 6.1451 |
| 6.1263 | 29.43 | 76000 | 6.1206 |
| 6.1211 | 29.63 | 76500 | 6.1363 |
| 6.0854 | 29.82 | 77000 | 6.1753 |
| 6.132 | 30.02 | 77500 | 6.1525 |
| 6.127 | 30.21 | 78000 | 6.1390 |
| 6.151 | 30.4 | 78500 | 6.1003 |
| 6.1272 | 30.6 | 79000 | 6.1126 |
| 6.0856 | 30.79 | 79500 | 6.1034 |
| 6.0806 | 30.98 | 80000 | 6.0432 |
| 6.1221 | 31.18 | 80500 | 6.1355 |
| 6.0959 | 31.37 | 81000 | 6.1013 |
| 6.1166 | 31.56 | 81500 | 6.1264 |
| 6.1337 | 31.76 | 82000 | 6.1278 |
| 6.0929 | 31.95 | 82500 | 6.0627 |
| 6.099 | 32.15 | 83000 | 6.0922 |
| 6.0934 | 32.34 | 83500 | 6.0839 |
| 6.1255 | 32.53 | 84000 | 6.1186 |
| 6.084 | 32.73 | 84500 | 6.0857 |
| 6.1205 | 32.92 | 85000 | 6.0830 |
| 6.0857 | 33.11 | 85500 | 6.1101 |
| 6.1016 | 33.31 | 86000 | 6.0707 |
| 6.1124 | 33.5 | 86500 | 6.1193 |
| 6.0906 | 33.69 | 87000 | 6.0991 |
| 6.1113 | 33.89 | 87500 | 6.1047 |
| 6.0811 | 34.08 | 88000 | 6.0694 |
| 6.0966 | 34.28 | 88500 | 6.0595 |
| 6.0931 | 34.47 | 89000 | 6.1141 |
| 6.1163 | 34.66 | 89500 | 6.0787 |
| 6.0874 | 34.86 | 90000 | 6.1036 |
| 6.0872 | 35.05 | 90500 | 6.1053 |
| 6.0804 | 35.24 | 91000 | 6.0759 |
| 6.0955 | 35.44 | 91500 | 6.0604 |
| 6.1155 | 35.63 | 92000 | 6.1016 |
| 6.0789 | 35.82 | 92500 | 6.0678 |
| 6.0605 | 36.02 | 93000 | 6.0467 |
| 6.0832 | 36.21 | 93500 | 6.0742 |
| 6.0742 | 36.41 | 94000 | 6.0379 |
| 6.0804 | 36.6 | 94500 | 6.0870 |
| 6.0629 | 36.79 | 95000 | 6.0339 |
| 6.0676 | 36.99 | 95500 | 6.0635 |
| 6.0838 | 37.18 | 96000 | 6.1115 |
| 6.0696 | 37.37 | 96500 | 6.1257 |
| 6.0761 | 37.57 | 97000 | 6.0905 |
| 6.073 | 37.76 | 97500 | 6.0920 |
| 6.059 | 37.96 | 98000 | 6.0508 |
| 6.0628 | 38.15 | 98500 | 6.0818 |
| 6.0383 | 38.34 | 99000 | 6.1400 |
| 6.0753 | 38.54 | 99500 | 6.0810 |
| 6.0641 | 38.73 | 100000 | 6.1257 |
| 6.0648 | 38.92 | 100500 | 6.0536 |
| 6.0545 | 39.12 | 101000 | 6.1098 |
| 6.0643 | 39.31 | 101500 | 6.0630 |
| 6.0521 | 39.5 | 102000 | 6.0639 |
| 6.0633 | 39.7 | 102500 | 6.0430 |
| 6.0464 | 39.89 | 103000 | 6.0606 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
bert-base-german-dbmdz-uncased | [
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"de",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 68,305 | 2022-11-16T17:05:17Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: frases-bertimbau-v0.4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# frases-bertimbau-v0.4
This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4380
- F1: 0.8653
## Model description
More information needed
## Intended uses & limitations
More information needed
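A hypothetical inference sketch with the `transformers` pipeline — the repo id and the meaning of the predicted labels are assumptions, since the card does not state them:

```python
from transformers import pipeline

# Assumed hub id; replace with the actual path of this checkpoint.
classifier = pipeline("text-classification", model="frases-bertimbau-v0.4")

print(classifier("Este produto superou todas as minhas expectativas."))
# e.g. [{'label': 'LABEL_0', 'score': 0.98}] -- label names depend on the training setup
```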
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4919 | 0.67 | 500 | 0.6711 | 0.7934 |
| 0.5458 | 1.34 | 1000 | 0.4631 | 0.8445 |
| 0.4346 | 2.01 | 1500 | 0.3971 | 0.8619 |
| 0.3282 | 2.68 | 2000 | 0.3943 | 0.8647 |
| 0.2738 | 3.35 | 2500 | 0.4260 | 0.8651 |
| 0.2261 | 4.02 | 3000 | 0.4071 | 0.8716 |
| 0.1539 | 4.68 | 3500 | 0.4522 | 0.8687 |
| 0.1354 | 5.35 | 4000 | 0.5319 | 0.8633 |
| 0.1132 | 6.02 | 4500 | 0.5306 | 0.8660 |
| 0.0834 | 6.69 | 5000 | 0.5935 | 0.8633 |
| 0.0756 | 7.36 | 5500 | 0.6532 | 0.8593 |
| 0.0692 | 8.03 | 6000 | 0.6492 | 0.8650 |
| 0.0541 | 8.7 | 6500 | 0.6708 | 0.8648 |
| 0.0451 | 9.37 | 7000 | 0.7084 | 0.8667 |
| 0.046 | 10.04 | 7500 | 0.7482 | 0.8655 |
| 0.0356 | 10.71 | 8000 | 0.7802 | 0.8631 |
| 0.0332 | 11.38 | 8500 | 0.8112 | 0.8623 |
| 0.0282 | 12.05 | 9000 | 0.8070 | 0.8664 |
| 0.0251 | 12.72 | 9500 | 0.8332 | 0.8640 |
| 0.0215 | 13.39 | 10000 | 0.8487 | 0.8678 |
| 0.0203 | 14.06 | 10500 | 0.8883 | 0.8604 |
| 0.0168 | 14.73 | 11000 | 0.8870 | 0.8637 |
| 0.0124 | 15.39 | 11500 | 0.8986 | 0.8678 |
| 0.0137 | 16.06 | 12000 | 0.9093 | 0.8670 |
| 0.0104 | 16.73 | 12500 | 0.9145 | 0.8659 |
| 0.0071 | 17.4 | 13000 | 0.9380 | 0.8676 |
| 0.0076 | 18.07 | 13500 | 0.9496 | 0.8686 |
| 0.0072 | 18.74 | 14000 | 0.9589 | 0.8698 |
| 0.0067 | 19.41 | 14500 | 0.9571 | 0.8687 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0
- Datasets 2.2.1
- Tokenizers 0.13.2
|
bert-base-multilingual-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"mn",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"th",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4,749,504 | 2022-11-16T17:07:36Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Vulvine_Look_v02 on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by LaCambre
This is the Stable Diffusion model fine-tuned on the Vulvine_Look_v02 concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt(s)`: **VulvineLook**
It was trained on the short film "Vulvine, Reine d'Extase" (@vulvine.gobelins).
https://vimeo.com/769104378
Sample pictures of this concept:
VulvineLook
.jpg)
|
bert-base-multilingual-uncased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"multilingual",
"af",
"sq",
"ar",
"an",
"hy",
"ast",
"az",
"ba",
"eu",
"bar",
"be",
"bn",
"inc",
"bs",
"br",
"bg",
"my",
"ca",
"ceb",
"ce",
"zh",
"cv",
"hr",
"cs",
"da",
"nl",
"en",
"et",
"fi",
"fr",
"gl",
"ka",
"de",
"el",
"gu",
"ht",
"he",
"hi",
"hu",
"is",
"io",
"id",
"ga",
"it",
"ja",
"jv",
"kn",
"kk",
"ky",
"ko",
"la",
"lv",
"lt",
"roa",
"nds",
"lm",
"mk",
"mg",
"ms",
"ml",
"mr",
"min",
"ne",
"new",
"nb",
"nn",
"oc",
"fa",
"pms",
"pl",
"pt",
"pa",
"ro",
"ru",
"sco",
"sr",
"scn",
"sk",
"sl",
"aze",
"es",
"su",
"sw",
"sv",
"tl",
"tg",
"ta",
"tt",
"te",
"tr",
"uk",
"ud",
"uz",
"vi",
"vo",
"war",
"cy",
"fry",
"pnb",
"yo",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 328,585 | 2022-11-16T17:15:07Z | ---
language:
- hi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Hi - Sanchit Gandhi
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
args: 'config: hi, split: test'
metrics:
- name: Wer
type: wer
value: 83.4696132596685
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5989
- Wer: 83.4696
## Model description
More information needed
## Intended uses & limitations
More information needed
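As an illustration only (not from the original card), transcribing Hindi audio with the `transformers` pipeline could look like this — the repo id and audio file name are assumptions:

```python
from transformers import pipeline

# Assumed hub id for this fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="sanchit-gandhi/whisper-small-hi",
    chunk_length_s=30,
)

print(asr("hindi_sample.wav")["text"])
```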
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0005 | 15.87 | 1000 | 1.5989 | 83.4696 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
bert-large-cased-whole-word-masking | [
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2,316 | 2022-11-16T17:21:00Z | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
---
#### Realistic Lucy Edgerunners models
Trained on SD1.5
Our beloved Lucy, leaning toward realism!
There are two models. The main one is more accurate to Lucy but transfers styles less readily; the second, '-creative', tends to do really cool style blending with an overall subject feeling (Lucy's hair shape and colors), but her suit is sometimes chaotic.
instance prompt : **romerorzlucy**
You can find cool prompts with associated outputs on my website : **[romerorz.art](https://www.romerorz.art/)**
Sample pictures of the main model (default vae) :

 |
bert-large-cased | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"fill-mask",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 388,769 | null | Access to model lucieackley/setfit-welcome-msg is restricted and you are not in the authorized list. Visit https://huggingface.co/lucieackley/setfit-welcome-msg to ask for access. |
bert-large-uncased-whole-word-masking-finetuned-squad | [
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"question-answering",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 480,510 | 2022-11-16T17:27:48Z | ---
tags:
- stanza
- token-classification
library_name: stanza
language: myv
license: apache-2.0
---
# Stanza model for Erzya (myv)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2023-05-19 04:12:55.870
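A brief usage sketch with the `stanza` package (illustrative; it follows the standard Stanza API rather than anything specific to this card):

```python
import stanza

# Download the Erzya (myv) models once, then build a pipeline.
stanza.download("myv")
nlp = stanza.Pipeline("myv")

doc = nlp("Шумбрат!")  # a short Erzya greeting
for sentence in doc.sentences:
    for word in sentence.words:
        print(word.text, word.upos, word.lemma)
```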
|
camembert-base | [
"pytorch",
"tf",
"safetensors",
"camembert",
"fill-mask",
"fr",
"dataset:oscar",
"arxiv:1911.03894",
"transformers",
"license:mit",
"autotrain_compatible",
"has_space"
]
| fill-mask | {
"architectures": [
"CamembertForMaskedLM"
],
"model_type": "camembert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1,440,898 | 2022-11-16T17:38:00Z | ---
license: mit
datasets: Den4ikAI/mailruQA-big
widget:
- text: "Q: Что такое любовь?\n A:"
example_title: test
---
Checkpoints of the rugpt3-medium model trained on data from otvet.mail.ru will be posted here.
Training dataset: [link](https://huggingface.co/datasets/Den4ikAI/mailruQA-big)
Mini version (half the size):
[link](https://huggingface.co/datasets/Den4ikAI/mailruQA-small)
Current checkpoint: 150k |
distilbert-base-cased-distilled-squad | [
"pytorch",
"tf",
"rust",
"safetensors",
"openvino",
"distilbert",
"question-answering",
"en",
"dataset:squad",
"arxiv:1910.01108",
"arxiv:1910.09700",
"transformers",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"has_space"
]
| question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 257,745 | 2022-11-16T17:41:21Z | ---
license: mit
---
This is still a WIP, but if anyone still wants to try it out, the prompt token is mdjrny-splttr
Updated Style - Midjourney-v4-PaintArt
[Link](https://huggingface.co/ShadoWxShinigamI/Midjourney-v4-PaintArt "Paint Art") |
AHussain0418/distillbert-truth-detector | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-17T02:14:14Z | # L0 Regularizer
PyTorch adaptation of Louizos (2017) to wrap PyTorch modules.
This code provides a method for wrapping an existing PyTorch module and regularizing it according to the L0 norm. It is an adaptation and extension of code from the repository [here](https://github.com/AMLab-Amsterdam/L0_regularization). The code works in the following manner:
1. The supplied module is copied.
2. The copied module has its existing parameters copied into a new parameter dictionary.
3. Parameters used for masking are generated for each copied parameter in the dictionary.
4. The parameters are then deleted from the copied module.
5. During a forward pass the parameters are masked and then plugged into the module as regular tensors.
This method allows for the generic adaptation of any PyTorch module, though it leaves room for further performance improvements.
An important divergence from Louizos that allows it to be generalized is that each parameter is optimized separately. In other words, it optimizes for connection dropout rather than neuron dropout. It also cannot process large batches while sampling each dropout mask independently. Nevertheless, the code has proven useful for analyzing the computational complexity of different data given an architecture.
I plan to revisit this code in the future and implement some or all of the following:
* A method to exclude parameters from regularization
* A method allowing batch processing with independent sampling
* Support for initializing parameters with different distributions
# Using
Initializing requires a `module` of type `torch.nn.Module` and `lam` of type `float`; the module is used as explained above, and `lam` is the L0 regularization constant. Optional parameters include `weight_decay` (L2 regularization constant) and `droprate_init` (likelihood of masking at initialization). Other parameters are detailed in Louizos (2017) but have to do with the dropout sampling method and shouldn't need tweaking.
After initialization use your new module in the following way:
* `.forward()` should function as your module did initially
* `.regularization()` will return the combined L0 and L2 norm (avoid using L2 norm of existing PyTorch optimizers)
* `.count_l0` will return the expected value of the number of retained parameters during a forward pass
* `.count_l2` will return the expected cost of encoding the parameters (sum of squares of expected values AFTER masking)
The last two are useful for measurement; `.regularization()` is the term to add to your training loss so it participates in backpropagation, as in the sketch below.
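A sketch of a training loop under the assumptions above — `L0Wrapper` stands in for the wrapper class this repo provides (its real name may differ), and the toy batch replaces a proper DataLoader:

```python
import torch
import torch.nn as nn

# Hypothetical: "L0Wrapper" is a placeholder for the wrapper class described above.
base = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
model = L0Wrapper(base, lam=1e-4, weight_decay=5e-4, droprate_init=0.5)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Toy batch standing in for a real DataLoader.
loader = [(torch.randn(32, 784), torch.randint(0, 10, (32,)))]

for x, y in loader:
    optimizer.zero_grad()
    logits = model(x)  # .forward() behaves like the wrapped module
    loss = criterion(logits, y) + model.regularization()  # add the L0 (and L2) penalty
    loss.backward()
    optimizer.step()

print("expected retained parameters:", model.count_l0)
```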
|
AZTEC/Arcane | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-17T07:43:43Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2500 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 2,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 5000,
"warmup_steps": 500,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Pinwheel/wav2vec2-large-xlsr-53-hi | [
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"transformers"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | 2022-11-17T08:31:25Z | ---
tags:
- endpoints-template
license: apache-2.0
---
# Multi-Model GPU Inference with Hugging Face Inference Endpoints
Multi-model Inference Endpoints provide a way to deploy multiple models onto the same infrastructure for scalable and cost-effective inference. On a multi-model Inference Endpoint, we load a list of models into memory, on either CPU or GPU, and dynamically use them at inference time.
The following diagram shows how multi-model inference endpoints look.

This repository includes a [custom handler](handler.py) with a sample multi-model `EndpointHandler` implementation. This multi-model handler loads 5 different models for inference, including:
- `DistilBERT` model for `sentiment-analysis`
- `Marian` model `translation`
- `BART` model for `summarization`
- `BERT` model for `token-classification`
- `BERT` model for `text-classification`
If you want to learn more about multi-model inference endpoints, check out https://www.philschmid.de/multi-model-inference-endpoints
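For context, a stripped-down sketch of what such a multi-model handler can look like — this is illustrative and simplified, not the actual `handler.py` shipped in this repository:

```python
from typing import Any, Dict, List

from transformers import pipeline


class EndpointHandler:
    def __init__(self, path: str = ""):
        # Load every pipeline once at startup; the model choices here are examples.
        self.pipelines = {
            "facebook/bart-large-cnn": pipeline(
                "summarization", model="facebook/bart-large-cnn"
            ),
            "distilbert-base-uncased-finetuned-sst-2-english": pipeline(
                "sentiment-analysis",
                model="distilbert-base-uncased-finetuned-sst-2-english",
            ),
        }

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        # Route the request to the pipeline named by "model_id".
        model_id = data.pop("model_id")
        inputs = data.pop("inputs")
        return self.pipelines[model_id](inputs)
```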
# Use with Inference Endpoints
Hugging Face Inference Endpoints can be used with an HTTP client in any language. We will use Python and the `requests` library to send our requests (make sure you have it installed: `pip install requests`).

## Send requests with Python
```python
import json
import requests as r
ENDPOINT_URL = "" # url of your endpoint
HF_TOKEN = "" # token of the account you deployed
# define model and payload
model_id = "facebook/bart-large-cnn"
text = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
request_body = {"inputs": text, "model_id": model_id}
# HTTP headers for authorization
headers= {
"Authorization": f"Bearer {HF_TOKEN}",
"Content-Type": "application/json"
}
# send request
response = r.post(ENDPOINT_URL, headers=headers, json=request_body)
prediction = response.json()
# [{'summary_text': 'The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world.'}]
```
|
Abdullaziz/model1 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-17T09:06:47Z | ---
license: cc
---
This model is a PyTorch version of [uklfr/gottbert-base](https://huggingface.co/uklfr/gottbert-base). All credit goes to its developers.
|
AhmedHassan19/model | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
---
## Prompt Trigger
Use the keywords `darkprincess638 person` to trigger the character; they work best at the start of the prompt.
## Examples
This is some of the stuff that can be generated with this model
 |
Akbarariza/Anjar | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -154.59 +/- 128.22
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 50000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'sayby/home-made-ppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 128}
```
|
Akira-Yana/distilbert-base-uncased-finetuned-cola | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
---
## Model description
An answer classification model for boolean questions based on XLM-RoBERTa.
The answer classifier takes as input a boolean question and a passage, and returns a label (yes, no-answer, no).
The model was initialized with [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) and fine-tuned on the boolean questions from [TyDiQA](https://huggingface.co/datasets/tydiqa), as well as [BoolQ-X](https://arxiv.org/abs/2112.07772#).
## Intended uses & limitations
You can use the raw model for question classification. Biases associated with the pre-existing language model, xlm-roberta-large, may be present in our fine-tuned model, tydiqa-boolean-answer-classifier.
## Usage
You can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework for supporting boolean questions in reading comprehension: [examples](https://github.com/primeqa/primeqa/tree/main/examples/boolqa).
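For a quick standalone check outside of PrimeQA, the classifier can also be loaded with the plain Transformers sequence-classification API. A minimal sketch (the checkpoint id below is a placeholder for this repository's actual id; the label names are read from the model config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint id -- substitute the actual repository id of this model.
checkpoint = "PrimeQA/tydiqa-boolean-answer-classifier"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

question = "Is the sky blue?"
passage = "The sky appears blue because sunlight is scattered by molecules in the atmosphere."
inputs = tokenizer(question, passage, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])  # one of: yes, no-answer, no
```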
### BibTeX entry and citation info
```bibtex
@article{Rosenthal2021DoAT,
title={Do Answers to Boolean Questions Need Explanations? Yes},
author={Sara Rosenthal and Mihaela A. Bornea and Avirup Sil and Radu Florian and Scott McCarley},
journal={ArXiv},
year={2021},
volume={abs/2112.07772}
}
```
```bibtex
@misc{https://doi.org/10.48550/arxiv.2206.08441,
author = {McCarley, Scott and
Bornea, Mihaela and
Rosenthal, Sara and
Ferritto, Anthony and
Sultan, Md Arafat and
Sil, Avirup and
Florian, Radu},
title = {GAAMA 2.0: An Integrated System that Answers Boolean and Extractive Questions},
journal = {CoRR},
publisher = {arXiv},
year = {2022},
url = {https://arxiv.org/abs/2206.08441},
}
``` |
Aleksandra/herbert-base-cased-finetuned-squad | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-11-18T00:07:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-expression_epoch5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-expression_epoch5
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5897
- Precision: 0.5835
- Recall: 0.5688
- F1: 0.5760
- Accuracy: 0.8344
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 218 | 0.5185 | 0.5076 | 0.5034 | 0.5055 | 0.8207 |
| No log | 2.0 | 436 | 0.4972 | 0.4948 | 0.5638 | 0.5271 | 0.8177 |
| 0.5193 | 3.0 | 654 | 0.5128 | 0.5838 | 0.5554 | 0.5692 | 0.8390 |
| 0.5193 | 4.0 | 872 | 0.5665 | 0.5612 | 0.6074 | 0.5834 | 0.8224 |
| 0.2063 | 5.0 | 1090 | 0.5897 | 0.5835 | 0.5688 | 0.5760 | 0.8344 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Alessandro/model_name | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-finetuned-squad-seed-42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad-seed-42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1937 | 1.0 | 8235 | 1.2350 |
| 0.9256 | 2.0 | 16470 | 1.3129 |
| 0.7489 | 3.0 | 24705 | 1.4364 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AlexDemon/Alex | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-18T01:17:04Z | ---
license: apache-2.0
---
## Trigger Prompt
Use the keywords `darkprincess638 person` to trigger the character; it works best when placed at the start of the prompt.
## Examples
Here are some examples of what you can generate with this model:
 |
AlexN/xls-r-300m-fr | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 17 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### Sonic06-Diffusion on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
#### Model by Laughify
This is a fine-tuned Stable Diffusion model trained on screenshots from the Sonic The Hedgehog (2006) game. Use **saisikwrd** in your prompts for the effect.
You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb).
You can run your new concept via A1111 Colab :[Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
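If you would rather run it locally than in Colab, a minimal `diffusers` sketch looks roughly like this (the repository id below is a placeholder; point it at wherever the Sonic06-Diffusion weights are hosted):
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo id -- replace with the actual location of the Sonic06-Diffusion weights.
pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/sonic06-diffusion", torch_dtype=torch.float16
).to("cuda")

prompt = "saisikwrd standing in a futuristic city, highly detailed"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("sonic06_sample.png")
```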
Sample pictures of this concept:



|
AlexN/xls-r-300m-pt | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"pt",
"dataset:mozilla-foundation/common_voice_8_0",
"transformers",
"robust-speech-event",
"mozilla-foundation/common_voice_8_0",
"generated_from_trainer",
"hf-asr-leaderboard",
"license:apache-2.0",
"model-index"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | null | ---
inference: true
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
---
# -Stable Diffusion fine tuned on Fantastic Mr Fox screencaps-
Use prompt: 'fantasticmrfox'
# This model is very versatile but without negative prompts it will mostly produce images of foxes or other furry creatures. To get the most out of this model you MUST use negative prompts...
For example:
If you want to make a creature that is not a fox,
use the negative prompt: 'fox'
If you want to make landscapes/backgrounds/interiors/humans etc.,
use the negative prompt: 'fox, creature'
### Training images vs Output images:
<img src="https://s3.amazonaws.com/moonup/production/uploads/1668737704729-636d7d02ffbe479c9799866d.png" width="100%"/> |
AlexaRyck/KEITH | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- AntBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: AntBulletEnv-v0
type: AntBulletEnv-v0
metrics:
- type: mean_reward
value: 1571.56 +/- 109.37
name: mean_reward
verified: false
---
# **A2C** Agent playing **AntBulletEnv-v0**
This is a trained model of a **A2C** agent playing **AntBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
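A minimal loading sketch in the meantime, using the classic gym API (the repo id and checkpoint filename below are placeholders, not the actual artifact names):
```python
import gym
import pybullet_envs  # registers AntBulletEnv-v0
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename -- replace with this model's actual Hub location.
checkpoint = load_from_hub(repo_id="user/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```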
|
Alexander-Learn/bert-finetuned-ner-accelerate | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language: en
license: apache-2.0
library_name: diffusers
tags: []
datasets: huggan/smithsonian_butterflies_subset
metrics: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
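Until that snippet is filled in, a minimal sketch using the standard `DDPMPipeline` API would look roughly like this (assuming the weights in this repository load as an unconditional DDPM pipeline):
```python
from diffusers import DDPMPipeline

# Load the pipeline from this repository and sample a single butterfly image.
pipeline = DDPMPipeline.from_pretrained("IGKKR/ddpm-butterflies-128")
image = pipeline().images[0]
image.save("butterfly.png")
```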
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- ema_inv_gamma: None
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/IGKKR/ddpm-butterflies-128/tensorboard?#scalars)
|
Alexander-Learn/bert-finetuned-squad | [
"pytorch",
"tensorboard",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
language: en
thumbnail: http://www.huggingtweets.com/rundizzy-s4m31p4n-tyler02020202/1668738792600/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1474994961896644608/um4unzmz_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1546538365654294529/BRG3I5xQ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529675700772302848/uXtYNx_v_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">tyler & dizzy & ppigg</div>
<div style="text-align: center; font-size: 14px;">@rundizzy-s4m31p4n-tyler02020202</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from tyler & dizzy & ppigg.
| Data | tyler | dizzy | ppigg |
| --- | --- | --- | --- |
| Tweets downloaded | 2764 | 3234 | 2978 |
| Retweets | 120 | 206 | 999 |
| Short tweets | 637 | 727 | 646 |
| Tweets kept | 2007 | 2301 | 1333 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rul6tde/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @rundizzy-s4m31p4n-tyler02020202's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1yskbt0w) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1yskbt0w/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/rundizzy-s4m31p4n-tyler02020202')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Alexandru/creative_copilot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9825925925925926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0552
- Accuracy: 0.9826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2427 | 1.0 | 190 | 0.0954 | 0.9730 |
| 0.1728 | 2.0 | 380 | 0.0614 | 0.9822 |
| 0.1481 | 3.0 | 570 | 0.0552 | 0.9826 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AliPotter24/a | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-18T03:36:19Z | ---
language: en
thumbnail: http://www.huggingtweets.com/dril-s4m31p4n-wnbagirlfriend/1668742659829/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1510917391533830145/XW-zSFDJ_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1529675700772302848/uXtYNx_v_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1427129645888114693/HsNIpekZ_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">wint & ppigg & jody</div>
<div style="text-align: center; font-size: 14px;">@dril-s4m31p4n-wnbagirlfriend</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from wint & ppigg & jody.
| Data | wint | ppigg | jody |
| --- | --- | --- | --- |
| Tweets downloaded | 3234 | 2978 | 3164 |
| Retweets | 493 | 995 | 75 |
| Short tweets | 286 | 647 | 750 |
| Tweets kept | 2455 | 1336 | 2339 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/r8cjggaw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @dril-s4m31p4n-wnbagirlfriend's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/33culu2k) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/33culu2k/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/dril-s4m31p4n-wnbagirlfriend')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Aliraza47/BERT | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: agpl-3.0
---
stanford_v0.0.1 segments muscle, bone, VAT and SAT on L3 axial CT slices.
abCT_v0.0.1 segments muscle, IMAT, VAT, and SAT on L3 axial CT slices. |
Alireza-rw/testbot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
library_name: stable-baselines3
tags:
- HalfCheetahBulletEnv-v0
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: HalfCheetahBulletEnv-v0
type: HalfCheetahBulletEnv-v0
metrics:
- type: mean_reward
value: 937.40 +/- 138.50
name: mean_reward
verified: false
---
# **A2C** Agent playing **HalfCheetahBulletEnv-v0**
This is a trained model of a **A2C** agent playing **HalfCheetahBulletEnv-v0**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
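A minimal loading sketch in the meantime (the repo id and checkpoint filename below are placeholders):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename -- replace with this model's actual Hub location.
checkpoint = load_from_hub(repo_id="user/a2c-HalfCheetahBulletEnv-v0", filename="a2c-HalfCheetahBulletEnv-v0.zip")
model = A2C.load(checkpoint)
```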
|
Alireza1044/albert-base-v2-rte | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 30 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: albert-base-v2-finetuned-squad-seed-1024
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2-finetuned-squad-seed-1024
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9736
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8631 | 1.0 | 8248 | 0.8434 |
| 0.6218 | 2.0 | 16496 | 0.8565 |
| 0.4366 | 3.0 | 24744 | 0.9736 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Alireza1044/albert-base-v2-sst2 | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 52 | null | ---
license: odc-by
---
Basically, generate the images by saying "dnd[RACE] person". I know some aren't people, but it's what I've got to work with. ;)
Make sure there are no spaces or punctuation in the "dnd[RACE HERE]" section, e.g. "a portrait of dndYuanTi person, intricate, elegant, highly detailed, digital painting, artstation, trending, Volumetric lighting"
Here is a list of all of them (Autognome is VERY undertrained...):
* dndAarakocra
* dndAasimar
* dndAirGenasi
* dndAstralElf
* dndAutognome
* dndBugbear
* dndCentaur
* dndChangeling
* dndDeepGnome
* dndDragonborn
* dndDwarf
* dndEarthGenasi
* dndEladrin
* dndElf
* dndFairy
* dndFirbolg
* dndFireGenasi
* dndGenasi
* dndGiff
* dndGith
* dndGnome
* dndGoblin
* dndGoliath
* dndGrung
* dndHadozee
* dndHalfElf
* dndHalfling
* dndHalfOrc
* dndHarengon
* dndHobgoblin
* dndHuman
* dndKalashtar
* dndKenku
* dndKobold
* dndLeonin
* dndLizardfolk
* dndLocathah
* dndLoxodon
* dndMinotaur
* dndOrc
* dndOwlin
* dndPlasmoid
* dndRebornLineage
* dndSatyr
* dndSeaElf
* dndShadarKai
* dndShifter
* dndSimicHybrid
* dndTabaxi
* dndThriKreen
* dndTiefling
* dndTortle
* dndTriton
* dndVedalken
* dndVerdan
* dndWarforged
* dndWaterGenasi
* dndYuanTi |
Amalq/distilroberta-base-finetuned-MentalHealth | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: dpkmnit/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# dpkmnit/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7048
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 66549, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 1.2092 | 0 |
| 0.7048 | 1 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.1
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AndrewChar/model-QA-5-epoch-RU | [
"tf",
"distilbert",
"question-answering",
"ru",
"dataset:sberquad",
"transformers",
"generated_from_keras_callback",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"DistilBertForQuestionAnswering"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 109 | null | ---
tags:
- text-to-image
library_name: generic
---
# Text To Image repository template
This is a template repository for text-to-image models, meant to support generic inference with the Hugging Face Hub generic Inference API. There are two required steps:
1. Specify the requirements by defining a `requirements.txt` file.
2. Implement the `pipeline.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload all the elements needed for inference (model, processors, tokenizers, etc.). This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work.
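A minimal sketch of what `pipeline.py` could look like for a diffusion-based text-to-image model (the class name and call signature follow the template; the use of `StableDiffusionPipeline` here is just one possible choice, not a requirement):
```python
from PIL import Image
import torch
from diffusers import StableDiffusionPipeline


class PreTrainedPipeline:
    def __init__(self, path: str):
        # Called once: load the model and preload everything needed for inference.
        device = "cuda" if torch.cuda.is_available() else "cpu"
        self.pipe = StableDiffusionPipeline.from_pretrained(path).to(device)

    def __call__(self, inputs: str) -> Image.Image:
        # Called per request: return a PIL image generated from the text prompt.
        return self.pipe(inputs, num_inference_steps=25).images[0]
```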
Example repos
* https://huggingface.co/osanseviero/BigGAN-deep-128/blob/main/pipeline.py
## How to start
First create a repo in https://hf.co/new.
Then clone this template and push it to your repo.
```
git clone https://huggingface.co/templates/text-to-image
cd text-to-image
git remote set-url origin https://huggingface.co/$YOUR_USER/$YOUR_REPO_NAME
git push --force
``` |
Andrey1989/mt5-small-finetuned-mlsum-es | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: mit
tags:
- nowcasting
- forecasting
- timeseries
- remote-sensing
---
# Nowcasting CNN
## Model description
A 3D convolutional model that takes in several different data streams.
The architecture is roughly:
1. The satellite image time series goes into several 3D convolution layers.
2. The NWP time series goes into several 3D convolution layers.
3. The final convolutional layer feeds into a fully connected layer. This is joined by
other data inputs like
- PV yield
- time variables
Then there are ~4 fully connected layers which forecast the
PV yield / GSP into the future.
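An illustrative PyTorch sketch of that layout (all channel counts, layer widths and the forecast horizon below are made up for the example and are not the real configuration):
```python
import torch
import torch.nn as nn


class NowcastingCNN(nn.Module):
    """Rough sketch of the architecture described above."""

    def __init__(self, sat_channels=11, nwp_channels=10, extra_features=32, forecast_len=24):
        super().__init__()
        # 1. Satellite image time series -> stacked 3D convolutions
        self.sat_encoder = nn.Sequential(
            nn.Conv3d(sat_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # 2. NWP time series -> stacked 3D convolutions
        self.nwp_encoder = nn.Sequential(
            nn.Conv3d(nwp_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        # 3. Concatenate with PV yield / time features, then ~4 fully connected layers
        self.head = nn.Sequential(
            nn.Linear(64 + 64 + extra_features, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, forecast_len),
        )

    def forward(self, satellite, nwp, extra):
        # satellite: (B, C_sat, T, H, W), nwp: (B, C_nwp, T, H, W), extra: (B, extra_features)
        features = torch.cat([self.sat_encoder(satellite), self.nwp_encoder(nwp), extra], dim=1)
        return self.head(features)
```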
## Intended uses & limitations
Forecasting short term PV power for different regions and nationally in the UK
## How to use
[More information needed]
## Limitations and bias
[More information needed]
## Training data
Training data is EUMETSAT RSS imagery over the UK, on-the-ground PV data, and NWP predictions.
## Training procedure
[More information needed]
## Evaluation results
[More information needed]
|
Andrey1989/mt5-small-finetuned-mlsum-fr | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-18T09:10:34Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3100
- Precision: 0.9309
- Recall: 0.9435
- F1: 0.9371
- Accuracy: 0.9294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 234 | 0.2362 | 0.9356 | 0.9484 | 0.9420 | 0.9335 |
| No log | 2.0 | 468 | 0.2854 | 0.9303 | 0.9425 | 0.9363 | 0.9282 |
| 0.2119 | 3.0 | 702 | 0.3100 | 0.9309 | 0.9435 | 0.9371 | 0.9294 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Andrija/RobertaFastBPE | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2022-11-18T09:21:10Z | ---
thumbnail: https://imgur.com/DkGWTA2.png
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: false
---
# Diffusion model
This model is trained on detailed semi-realistic images via my anime model.
# Sample generations
This model is made to produce semi-realistic and realistic results with a lot of detail.
```
Positive:1girl, aura, blue_fire, electricity, energy, fire, flame, glowing, glowing_eyes, green_eyes, hitodama, horns, lightning, long_hair, magic, male_focus, solo, spirit
Negative:lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 20, Sampler: DPM++ 2S a, CFG scale: 8, Seed: 2526294281, Size: 896x768
```
<img src=https://imgur.com/HHdOmIF.jpg width=75% height=75%>
```
Positive: a girl,Phoenix girl,fluffy hair,war,a hell on earth, Beautiful and detailed costume, blue glowing eyes, masterpiece, (detailed hands), (glowing), twintails, smiling, beautiful detailed white gloves, (upper_body), (realistic)
Negative: lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 20, Sampler: DPM++ 2S a Karras, CFG scale: 8, Seed: 2495938777/2495938779, Size: 896x768
```
<img src=https://imgur.com/bHiTlAu.png width=75% height=75%>
<img src=https://imgur.com/dGFn0uV.png width=75% height=75%>
```
Positive:1girl, blurry, bracelet, breasts, dress, earrings, fingernails, grey_eyes, jewelry, lips, lipstick, looking_at_viewer, makeup, nail_polish, necklace, petals, red_lips, short_hair, solo, white_hair
Negative:lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 20, Sampler: DPM++ 2S a, CFG scale: 8, Seed: 3149099819, Size: 704x896
```
<img src=https://imgur.com/tnGOZz8.png width=75% height=75%>
Img2img results:
```
Positive:1girl, anal_hair, black_pubic_hair, blurry, blurry_background, brown_eyes, colored_pubic_hair, excessive_pubic_hair, female_pubic_hair, forehead, grass, lips, looking_at_viewer, male_pubic_hair, mismatched_pubic_hair, pov, pubic_hair, realistic, solo, stray_pubic_hair, teeth
Negative:lowres, bad anatomy, ((bad hands)), text, error, ((missing fingers)), cropped, jpeg artifacts, worst quality, low quality, signature, watermark, blurry, deformed, extra ears, deformed, disfigured, mutation, censored, ((multiple_girls))
Steps: 35, Sampler: Euler a, CFG scale: 9, Seed: 2148680457, Size: 512x512, Denoising strength: 0.6, Mask blur: 4
```
<img src=https://imgur.com/RVl7Xxd.png width=75% height=75%>
## Disclaimer
If you get anime images rather than semi-realistic ones, try adding prompts like semi realistic,
realistic or (SemiRealImg); that usually helps. This model also works nicely with
landscapes, like my previous one, although I recommend my other anime model for landscapes.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
AnjanBiswas/distilbert-base-uncased-finetuned-emotion | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 37 | null | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vikram15/bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vikram15/bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7556
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 954, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 2.3076 | 0 |
| 1.0840 | 1 |
| 0.7556 | 2 |
### Framework versions
- Transformers 4.24.0
- TensorFlow 2.9.2
- Datasets 2.7.0
- Tokenizers 0.13.2
|
Anonymous/ReasonBERT-TAPAS | [
"pytorch",
"tapas",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"TapasModel"
],
"model_type": "tapas",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | Access to model dwmit/ja_classification is restricted and you are not in the authorized list. Visit https://huggingface.co/dwmit/ja_classification to ask for access. |
Anonymous0230/model_name | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: NLP-sentiment-project-2000-samples
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.9715
- name: F1
type: f1
value: 0.9716558925907509
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# NLP-sentiment-project-2000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1038
- Accuracy: 0.9715
- F1: 0.9717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 20
- eval_batch_size: 20
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AnonymousNLP/pretrained-model-1 | [
"pytorch",
"gpt2",
"transformers"
]
| null | {
"architectures": [
"GPT2DoubleHeadsModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
language:
- sv
- no
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mt-no-sv-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt-no-sv-finetuned
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-no-sv](https://huggingface.co/Helsinki-NLP/opus-mt-no-sv) on the None dataset.
It achieves the following results on the Tatoeba.Nor.Swe evaluation set:
- Loss: 0.5130
- Bleu: 66.4015
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 24
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.085 | 1.0 | 10268 | 0.5365 | 65.9489 |
| 1.0258 | 2.0 | 20536 | 0.5221 | 66.0704 |
| 0.9783 | 3.0 | 30804 | 0.5147 | 66.4690 |
| 0.9578 | 4.0 | 41072 | 0.5130 | 66.4015 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: cc0-1.0
---
I trained the model on 65 images from the film. I used the template prompt `painting` while training the model, so including that word in the prompt helps, though it is not required.
`painting of a spaceship by ghost-in-the-shell-style` or `a spaceship by ghost-in-the-shell-style` |
AnonymousSub/AR_rule_based_roberta_twostagetriplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: creativeml-openrail-m
---
**Milo Manara Style**
This is the Alpha release of a Stable Diffusion model trained to achieve the style of the Italian illustration master Milo Manara.
Use the token **in the style of ->Manara** in your prompts for the style.
**Sample result**

**Warning**: Due to the nature of the style, NSFW images may be easily generated using this model.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
AnonymousSub/SR_rule_based_bert_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-bbc-news
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-bbc-news
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0107
- Accuracy: 0.9955
- F1: 0.9955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.3463 | 0.84 | 500 | 0.0392 | 0.9865 | 0.9865 |
| 0.0447 | 1.68 | 1000 | 0.0107 | 0.9955 | 0.9955 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AnonymousSub/SR_rule_based_bert_triplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- distigpt2
- hearthstone
metrics:
- bleu
- dvitel/codebleu
- exact_match
- chrf
datasets:
- dvitel/hearthstone
model-index:
- name: h1
results:
- task:
type: text-generation
name: Python Code Synthesis
dataset:
type: dvitel/hearthstone
name: HearthStone
split: test
metrics:
- type: exact_match
value: 0.21212121212121213
name: Exact Match
- type: bleu
value: 0.9637468196180485
name: BLEU
- type: dvitel/codebleu
value: 0.8884667222252154
name: CodeBLEU
- type: chrf
value: 96.5942286007928
name: chrF
---
# h1
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset.
[GitHub repo](https://github.com/dvitel/nlp-sem-parsing/blob/master/h1.py).
It achieves the following results on the evaluation set:
- Loss: 0.0890
- Exact Match: 0.1970
- Bleu: 0.9737
- Codebleu: 0.9172
- Ngram Match Score: 0.8984
- Weighted Ngram Match Score: 0.8985
- Syntax Match Score: 0.9293
- Dataflow Match Score: 0.9429
- Chrf: 97.5313
## Model description
DistilGPT2 fine-tuned on the HearthStone dataset, with the target Python code preprocessed into a dumped AST representation. Example:
```python
#gold labels
Module([ClassDef('Innervate', [Name('SpellCard', Load())], [], [FunctionDef('__init__', arguments([], [arg('self', None, None)], None, [], [], None, []), [Expr(Call(Attribute(Call(Name('super', Load()), [], []), '__init__', Load()), [Constant('Innervate', None), Constant(0, None), Attribute(Name('CHARACTER_CLASS', Load()), 'DRUID', Load()), Attribute(Name('CARD_RARITY', Load()), 'FREE', Load())], []))], [], None, None), FunctionDef('use', arguments([], [arg('self', None, None), arg('player', None, None), arg('game', None, None)], None, [], [], None, []), [Expr(Call(Attribute(Call(Name('super', Load()), [], []), 'use', Load()), [Name('player', Load()), Name('game', Load())], [])), If(Compare(Attribute(Name('player', Load()),'mana', Load()), [Lt()], [Constant(8, None)]), [AugAssign(Attribute(Name('player', Load()),'mana', Store()), Add(), Constant(2, None))], [Assign([Attribute(Name('player', Load()),'mana', Store())], Constant(10, None), None)])], [], None, None)], [])], [])
```
```python
#wrong prediction (example of error after training)
Module([ClassDef('Innervate', [Name('SpellCard', Load())], [], [FunctionDef('__init__', arguments([], [arg('self', None, None)], None, [], [], None, []), [Expr(Call(Attribute(Call(Name('super', Load()), [], []), '__init__', Load()), [Constant('Innervate', None), Constant(0, None), Attribute(Name('CHARACTER_CLASS', Load()), 'DRUID', Load()), Attribute(Name('CARD_RARITY', Load()), 'FREE', Load())], []))], [], None, None), FunctionDef('use', arguments([], [arg('self', None, None), arg('player', None, None), arg('game', None, None)], None, [], [], None, []), [Expr(Call(Attribute(Call(Name('super', Load()), [], []), 'use', Load()), [Name('player', Load()), Name('game', Load())], [])), For(Compare(Attribute(Name('player', Load()),'maxa', Load()), [Lt()], [Constant(10, None)]), [AugAssign(Attribute(Name('player', Load()),'mana', Store()), Add(), Constant(2, None))], Exign([Name(Name('player', Load()),'mana', Store())], Constant(None, None), None)],], [], None, None)], [])], [])
```
## Intended uses & limitations
HearthStone card code synthesis.
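A rough generation sketch is shown below. The repo ID and the card-description string are assumptions (the exact input serialization is defined in the training script linked above), so adjust both as needed:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "dvitel/h1"  # assumed repo ID; adjust if the checkpoint lives elsewhere
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Placeholder card description: the real serialization of name/cost/attack/etc.
# follows the preprocessing in the linked training script.
card_description = "Innervate | 0 | -1 | -1 | Spell | Druid | Free | Gain 2 Mana Crystals this turn only."
inputs = tokenizer(card_description, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```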
## Training and evaluation data
See the split of the [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 17
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | Bleu | Codebleu | Ngram Match Score | Weighted Ngram Match Score | Syntax Match Score | Dataflow Match Score | Chrf |
|:-------------:|:------:|:-----:|:---------------:|:-----------:|:------:|:--------:|:-----------------:|:--------------------------:|:------------------:|:--------------------:|:-------:|
| 0.3871 | 11.94 | 1600 | 0.1043 | 0.0152 | 0.9499 | 0.8549 | 0.8089 | 0.8089 | 0.8653 | 0.9366 | 95.4674 |
| 0.0752 | 23.88 | 3200 | 0.0784 | 0.1212 | 0.9640 | 0.8874 | 0.8525 | 0.8526 | 0.8929 | 0.9516 | 96.7978 |
| 0.0448 | 35.82 | 4800 | 0.0717 | 0.1364 | 0.9693 | 0.9077 | 0.8782 | 0.8782 | 0.9069 | 0.9674 | 97.2100 |
| 0.0308 | 47.76 | 6400 | 0.0752 | 0.1364 | 0.9702 | 0.9061 | 0.8808 | 0.8810 | 0.9070 | 0.9554 | 97.1896 |
| 0.0223 | 59.7 | 8000 | 0.0762 | 0.1364 | 0.9724 | 0.9050 | 0.8877 | 0.8881 | 0.9093 | 0.9348 | 97.4616 |
| 0.0166 | 71.64 | 9600 | 0.0762 | 0.1667 | 0.9733 | 0.9140 | 0.8948 | 0.8951 | 0.9197 | 0.9461 | 97.4945 |
| 0.0128 | 83.58 | 11200 | 0.0793 | 0.1515 | 0.9728 | 0.9085 | 0.8911 | 0.8918 | 0.9189 | 0.9321 | 97.4152 |
| 0.0104 | 95.52 | 12800 | 0.0822 | 0.1667 | 0.9732 | 0.9165 | 0.8946 | 0.8950 | 0.9222 | 0.9541 | 97.4887 |
| 0.0084 | 107.46 | 14400 | 0.0832 | 0.1667 | 0.9737 | 0.9167 | 0.8970 | 0.8972 | 0.9254 | 0.9471 | 97.5326 |
| 0.007 | 119.4 | 16000 | 0.0837 | 0.1818 | 0.9743 | 0.9160 | 0.8983 | 0.8986 | 0.9238 | 0.9434 | 97.6638 |
| 0.0058 | 131.34 | 17600 | 0.0858 | 0.1818 | 0.9739 | 0.9200 | 0.8977 | 0.8977 | 0.9267 | 0.9579 | 97.5583 |
| 0.005 | 143.28 | 19200 | 0.0878 | 0.1818 | 0.9743 | 0.9180 | 0.8993 | 0.9001 | 0.9301 | 0.9426 | 97.5819 |
| 0.0044 | 155.22 | 20800 | 0.0877 | 0.1667 | 0.9736 | 0.9156 | 0.8957 | 0.8960 | 0.9278 | 0.9429 | 97.5109 |
| 0.0042 | 167.16 | 22400 | 0.0890 | 0.1970 | 0.9736 | 0.9171 | 0.8984 | 0.8984 | 0.9293 | 0.9424 | 97.5617 |
| 0.0038 | 179.1 | 24000 | 0.0891 | 0.2121 | 0.9738 | 0.9174 | 0.8991 | 0.8991 | 0.9285 | 0.9429 | 97.5452 |
| 0.0037 | 191.04 | 25600 | 0.0890 | 0.1970 | 0.9737 | 0.9172 | 0.8984 | 0.8985 | 0.9293 | 0.9429 | 97.5313 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AnonymousSub/SR_rule_based_roberta_bert_triplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: apache-2.0
tags:
- distigpt2
- hearthstone
metrics:
- bleu
- dvitel/codebleu
- exact_match
- chrf
datasets:
- dvitel/hearthstone
model-index:
- name: h0
results:
- task:
type: text-generation
name: Python Code Synthesis
dataset:
type: dvitel/hearthstone
name: HearthStone
split: test
metrics:
- type: exact_match
value: 0.19696969696969696
name: Exact Match
- type: bleu
value: 0.8881228393983
name: BLEU
- type: dvitel/codebleu
value: 0.6764180663401291
name: CodeBLEU
- type: chrf
value: 90.6099642899634
name: chrF
---
# h0
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset.
[GitHub repo](https://github.com/dvitel/nlp-sem-parsing/blob/master/h0.py).
It achieves the following results on the evaluation set:
- Loss: 0.3117
- Exact Match: 0.1970
- Bleu: 0.9085
- Codebleu: 0.7341
- Ngram Match Score: 0.7211
- Weighted Ngram Match Score: 0.7299
- Syntax Match Score: 0.7536
- Dataflow Match Score: 0.7317
- Chrf: 92.8689
## Model description
DistilGPT2 fine-tuned on the HearthStone dataset for 200 epochs.
## Intended uses & limitations
HearthStone card code synthesis.
## Training and evaluation data
See the split of the [hearthstone](https://huggingface.co/datasets/dvitel/hearthstone) dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 17
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | Bleu | Codebleu | Ngram Match Score | Weighted Ngram Match Score | Syntax Match Score | Dataflow Match Score | Chrf |
|:-------------:|:------:|:-----:|:---------------:|:-----------:|:------:|:--------:|:-----------------:|:--------------------------:|:------------------:|:--------------------:|:-------:|
| 0.543 | 11.94 | 1600 | 0.2701 | 0.0152 | 0.8552 | 0.6144 | 0.6027 | 0.6136 | 0.6431 | 0.5982 | 89.0280 |
| 0.1459 | 23.88 | 3200 | 0.2408 | 0.0909 | 0.8841 | 0.6733 | 0.6610 | 0.6719 | 0.7210 | 0.6393 | 91.2517 |
| 0.0801 | 35.82 | 4800 | 0.2498 | 0.1515 | 0.8966 | 0.6999 | 0.6954 | 0.7054 | 0.7326 | 0.6662 | 92.1356 |
| 0.0498 | 47.76 | 6400 | 0.2569 | 0.1818 | 0.9012 | 0.7015 | 0.7022 | 0.7114 | 0.7428 | 0.6496 | 92.4668 |
| 0.0323 | 59.7 | 8000 | 0.2732 | 0.1667 | 0.9044 | 0.7241 | 0.7025 | 0.7123 | 0.7551 | 0.7266 | 92.5429 |
| 0.0214 | 71.64 | 9600 | 0.2896 | 0.1667 | 0.9034 | 0.7228 | 0.7101 | 0.7195 | 0.7670 | 0.6945 | 92.4258 |
| 0.015 | 83.58 | 11200 | 0.2870 | 0.1667 | 0.9046 | 0.7292 | 0.7137 | 0.7228 | 0.7667 | 0.7137 | 92.5979 |
| 0.0121 | 95.52 | 12800 | 0.2907 | 0.1667 | 0.9075 | 0.7287 | 0.7198 | 0.7297 | 0.7696 | 0.6958 | 92.7074 |
| 0.0093 | 107.46 | 14400 | 0.2976 | 0.1667 | 0.9073 | 0.7365 | 0.7134 | 0.7238 | 0.7732 | 0.7356 | 92.8347 |
| 0.0073 | 119.4 | 16000 | 0.3037 | 0.1818 | 0.9085 | 0.7326 | 0.7154 | 0.7241 | 0.7529 | 0.7381 | 92.8343 |
| 0.006 | 131.34 | 17600 | 0.3047 | 0.1970 | 0.9104 | 0.7410 | 0.7230 | 0.7312 | 0.7667 | 0.7433 | 92.8286 |
| 0.005 | 143.28 | 19200 | 0.3080 | 0.1970 | 0.9088 | 0.7377 | 0.7232 | 0.7316 | 0.7746 | 0.7214 | 92.8035 |
| 0.0044 | 155.22 | 20800 | 0.3071 | 0.1970 | 0.9076 | 0.7343 | 0.7196 | 0.7283 | 0.7783 | 0.7112 | 92.7742 |
| 0.004 | 167.16 | 22400 | 0.3097 | 0.1970 | 0.9082 | 0.7440 | 0.7236 | 0.7334 | 0.7601 | 0.7587 | 92.8117 |
| 0.0035 | 179.1 | 24000 | 0.3111 | 0.1970 | 0.9080 | 0.7355 | 0.7204 | 0.7295 | 0.7616 | 0.7304 | 92.7990 |
| 0.0036 | 191.04 | 25600 | 0.3117 | 0.1970 | 0.9085 | 0.7341 | 0.7211 | 0.7299 | 0.7536 | 0.7317 | 92.8689 |
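If you want to recompute surface metrics such as BLEU, chrF and exact match on your own predictions, a rough sketch with the `evaluate` library follows (the card's reported numbers come from the author's evaluation script linked above; CodeBLEU is computed with the separate `dvitel/codebleu` metric and is omitted here):
```python
import evaluate

bleu = evaluate.load("bleu")
chrf = evaluate.load("chrf")
exact_match = evaluate.load("exact_match")

# Replace with generated ASTs and the gold ASTs from the test split.
predictions = ["Module([ClassDef('Innervate', ...)], [])"]
references = ["Module([ClassDef('Innervate', ...)], [])"]

print(exact_match.compute(predictions=predictions, references=references))
print(bleu.compute(predictions=predictions, references=[[r] for r in references]))
print(chrf.compute(predictions=predictions, references=[[r] for r in references]))
```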
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0
- Datasets 2.6.1
- Tokenizers 0.13.1
|
AnonymousSub/SR_rule_based_roberta_hier_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
license: unknown
---
# Soda Stream finetuned style Model
Produced from publicly available pictures in landscape, portrait and square format.
## Model info
The included model was trained on "multi-resolution" images.
## Using the model
* common subject prompt tokens: `<whatever> by soda stream`
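A minimal text-to-image sketch with diffusers, assuming the weights are published in diffusers format under `cyburn/soda_stream` (the repo the example images below are served from); if only a `.ckpt` is provided, load it with your usual Stable Diffusion UI instead:
```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed repo ID (taken from the example image URLs below); adjust if needed.
pipe = StableDiffusionPipeline.from_pretrained("cyburn/soda_stream", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("woman near a fountain by soda stream").images[0]
image.save("soda_stream_sample.png")
```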
## Example prompts
`woman near a fountain by soda stream`:
<img src="https://huggingface.co/cyburn/soda_stream/resolve/main/1.png" alt="Picture." width="500"/>
`woman in taxi by soda stream`:
<img src="https://huggingface.co/cyburn/soda_stream/resolve/main/2.png" alt="Picture." width="500"/>
`woman portrait by soda stream`:
<img src="https://huggingface.co/cyburn/soda_stream/resolve/main/3.png" alt="Picture." width="500"/>
|
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2022-11-18T15:50:51Z | ---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/nitrosocke/Ghibli-Diffusion/resolve/main/images/ghibli-diffusion-thumbnail.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
- diffusers
---
### Ghibli Diffusion
This is the fine-tuned Stable Diffusion model trained on images from modern anime feature films from Studio Ghibli.
Use the tokens **_ghibli style_** in your prompts for the effect.
**If you enjoy my work and want to test new models before release, please consider supporting me**
[](https://patreon.com/user?u=79196446)
**Characters rendered with the model:**

**Cars and Animals rendered with the model:**

**Landscapes rendered with the model:**

_ghibli style beautiful Caribbean beach tropical (sunset) - Negative prompt: soft blurry_

_ghibli style ice field white mountains ((northern lights)) starry sky low horizon - Negative prompt: soft blurry_
#### Prompt and settings for the Storm Trooper:
**ghibli style (storm trooper) Negative prompt: (bad anatomy)**
_Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3450349066, Size: 512x704_
#### Prompt and settings for the VW Beetle:
**ghibli style VW beetle Negative prompt: soft blurry**
_Steps: 30, Sampler: Euler a, CFG scale: 7, Seed: 1529856912, Size: 704x512_
This model was trained using the diffusers-based DreamBooth training by ShivamShrirao, with prior-preservation loss and the _train-text-encoder_ flag, for 15,000 steps.
<!-- ### Gradio
We support a [Gradio](https://github.com/gradio-app/gradio) Web UI run redshift-diffusion:
[](https://huggingface.co/spaces/nitrosocke/Ghibli-Diffusion-Demo)-->
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX]().
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "nitrosocke/Ghibli-Diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "ghibli style magical princess with golden hair"
image = pipe(prompt).images[0]
image.save("./magical_princess.png")
```
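The model can also be used for image-to-image generation; a rough sketch is shown below (on older diffusers versions the `image` argument is named `init_image`):
```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

model_id = "nitrosocke/Ghibli-Diffusion"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# Start from any photo you want to restyle.
init_image = Image.open("input_photo.jpg").convert("RGB").resize((512, 512))

prompt = "ghibli style mountain village at sunset"
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.0).images[0]
image.save("ghibli_img2img.png")
```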
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
AnonymousSub/SR_rule_based_roberta_only_classfn_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2022-11-18T15:53:37Z | ---
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-chinese-finetuned-food
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-food
This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0044
- F1: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.2163 | 1.0 | 3 | 1.7446 | 0.0201 |
| 1.5263 | 2.0 | 6 | 1.1179 | 0.6113 |
| 1.1837 | 3.0 | 9 | 0.7233 | 0.75 |
| 0.6987 | 4.0 | 12 | 0.4377 | 0.8766 |
| 0.5036 | 5.0 | 15 | 0.2544 | 0.9154 |
| 0.2602 | 6.0 | 18 | 0.1495 | 0.9598 |
| 0.1998 | 7.0 | 21 | 0.0834 | 0.9836 |
| 0.1182 | 8.0 | 24 | 0.0484 | 0.9911 |
| 0.0815 | 9.0 | 27 | 0.0280 | 1.0 |
| 0.05 | 10.0 | 30 | 0.0177 | 1.0 |
| 0.0375 | 11.0 | 33 | 0.0124 | 1.0 |
| 0.0244 | 12.0 | 36 | 0.0094 | 1.0 |
| 0.0213 | 13.0 | 39 | 0.0075 | 1.0 |
| 0.0163 | 14.0 | 42 | 0.0063 | 1.0 |
| 0.0147 | 15.0 | 45 | 0.0056 | 1.0 |
| 0.0124 | 16.0 | 48 | 0.0051 | 1.0 |
| 0.0125 | 17.0 | 51 | 0.0047 | 1.0 |
| 0.0115 | 18.0 | 54 | 0.0045 | 1.0 |
| 0.0116 | 19.0 | 57 | 0.0044 | 1.0 |
| 0.0102 | 20.0 | 60 | 0.0044 | 1.0 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.0+cu102
- Datasets 1.18.4
- Tokenizers 0.12.1
|
AnonymousSub/SR_rule_based_roberta_only_classfn_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | 2022-11-18T15:58:21Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Clasificador-Ojos
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.7727272510528564
---
# Clasificador-Ojos
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
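For inference, a standard image-classification pipeline should work; the repo ID below is a placeholder, so substitute the actual path of this checkpoint:
```python
from transformers import pipeline

# Placeholder repo ID; replace with the actual location of this model.
classifier = pipeline("image-classification", model="your-namespace/Clasificador-Ojos")

result = classifier("eye_photo.jpg")  # local path or URL to an eye image
print(result)  # list of {label, score} dicts, e.g. closed vs. opened eyes
```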
## Example Images
#### Closed Eyes

#### Opened Eyes
 |
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
language: ta
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
- hf-asr-leaderboard
- tamil language
model-index:
- name: XLSR Wav2Vec2 Tamil by Manan Dey
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ta
type: common_voice
args: ta
metrics:
- name: Test WER
type: wer
value: 57.004356
---
# Wav2Vec2-Large-XLSR-Tamil
When using this model, make sure that your speech input is sampled at 16kHz.
## Inference
The model can be used directly as follows:
```python
!pip install datasets
!pip install transformers
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
import librosa
from datasets import load_dataset
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
# Resampling to 16 kHz is handled by librosa.load(..., sr=16_000) below.
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
!pip install datasets
!pip install transformers
!pip install jiwer
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
import librosa
from datasets import load_dataset, load_metric
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model = Wav2Vec2ForCTC.from_pretrained("Gobee/Wav2vec2-Large-XLSR-Tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\ \’\–\(\)]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 57.004356 %
## Usage and Evaluation script
The script used for usage and evaluation can be found [here](https://colab.research.google.com/drive/1dyDe14iOmoNoVHDJTkg-hAgLnrGdI-Dk?usp=share_link)
## Training
The Common Voice `train`, `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1-Klkgr4f-C9SanHfVC5RhP0ELUH6TYlN?usp=sharing)
|
AnonymousSub/SR_rule_based_roberta_twostagetriplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
language: "en"
tags:
- text-to-speech
- TTS
- speech-synthesis
- fastspeech2
- speechbrain
license: "apache-2.0"
datasets:
- LJSpeech
metrics:
- mos
---
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
**IMPORTANT: This is a work in progress. This model is not providing meaningful output at the moment**
# Text-to-Speech (TTS) with FastSpeech2 trained on LJSpeech
This repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain using a [FastSpeech2](https://arxiv.org/abs/2006.04558) pretrained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).
The pre-trained model takes a short text as input and produces a spectrogram as output. One can get the final waveform by applying a vocoder (e.g., HiFIGAN) on top of the generated spectrogram.
## Install SpeechBrain
```
git clone https://github.com/speechbrain/speechbrain.git
cd speechbrain
pip install -r requirements.txt
pip install --editable .
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform Text-to-Speech (TTS) with FastSpeech2
```
import torchaudio
from speechbrain.pretrained import FastSpeech2
from speechbrain.pretrained import HIFIGAN
# Initialize TTS (FastSpeech2) and Vocoder (HiFIGAN)
fastspeech2 = FastSpeech2.from_hparams(source="speechbrain/tts-fastspeech2-ljspeech", savedir="tmpdir_tts")
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-libritts-16kHz", savedir="tmpdir_vocoder")
# Running the TTS
input_text = "Mary had a little lamb"
mel_output, durations, pitch, energy = fastspeech2.encode_text(input_text)
# Running Vocoder (spectrogram-to-waveform)
waveforms = hifi_gan.decode_batch(mel_output)
# Save the waveform
torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 16000)
```
If you want to generate multiple sentences in one-shot, you can do in this way:
```
from speechbrain.pretrained import FastSpeech2
fastspeech2 = FastSpeech2.from_hparams(source="speechbrain/tts-fastspeech2-ljspeech", savedir="tmpdir_tts")
items = [
"A quick brown fox jumped over the lazy dog",
"How much wood would a woodchuck chuck?",
"Never odd or even"
]
mel_outputs, durations, pitch, energy = fastspeech2.encode_batch(items)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/LJSpeech/TTS/fastspeech2/
python train.py --device=cuda:0 --max_grad_norm=1.0 --data_folder=/your_folder/LJSpeech-1.1 hparams/train.yaml
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1Yb8CDCrW7JF1_jg8Xc4U15z3W37VjrY5?usp=share_link).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
AnonymousSub/SR_rule_based_twostage_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
language:
- en
- hi
- multilingual
tags:
- generated_from_trainer
license: cc-by-sa-4.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# muril-en-hi-codemixed
muril-en-hi-codemixed is a masked language model, based on the [MuRIL](https://huggingface.co/google/muril-base-cased) multilingual model.
muril-en-hi-codemixed replaces the tokenizer, vocabulary and the embeddings layer of the MuRIL model.
The tokenizer and vocabulary used are the same as in the [roberta-en-hi-codemixed](https://huggingface.co/cjvt/roberta-en-hi-codemixed) model.
The new embedding weights were initialized from the MuRIL embeddings.
The new muril-en-hi-codemixed model was further pre-trained for two epochs on the same codemixed English and Hindi corpora
as the [roberta-en-hi-codemixed](https://huggingface.co/cjvt/roberta-en-hi-codemixed) model.
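A minimal fill-mask sketch follows; the repo ID is an assumption based on the sibling model's `cjvt` namespace, and the mask token is read from the tokenizer because the vocabulary was replaced:
```python
from transformers import pipeline

# Assumed repo ID; adjust if the checkpoint is published elsewhere.
fill_mask = pipeline("fill-mask", model="cjvt/muril-en-hi-codemixed")

mask = fill_mask.tokenizer.mask_token  # use whatever mask token the replaced tokenizer defines
print(fill_mask(f"Weekend pe hum movie {mask} dekhne jaayenge."))
```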
|
AnonymousSub/bert_mean_diff_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: Clasificador-Ojos-XD
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9696969985961914
---
# Clasificador-Ojos-XD
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images |
AnonymousSub/bert_snips | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2022-11-18T17:52:52Z | ---
license: creativeml-openrail-m
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
---
# Miyo-Waifu-Diffusion
This model is a DreamBooth fine-tune of Waifu-Diffusion v1.3 that can generate illustrations of Miyo Harada from THE IDOLM@STER CINDERELLA GIRLS.
To use the model, include "miyoshort" or "miyopony" in your prompt at a minimum.
miyoshort sample %2C(Driving%20red%20car_1.0)%2C1girl%2Chighly%20detailed%2Crim%20light%2Cwrinkles%20in%20clothes%2C8k%2Ckawaii%2C(masterpiece)%2Chigh%20quali.png)
Prompts:(sks miyoshort:1.0),(Driving red car:1.0),1girl,highly detailed,rim light,wrinkles in clothes,8k,kawaii,(masterpiece),high quality,green eyes,large breasts,kawaii, T-shirt,(holding steering wheel),in the car,looking side
Negative prompt: ( 2girls:1.2),3girls,(long hair:1.2),One-color background,lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry,bad anatomy, bad hands, bad quality, blurry, cropped, disconnected limbs, extra digit, extra limbs, fewer digits, jpeg artifacts, explicit, text,bad art, ugly, messy drawing, flesh pile,mutated hands and fingers, intricate human hands fingers, poorly drawn hands, malformed hands, bad hands,long neck,big face,big head,neck choker
miyopony sample %2C(Driving%20red%20car_1.1)%2C1girl%2Chighly%20detailed%2Crim%20light%2Cwrinkles%20in%20clothes%2C8k%2Ckawaii%2C(masterpiece)%2Chigh%20qualit.png)
(sks miyopony:1.0),(Driving red car:1.1),1girl,highly detailed,rim light,wrinkles in clothes,8k,kawaii,(masterpiece),high quality,green eyes,large breasts,kawaii, T-shirt,(holding steering wheel),in the car,(looking side)
Negative prompt: ( 2girls:1.2),3girls,(long hair:1.2),One-color background,lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry,bad anatomy, bad hands, bad quality, blurry, cropped, disconnected limbs, extra digit, extra limbs, fewer digits, jpeg artifacts, explicit, text,bad art, ugly, messy drawing, flesh pile,mutated hands and fingers, intricate human hands fingers, poorly drawn hands, malformed hands, bad hands,long neck,big face,big head,neck choker
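With diffusers, negative prompts like the ones above can be passed through the `negative_prompt` argument. Note that the `(token:weight)` emphasis syntax is a web-UI convention and is treated as plain text by diffusers. The repo ID below is a placeholder:
```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder repo ID; replace with the actual location of this checkpoint.
pipe = StableDiffusionPipeline.from_pretrained("your-namespace/Miyo-Waifu-Diffusion", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "sks miyoshort, 1girl, green eyes, kawaii, masterpiece, high quality"
negative_prompt = "2girls, 3girls, long hair, lowres, bad anatomy, bad hands, text, worst quality, blurry"

image = pipe(prompt, negative_prompt=negative_prompt).images[0]
image.save("miyo_sample.png")
```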
If the generated image resembles Miyo Harada, the copyright may belong to Bandai Namco Entertainment Inc. |
AnonymousSub/bert_triplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: beit-base-ches-demo-v0
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9870689655172413
widget:
- src: https://imgs.mongabay.com/wp-content/uploads/sites/20/2020/04/07204605/amazon_coca_01.jpg
example_title: Tree Canopy
- src: https://images.ctfassets.net/nzn0tepgtyr1/4tyavnFHhmNuVky1ISq51k/64aaf596f6b8ee12d0f0e898679c8f4f/Hero_Image.jpg?w=1024&h=710&fl=progressive&q=50&fm=jpg&bg=transparent
example_title: Low Vegetation
- src: https://outline-prod.imgix.net/20170228-YxGtsv8J0ePP0rXcnle2?auto=format&q=60&w=1280&s=27916f48ed9226c2a2b7848de8d7c0d1
example_title: Impervious Surfaces
- src: https://clarity.maptiles.arcgis.com/arcgis/rest/services/World_Imagery/MapServer/tile/15/11883/10109
example_title: Water
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# beit-base-ches-demo-v0
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0420
- Accuracy: 0.9871
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0183 | 3.45 | 300 | 0.0420 | 0.9871 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AnonymousSub/cline-emanuals-s10-AR | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 27 | null | ---
license: unknown
---
# Ans Huh finetuned style Model
Produced from publicly available pictures in landscape, portrait and square format.
## Model info
The included model was trained on "multi-resolution" images.
## Using the model
* common subject prompt tokens: `<whatever> watercolor by ans huh`
## Example prompts
`woman near a fountain watercolor by ans huh`:
<img src="https://huggingface.co/cyburn/ans_huh/resolve/main/1.jpg" alt="Picture." width="500"/>
`woman in taxi watercolor by ans huh`:
<img src="https://huggingface.co/cyburn/ans_huh/resolve/main/2.jpg" alt="Picture." width="500"/>
`man portrait watercolor by ans huh`:
<img src="https://huggingface.co/cyburn/ans_huh/resolve/main/3.jpg" alt="Picture." width="500"/>
|
AnonymousSub/cline-papers-roberta-0.585 | [
"pytorch",
"roberta",
"transformers"
]
| null | {
"architectures": [
"LecbertForPreTraining"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | ---
license: unknown
---
# Gauzy Storm finetuned style Model
Produced from publicly available pictures in landscape, portrait and square format. This model is focussed on creating animal hybrid artwork.
## Model info
The included model was trained on "multi-resolution" images.
## Using the model
* common subject prompt tokens: `<whatever> by gauzy storms`
## Example prompts
`bear deer by gauzy storms`:
<img src="https://huggingface.co/cyburn/gauzy_storms/resolve/main/1.png" alt="Picture." width="500"/>
`pinguin by gauzy storms`:
<img src="https://huggingface.co/cyburn/gauzy_storms/resolve/main/2.png" alt="Picture." width="500"/>
`unicorn zebra by gauzy storms`:
<img src="https://huggingface.co/cyburn/gauzy_storms/resolve/main/3.png" alt="Picture." width="500"/>
|
AnonymousSub/cline-techqa | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | Access to model ERRORCOMPANY/Lino is restricted and you are not in the authorized list. Visit https://huggingface.co/ERRORCOMPANY/Lino to ask for access. |
AnonymousSub/dummy_1 | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-eurosat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-in21k-finetuned-eurosat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0548
- Accuracy: 0.9893
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1182 | 1.0 | 759 | 0.1451 | 0.9752 |
| 0.132 | 2.0 | 1518 | 0.0755 | 0.9841 |
| 0.0262 | 3.0 | 2277 | 0.0548 | 0.9893 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.13.0+rocm5.2
- Datasets 2.8.0
- Tokenizers 0.12.1
|
AnonymousSub/dummy_2_parent | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: dof-Rai2-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dof-Rai2-1
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2022-11-18T22:05:31Z | ---
license: mit
tags:
- vision
- image-classification
datasets:
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# NAT (base variant)
NAT-Base trained on ImageNet-1K at 224x224 resolution.
It was introduced in the paper [Neighborhood Attention Transformer](https://arxiv.org/abs/2204.07143) by Hassani et al. and first released in [this repository](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
## Model description
NAT is a hierarchical vision transformer based on Neighborhood Attention (NA).
Neighborhood Attention is a restricted self attention pattern in which each token's receptive field is limited to its nearest neighboring pixels.
NA is a sliding-window attention pattern, and as a result is highly flexible and maintains translational equivariance.
NA comes with a PyTorch implementation through its extension, [NATTEN](https://github.com/SHI-Labs/NATTEN/).

[Source](https://paperswithcode.com/paper/neighborhood-attention-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=nat) to look for
fine-tuned versions on a task that interests you.
### Example
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, NatForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoImageProcessor.from_pretrained("shi-labs/nat-base-in1k-224")
model = NatForImageClassification.from_pretrained("shi-labs/nat-base-in1k-224")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more examples, please refer to the [documentation](https://huggingface.co/transformers/model_doc/nat.html#).
### Requirements
Other than transformers, this model requires the [NATTEN](https://shi-labs.com/natten) package.
If you're on Linux, you can refer to [shi-labs.com/natten](https://shi-labs.com/natten) for instructions on installing with pre-compiled binaries (just select your torch build to get the correct wheel URL).
You can alternatively use `pip install natten` to compile on your device, which may take up to a few minutes.
Mac users only have the latter option (no pre-compiled binaries).
Refer to [NATTEN's GitHub](https://github.com/SHI-Labs/NATTEN/) for more information.
### BibTeX entry and citation info
```bibtex
@article{hassani2022neighborhood,
title = {Neighborhood Attention Transformer},
author = {Ali Hassani and Steven Walton and Jiachen Li and Shen Li and Humphrey Shi},
year = 2022,
url = {https://arxiv.org/abs/2204.07143},
eprint = {2204.07143},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
``` |
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_10 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: mit
tags:
- vision
- image-classification
datasets:
- imagenet-21k
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# DiNAT (large variant)
DiNAT-Large with a 7x7 kernel pre-trained on ImageNet-21K, and fine-tuned on ImageNet-1K at 224x224.
It was introduced in the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Hassani et al. and first released in [this repository](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
## Model description
DiNAT is a hierarchical vision transformer based on Neighborhood Attention (NA) and its dilated variant (DiNA).
Neighborhood Attention is a restricted self attention pattern in which each token's receptive field is limited to its nearest neighboring pixels.
NA and DiNA are therefore sliding-window attention patterns, and as a result are highly flexible and maintain translational equivariance.
They come with PyTorch implementations through the [NATTEN](https://github.com/SHI-Labs/NATTEN/) package.

[Source](https://paperswithcode.com/paper/dilated-neighborhood-attention-transformer)
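The effect of dilation is easiest to see on the attended indices themselves. A small 1-D illustration (a hypothetical helper, not part of NATTEN) of which positions a token attends to with a kernel of size 3:
```python
def neighborhood_indices(i, kernel_size=3, dilation=1):
    half = kernel_size // 2
    return [i + dilation * offset for offset in range(-half, half + 1)]

print(neighborhood_indices(16, dilation=1))  # [15, 16, 17] -> dense local context (NA)
print(neighborhood_indices(16, dilation=4))  # [12, 16, 20] -> sparse global context (DiNA)
```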
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=dinat) to look for
fine-tuned versions on a task that interests you.
### Example
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, DinatForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-large-in22k-in1k-224")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-large-in22k-in1k-224")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
For more examples, please refer to the [documentation](https://huggingface.co/transformers/model_doc/dinat.html#).
### Requirements
Other than transformers, this model requires the [NATTEN](https://shi-labs.com/natten) package.
If you're on Linux, you can refer to [shi-labs.com/natten](https://shi-labs.com/natten) for instructions on installing with pre-compiled binaries (just select your torch build to get the correct wheel URL).
You can alternatively use `pip install natten` to compile on your device, which may take up to a few minutes.
Mac users only have the latter option (no pre-compiled binaries).
Refer to [NATTEN's GitHub](https://github.com/SHI-Labs/NATTEN/) for more information.
### BibTeX entry and citation info
```bibtex
@article{hassani2022dilated,
title = {Dilated Neighborhood Attention Transformer},
author = {Ali Hassani and Humphrey Shi},
year = 2022,
url = {https://arxiv.org/abs/2209.15001},
eprint = {2209.15001},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
``` |
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: mit
tags:
- vision
- image-classification
datasets:
- imagenet-21k
- imagenet-1k
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# DiNAT (large variant)
DiNAT-Large with a 7x7 kernel pre-trained on ImageNet-21K at 224x224, and fine-tuned on ImageNet-1K at 384x384 with increased dilation values.
It was introduced in the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Hassani et al. and first released in [this repository](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
## Model description
DiNAT is a hierarchical vision transformer based on Neighborhood Attention (NA) and its dilated variant (DiNA).
Neighborhood Attention is a restricted self attention pattern in which each token's receptive field is limited to its nearest neighboring pixels.
NA and DiNA are therefore sliding-window attention patterns, and as a result are highly flexible and maintain translational equivariance.
They come with PyTorch implementations through the [NATTEN](https://github.com/SHI-Labs/NATTEN/) package.

[Source](https://paperswithcode.com/paper/dilated-neighborhood-attention-transformer)
## Intended uses & limitations
You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=dinat) to look for
fine-tuned versions on a task that interests you.
### Example
Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, DinatForImageClassification
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-large-in22k-in1k-384")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-large-in22k-in1k-384")
inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
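If you want class probabilities rather than just the top label, a short follow-up on the same `logits` (plain PyTorch, nothing model-specific):
```python
import torch

probs = torch.softmax(logits, dim=-1)
top5 = probs.topk(5, dim=-1)
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(f"{model.config.id2label[idx.item()]}: {p.item():.3f}")
```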
For more examples, please refer to the [documentation](https://huggingface.co/transformers/model_doc/dinat.html#).
### Requirements
Other than transformers, this model requires the [NATTEN](https://shi-labs.com/natten) package.
If you're on Linux, you can refer to [shi-labs.com/natten](https://shi-labs.com/natten) for instructions on installing with pre-compiled binaries (just select your torch build to get the correct wheel URL).
You can alternatively use `pip install natten` to compile on your device, which may take up to a few minutes.
Mac users only have the latter option (no pre-compiled binaries).
Refer to [NATTEN's GitHub](https://github.com/SHI-Labs/NATTEN/) for more information.
### BibTeX entry and citation info
```bibtex
@article{hassani2022dilated,
title = {Dilated Neighborhood Attention Transformer},
author = {Ali Hassani and Humphrey Shi},
year = 2022,
url = {https://arxiv.org/abs/2209.15001},
eprint = {2209.15001},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
``` |
AnonymousSub/rule_based_hier_triplet_0.1_epochs_1_shard_1_squad2.0 | [
"pytorch",
"bert",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"BertForQuestionAnswering"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
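Because the model ends in a `Normalize()` module (see the architecture below), the embeddings are unit-length, so cosine similarity is just a dot product; the bundled utility computes it directly:
```python
from sentence_transformers import util

score = util.cos_sim(embeddings[0], embeddings[1])
print(score)  # 1x1 tensor holding the cosine similarity of the two sentences
```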
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`__main__.PubmedLowMemoryLoader` of length 26041 with parameters:
```
{'batch_size': 128}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 2000,
"evaluator": "__main__.PubmedTruePositiveIRetrievalEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 21,
"weight_decay": 0.01
}
```
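A rough sketch of how these pieces fit together with the `fit()` API. The custom `PubmedLowMemoryLoader` and evaluator are not public, so a plain `DataLoader` over `InputExample`s stands in for them, and the single training pair is a placeholder:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('{MODEL_NAME}')
train_examples = [InputExample(texts=["a query", "its matching passage"])]  # placeholder data
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=128)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=21,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```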
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: creativeml-openrail-m
---
Class token: `m_ross artstyle,`

## License

This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL-M license specifies:

1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content.
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license.
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M license with all your users (please read the license entirely and carefully).

Please read the full CreativeML OpenRAIL-M license carefully.
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 23 | null |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qag_tweetqa
pipeline_tag: text2text-generation
tags:
- questions and answers generation
widget:
- text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
example_title: "Questions & Answers Generation Example 1"
model-index:
- name: research-backup/t5-large-tweetqa-qag-np
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qag_tweetqa
type: default
args: default
metrics:
- name: BLEU4 (Question & Answer Generation)
type: bleu4_question_answer_generation
value: 14.14
- name: ROUGE-L (Question & Answer Generation)
type: rouge_l_question_answer_generation
value: 37.45
- name: METEOR (Question & Answer Generation)
type: meteor_question_answer_generation
value: 31.49
- name: BERTScore (Question & Answer Generation)
type: bertscore_question_answer_generation
value: 90.95
- name: MoverScore (Question & Answer Generation)
type: moverscore_question_answer_generation
value: 62.62
- name: QAAlignedF1Score-BERTScore (Question & Answer Generation)
type: qa_aligned_f1_score_bertscore_question_answer_generation
value: 92.64
- name: QAAlignedRecall-BERTScore (Question & Answer Generation)
type: qa_aligned_recall_bertscore_question_answer_generation
value: 92.27
- name: QAAlignedPrecision-BERTScore (Question & Answer Generation)
type: qa_aligned_precision_bertscore_question_answer_generation
value: 93.03
- name: QAAlignedF1Score-MoverScore (Question & Answer Generation)
type: qa_aligned_f1_score_moverscore_question_answer_generation
value: 65.47
- name: QAAlignedRecall-MoverScore (Question & Answer Generation)
type: qa_aligned_recall_moverscore_question_answer_generation
value: 64.68
- name: QAAlignedPrecision-MoverScore (Question & Answer Generation)
type: qa_aligned_precision_moverscore_question_answer_generation
value: 66.36
---
# Model Card of `research-backup/t5-large-tweetqa-qag-np`
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) for the question & answer pair generation task on the [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without a task prefix.
### Overview
- **Language model:** [t5-large](https://huggingface.co/t5-large)
- **Language:** en
- **Training data:** [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-large-tweetqa-qag-np")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-large-tweetqa-qag-np")
output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
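The pipeline returns the usual list of dicts with a `generated_text` field; the separator between the generated question and answer pairs comes from the `lmqg` training format, so inspect the raw string rather than assuming one:
```python
print(output[0]["generated_text"])
```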
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-large-tweetqa-qag-np/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------------|
| BERTScore | 90.95 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_1 | 40.9 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_2 | 28.27 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_3 | 19.84 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_4 | 14.14 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| METEOR | 31.49 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| MoverScore | 62.62 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedF1Score (BERTScore) | 92.64 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedF1Score (MoverScore) | 65.47 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedPrecision (BERTScore) | 93.03 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedPrecision (MoverScore) | 66.36 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedRecall (BERTScore) | 92.27 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedRecall (MoverScore) | 64.68 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| ROUGE_L | 37.45 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_tweetqa
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: None
- model: t5-large
- max_length: 256
- max_length_output: 128
- epoch: 16
- batch: 16
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-large-tweetqa-qag-np/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 1 | null | Access to model aazurita/selfv2 is restricted and you are not in the authorized list. Visit https://huggingface.co/aazurita/selfv2 to ask for access. |
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_1e-05-ep_20-seq_128_bs-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_1e-05-ep_20-seq_128_bs-32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7787
- Bleu: 0.0338
- Meteor: 0.1312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
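These map onto `Seq2SeqTrainingArguments` roughly as follows (a sketch of the presumed setup, not the original training script; the Adam betas and epsilon above are simply the optimizer defaults):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="nmt-mpst-id-en-lr_1e-05-ep_20-seq_128_bs-32",
    learning_rate=1e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
)
```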
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 202 | 3.1965 | 0.0132 | 0.0696 |
| No log | 2.0 | 404 | 3.0644 | 0.0224 | 0.0975 |
| 3.5509 | 3.0 | 606 | 2.9995 | 0.0255 | 0.1075 |
| 3.5509 | 4.0 | 808 | 2.9538 | 0.0269 | 0.1106 |
| 3.2374 | 5.0 | 1010 | 2.9221 | 0.0277 | 0.1134 |
| 3.2374 | 6.0 | 1212 | 2.8996 | 0.0286 | 0.1165 |
| 3.2374 | 7.0 | 1414 | 2.8750 | 0.0291 | 0.1177 |
| 3.143 | 8.0 | 1616 | 2.8611 | 0.0297 | 0.1197 |
| 3.143 | 9.0 | 1818 | 2.8466 | 0.0303 | 0.1209 |
| 3.092 | 10.0 | 2020 | 2.8330 | 0.0312 | 0.1229 |
| 3.092 | 11.0 | 2222 | 2.8234 | 0.0318 | 0.1247 |
| 3.092 | 12.0 | 2424 | 2.8130 | 0.0322 | 0.1264 |
| 3.0511 | 13.0 | 2626 | 2.8058 | 0.0323 | 0.1269 |
| 3.0511 | 14.0 | 2828 | 2.7970 | 0.0324 | 0.1272 |
| 3.0288 | 15.0 | 3030 | 2.7914 | 0.033 | 0.1288 |
| 3.0288 | 16.0 | 3232 | 2.7877 | 0.0331 | 0.1299 |
| 3.0288 | 17.0 | 3434 | 2.7837 | 0.0333 | 0.1302 |
| 3.0133 | 18.0 | 3636 | 2.7809 | 0.0336 | 0.1308 |
| 3.0133 | 19.0 | 3838 | 2.7792 | 0.0337 | 0.131 |
| 3.0028 | 20.0 | 4040 | 2.7787 | 0.0338 | 0.1312 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
AnonymousSub/rule_based_roberta_twostagetriplet_epochs_1_shard_1_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: nmt-mpst-id-en-lr_0.0001-ep_20-seq_128_bs-32
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nmt-mpst-id-en-lr_0.0001-ep_20-seq_128_bs-32
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9722
- Bleu: 0.1118
- Meteor: 0.2641
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
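For inference, a minimal sketch. The `id-en` in the model name suggests Indonesian-to-English translation, so that direction is assumed here; the checkpoint path is a placeholder, and whether the fine-tune expects a T5-style task prefix is not documented in this card:
```python
from transformers import pipeline

# Placeholder path: substitute the actual repo id or local checkpoint directory.
translator = pipeline("text2text-generation", model="path/to/nmt-mpst-id-en-lr_0.0001-ep_20-seq_128_bs-32")
result = translator("Saya suka membaca buku.", max_length=128)
print(result[0]["generated_text"])
```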
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Meteor |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| No log | 1.0 | 202 | 2.8076 | 0.0316 | 0.1256 |
| No log | 2.0 | 404 | 2.6213 | 0.0427 | 0.1545 |
| 3.0404 | 3.0 | 606 | 2.4851 | 0.0532 | 0.1754 |
| 3.0404 | 4.0 | 808 | 2.3880 | 0.0605 | 0.1894 |
| 2.5973 | 5.0 | 1010 | 2.3137 | 0.0685 | 0.2014 |
| 2.5973 | 6.0 | 1212 | 2.2489 | 0.0729 | 0.2084 |
| 2.5973 | 7.0 | 1414 | 2.1949 | 0.0798 | 0.2199 |
| 2.3553 | 8.0 | 1616 | 2.1503 | 0.0854 | 0.227 |
| 2.3553 | 9.0 | 1818 | 2.1173 | 0.0915 | 0.2357 |
| 2.2044 | 10.0 | 2020 | 2.0854 | 0.0938 | 0.2397 |
| 2.2044 | 11.0 | 2222 | 2.0586 | 0.0974 | 0.2442 |
| 2.2044 | 12.0 | 2424 | 2.0418 | 0.1007 | 0.2491 |
| 2.0911 | 13.0 | 2626 | 2.0239 | 0.1033 | 0.2528 |
| 2.0911 | 14.0 | 2828 | 2.0071 | 0.105 | 0.255 |
| 2.0255 | 15.0 | 3030 | 1.9955 | 0.1068 | 0.2576 |
| 2.0255 | 16.0 | 3232 | 1.9913 | 0.1089 | 0.2609 |
| 2.0255 | 17.0 | 3434 | 1.9774 | 0.1099 | 0.2605 |
| 1.9777 | 18.0 | 3636 | 1.9789 | 0.1114 | 0.2638 |
| 1.9777 | 19.0 | 3838 | 1.9734 | 0.1116 | 0.2638 |
| 1.9505 | 20.0 | 4040 | 1.9722 | 0.1118 | 0.2641 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|