modelId (string, 4-81 chars) | tags (list) | pipeline_tag (string, 17 classes) | config (dict) | downloads (int64, 0-59.7M) | first_commit (timestamp[ns, tz=UTC]) | card (string, 51-438k chars)
---|---|---|---|---|---|---|
Al/mymodel | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# lambdaofgod/query-readme-nbow-nbow-mnrl
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 200-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('lambdaofgod/query-readme-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```
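Since the card mentions semantic search, a small follow-up sketch (the query and candidate texts below are made-up examples) compares the embeddings with cosine similarity via `sentence_transformers.util`:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('lambdaofgod/query-readme-nbow-nbow-mnrl')

# Hypothetical query and candidate README snippets -- any strings work the same way.
query_embedding = model.encode("graph neural network library", convert_to_tensor=True)
doc_embeddings = model.encode(
    ["A library for building graph neural networks.",
     "Command-line tool for resizing images."],
    convert_to_tensor=True,
)

# Cosine similarity between the query and each candidate.
scores = util.cos_sim(query_embedding, doc_embeddings)
print(scores)
```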
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/query-readme-nbow-nbow-mnrl)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(4395, 200)
)
(1): WordWeights(
(emb_layer): Embedding(4395, 1)
)
(2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AlErysvi/Erys | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# lambdaofgod/document-readme-nbow-nbow-mnrl
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 200-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('lambdaofgod/document-readme-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/document-readme-nbow-nbow-mnrl)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(53559, 200)
)
(1): WordWeights(
(emb_layer): Embedding(53559, 1)
)
(2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AlanDev/DallEMiniButBetter | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# lambdaofgod/document-titles-nbow-nbow-mnrl
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 200-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('lambdaofgod/document-titles-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/document-titles-nbow-nbow-mnrl)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(53559, 200)
)
(1): WordWeights(
(emb_layer): Embedding(53559, 1)
)
(2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AlanDev/dall-e-better | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# lambdaofgod/query-titles_dependencies-nbow-nbow-mnrl
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 200-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('lambdaofgod/query-titles_dependencies-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/query-titles_dependencies-nbow-nbow-mnrl)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(4395, 200)
)
(1): WordWeights(
(emb_layer): Embedding(4395, 1)
)
(2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AlanDev/test | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# lambdaofgod/document-titles_dependencies-nbow-nbow-mnrl
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 200-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('lambdaofgod/document-titles_dependencies-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/document-titles_dependencies-nbow-nbow-mnrl)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(53559, 200)
)
(1): WordWeights(
(emb_layer): Embedding(53559, 1)
)
(2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AlbertHSU/BertTEST | [
"pytorch"
]
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# lambdaofgod/query-readme_dependencies-nbow-nbow-mnrl
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 200-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('lambdaofgod/query-readme_dependencies-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/query-readme_dependencies-nbow-nbow-mnrl)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(4395, 200)
)
(1): WordWeights(
(emb_layer): Embedding(4395, 1)
)
(2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
AlbertHSU/ChineseFoodBert | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 15 | 2023-01-06T11:22:31Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# lambdaofgod/document-readme_dependencies-nbow-nbow-mnrl
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 200-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('lambdaofgod/document-readme_dependencies-nbow-nbow-mnrl')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=lambdaofgod/document-readme_dependencies-nbow-nbow-mnrl)
## Full Model Architecture
```
SentenceTransformer(
(0): WordEmbeddings(
(emb_layer): Embedding(53559, 200)
)
(1): WordWeights(
(emb_layer): Embedding(53559, 1)
)
(2): Pooling({'word_embedding_dimension': 200, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Aleksandar/bert-srb-base-cased-oscar | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: creativeml-openrail-m
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
inference: true
extra_gated_prompt: |-
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. CompVis claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
---
<br>
## This model was created by TheBestJammer and [originally released on CivitAI](https://civitai.com/models/3758/hasdx)
<br>
## I'm merely hosting it here for convenience's sake, with the permission of the original author, because CivitAI doesn't allow posting the diffusers format.
<br>
[![Example][1]][1]
[1]: https://i.imgur.com/DzSjkRa.jpg |
Aleksandar/bert-srb-ner-setimes-lr | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: dietercoppens/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play π
|
Aleksandar/distilbert-srb-ner-setimes | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
widget:
- text: a photo of sssimba cat in the Acropolis
---
# DreamBooth model for the sssimba concept trained by Thabet on the Thabet/Simba_dataset dataset.
This is a Stable Diffusion model fine-tuned on the sssimba concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of sssimba cat**
This model was created as part of the DreamBooth Hackathon π₯. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `cat` images for the animal theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Thabet/sssimba-cat')
image = pipeline("a photo of sssimba cat").images[0]  # a prompt is required; this is the instance prompt from the card
image
```
|
Aleksandar/electra-srb-ner-setimes-lr | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-06T12:00:41Z | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- food
widget:
- text: a cute bunny, mazapan
example_title: "Bunny"
- text: a cute robot made of mazapan
example_title: "Robot"
- text: a photograph of a cute dog, mazapan
example_title: "Dog"
datasets:
- kokuma/figuritas-de-mazapan
---
# DreamBooth model for the `mazapan` concept trained by kokuma on the `kokuma/figuritas-de-mazapan` dataset.
This is a Stable Diffusion model fine-tuned on the `mazapan` concept with DreamBooth for the food theme.\
This model was created as part of the DreamBooth Hackathon π₯. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
#### Prompts
- **a cute X, mazapan**: `a cute bunny, mazapan`
- **a cute X made of mazapan**: `a cute robot made of mazapan`
- **a photograph of a cute X, mazapan**: `a photograph of a cute dog, mazapan`
#### Suggested parameters
- **CFG scale**: Between 6 and 8
- **Samplers**: Euler a, Euler, DPM2 a, DPM++ SDE, DPM fast, DPM adaptive, DPM2 a Karras
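In diffusers terms, the CFG scale corresponds to the `guidance_scale` argument, and "Euler a" corresponds to the Euler-ancestral scheduler; a rough sketch applying these suggestions (scheduler and scale picked from the lists above) might look like:
```python
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler

pipeline = StableDiffusionPipeline.from_pretrained('kokuma/mazapan')
# "Euler a" maps to the Euler-ancestral scheduler in diffusers.
pipeline.scheduler = EulerAncestralDiscreteScheduler.from_config(pipeline.scheduler.config)

# CFG scale between 6 and 8, as suggested above.
image = pipeline("a cute bunny, mazapan", guidance_scale=7.0).images[0]
image
```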
## Examples
| a cute dog, mazapan | a cute sparrow, mazapan | a cute bear, mazapan |
| -- | -- | -- |
|  |  |  |
| a cute koala, mazapan | a cute robot made of mazapan | a cute fox, mazapan |
| -- | -- | -- |
|  |  |  |
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('kokuma/mazapan')
image = pipeline("a cute bunny, mazapan").images[0]  # a prompt is required; example prompt from the card
image
``` |
Aleksandar/electra-srb-ner-setimes | [
"pytorch",
"electra",
"token-classification",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"ElectraForTokenClassification"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2023-01-06T12:02:06Z | ---
license: creativeml-openrail-m
language:
- en
thumbnail: "https://huggingface.co/Norod78/sd21-hearthstone-cards/resolve/main/sample_images/00005-166904889-Snoop%20Dogg%20music%20power%20Hearthstone%20card.png"
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-diffusers
datasets:
- Norod78/hearthstone-cards-512
inference: true
widget:
- text: 3 Cute dog, Fluff. Hearthstone card
- text: Gal Gadot Super Wonderwoman power. Hearthstone card
- text: Cute Pikachu Pokemon Electricity buzzzz Hearthstone card
- text: 4 Snoop Dogg music power Hearthstone card
library_name: diffusers
pipeline_tag: text-to-image
---
# SDv2.1 sd21-hearthstone-cards model
### Stable-Diffusion v2.1 fine-tuned for 10k steps using [Huggingface Diffusers train_text_to_image script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) upon [Norod78/hearthstone-cards-512](https://huggingface.co/datasets/Norod78/hearthstone-cards-512)
# Stable-Diffusion Hearthstone card generator. The first digit in the prompt controls the mana cost (fairly reliably), followed by the card name, then the special ability and description, then "Hearthstone card".

## A few sample pictures generated with this model are available [here](https://huggingface.co/Norod78/sd21-hearthstone-cards/tree/main/sample_images)
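Since the YAML header declares `library_name: diffusers`, a rough usage sketch (assuming the standard `StableDiffusionPipeline` API; the prompt is one of the widget examples above) is:
```python
import torch
from diffusers import StableDiffusionPipeline

pipeline = StableDiffusionPipeline.from_pretrained(
    "Norod78/sd21-hearthstone-cards", torch_dtype=torch.float16
).to("cuda")

# First digit = mana cost, then card name, ability/description, then "Hearthstone card".
image = pipeline("4 Snoop Dogg music power Hearthstone card").images[0]
image.save("card.png")
```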
Please note that the entire training set contains actual Hearthstone card images, which are copyrighted by Blizzard.
It is therefore possible that the generated images contain copyrighted elements, and they should only be used for your private entertainment.
Trained by [@Norod78](https://twitter.com/Norod78) |
Aleksandar/electra-srb-oscar | [
"pytorch",
"electra",
"fill-mask",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"ElectraForMaskedLM"
],
"model_type": "electra",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 231.31 +/- 14.85
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
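The snippet above is left as a TODO; a minimal sketch of loading such a checkpoint with `huggingface_sb3` (the repo id and filename below are placeholders, not taken from this card) could look like:
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder identifiers -- substitute the actual repository and zip file name.
checkpoint = load_from_hub(repo_id="user/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```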
|
Aleksandar1932/gpt2-country | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### mpid-hassanblend-better-train Dreambooth model trained by tftgregrge with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Aleksandar1932/gpt2-hip-hop | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-qqp-custom-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-qqp-custom-tokenizer
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 5.7811
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
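Expressed as Hugging Face `TrainingArguments`, these settings correspond roughly to the following sketch (the output directory is illustrative; Adam betas and epsilon are the library defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tiny-mlm-glue-qqp-custom-tokenizer",  # illustrative output path
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant",
    num_train_epochs=200,
)
```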
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 7.4263 | 0.4 | 500 | 6.7180 |
| 6.4992 | 0.8 | 1000 | 6.6456 |
| 6.2737 | 1.2 | 1500 | 6.4546 |
| 5.9994 | 1.6 | 2000 | 6.2448 |
| 5.875 | 2.0 | 2500 | 6.2319 |
| 5.7667 | 2.4 | 3000 | 6.1561 |
| 5.7425 | 2.8 | 3500 | 6.2058 |
| 5.753 | 3.2 | 4000 | 6.0921 |
| 5.5982 | 3.6 | 4500 | 6.1794 |
| 5.6196 | 4.0 | 5000 | 6.1381 |
| 5.512 | 4.4 | 5500 | 6.0225 |
| 5.5096 | 4.8 | 6000 | 6.0408 |
| 5.4474 | 5.2 | 6500 | 5.8967 |
| 5.3589 | 5.6 | 7000 | 5.9714 |
| 5.329 | 6.0 | 7500 | 5.9004 |
| 5.2965 | 6.4 | 8000 | 5.8087 |
| 5.2853 | 6.8 | 8500 | 5.8612 |
| 5.2446 | 7.2 | 9000 | 5.8007 |
| 5.0895 | 7.6 | 9500 | 5.7173 |
| 5.1699 | 8.0 | 10000 | 5.8139 |
| 5.0603 | 8.4 | 10500 | 5.6959 |
| 5.0748 | 8.8 | 11000 | 5.7078 |
| 5.0742 | 9.2 | 11500 | 5.7509 |
| 4.955 | 9.6 | 12000 | 5.7811 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
Aleksandar1932/gpt2-pop | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixel_Copter_check
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 33.50 +/- 26.95
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Aleksandar1932/gpt2-soul | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain π€"
datasets:
- Eip/autotrain-data-real-vs-fake-news
co2_eq_emissions:
emissions: 2.0552688377356976
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 2757281767
- CO2 Emissions (in grams): 2.0553
## Validation Metrics
- Loss: 0.002
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Eip/autotrain-real-vs-fake-news-2757281767
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Eip/autotrain-real-vs-fake-news-2757281767", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Eip/autotrain-real-vs-fake-news-2757281767", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
AlekseyKorshuk/horror-scripts | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 19 | null | ---
license: mit
datasets:
- xnli
language:
- ar
metrics:
- accuracy
pipeline_tag: zero-shot-classification
---
# XLM-ROBERTA-BASE-XNLI-AR
## Model description
This model takes the XLM-Roberta-base model, which was further pre-trained on a large corpus of Twitter data in multiple languages.
It was developed following a strategy similar to the one introduced as part of the [Tweet Eval](https://github.com/cardiffnlp/tweeteval) framework.
The model is then fine-tuned on the Arabic portion of the XNLI training dataset.
## Intended Usage
This model was developed for zero-shot text classification in the realm of hate-speech detection. It is focused on Arabic, as it was fine-tuned on data in that language. Since the base model was pre-trained on 100 different languages, it has shown some effectiveness in other languages as well. Please refer to the list of languages in the [XLM Roberta paper](https://arxiv.org/abs/1911.02116).
### Usage with Zero-Shot Classification pipeline
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="morit/arabic_xlm_xnli")
```
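Continuing the snippet above, the classifier is called with a text and a set of candidate labels (the text and labels here are illustrative, not from the card):
```python
# Hypothetical example -- labels chosen for a hate-speech detection setting.
result = classifier(
    "Example tweet text to classify",
    candidate_labels=["hate speech", "not hate speech"],
)
print(result["labels"], result["scores"])
```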
## Training
This model was pre-trained on a set of 100 languages and then further trained on 198M multilingual tweets, as described in the original [paper](https://arxiv.org/abs/2104.12250). It was subsequently trained on the Arabic training set of the XNLI dataset, which is a machine-translated version of the MNLI dataset. It was trained for 5 epochs on the XNLI train set and evaluated on the XNLI eval set at the end of every epoch to find the best-performing model; the model with the highest accuracy on the eval set was chosen at the end.

- learning rate: 2e-5
- batch size: 32
- max sequence: length 128
Training was run on a single GPU (NVIDIA GeForce RTX 3090), resulting in a training time of 1h 47min.
## Evaluation
The best-performing model was evaluated on the XNLI test set to get a comparable result:
```
predict_accuracy = 74.19 %
``` |
AlekseyKulnevich/Pegasus-HeaderGeneration | [
"pytorch",
"pegasus",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"PegasusForConditionalGeneration"
],
"model_type": "pegasus",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="yahia-ferchichi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
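A short sketch of rolling out the greedy policy from the downloaded Q-table, assuming the saved dictionary exposes the table under a `"qtable"` key (as in the Deep RL course notebooks) and the classic `gym` step API:
```python
import pickle
import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Equivalent of the course's load_from_hub helper: download and unpickle the model dict.
path = hf_hub_download(repo_id="yahia-ferchichi/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"], is_slippery=False)
state = env.reset()  # classic gym API; gymnasium returns (obs, info) instead
done = False
while not done:
    action = np.argmax(model["qtable"][state])  # greedy action from the Q-table
    state, reward, done, info = env.step(action)  # classic 4-tuple step API
```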
|
AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru | [
"pytorch",
"xlm-roberta",
"question-answering",
"en",
"ru",
"multilingual",
"arxiv:1912.09723",
"transformers",
"license:apache-2.0",
"autotrain_compatible",
"has_space"
]
| question-answering | {
"architectures": [
"XLMRobertaForQuestionAnswering"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10,012 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="steffel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AlexMaclean/sentence-compression-roberta | [
"pytorch",
"roberta",
"token-classification",
"transformers",
"generated_from_trainer",
"license:mit",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 13 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-glue-rte-custom-tokenizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-rte-custom-tokenizer
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 7.3646
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.71 | 1.6 | 500 | 7.1503 |
| 6.8618 | 3.21 | 1000 | 7.2787 |
| 6.816 | 4.81 | 1500 | 7.2543 |
| 6.7094 | 6.41 | 2000 | 7.3646 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
AliReza/distilbert-emotion | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: tiny-mlm-squad-plain_text
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-squad-plain_text
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0170
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.4628 | 0.4 | 500 | 3.9931 |
| 4.0687 | 0.8 | 1000 | 3.9571 |
| 3.9256 | 1.2 | 1500 | 3.9381 |
| 3.7901 | 1.6 | 2000 | 3.9680 |
| 3.715 | 2.0 | 2500 | 3.9487 |
| 3.6632 | 2.4 | 3000 | 4.0170 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
Alireza1044/albert-base-v2-cola | [
"pytorch",
"tensorboard",
"albert",
"text-classification",
"en",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0"
]
| text-classification | {
"architectures": [
"AlbertForSequenceClassification"
],
"model_type": "albert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 32 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-xsum-12-3-whole_summary_chatGPT_and_tweetsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-xsum-12-3-whole_summary_chatGPT_and_tweetsum
This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-3](https://huggingface.co/sshleifer/distilbart-xsum-12-3) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7952
- Rouge1: 45.7353
- Rouge2: 29.1566
- Rougel: 45.8429
- Rougelsum: 45.7353
- Gen Len: 16.6
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 397 | 2.8069 | 42.233 | 23.7538 | 39.2701 | 39.2701 | 17.0 |
| 2.8673 | 2.0 | 794 | 2.7736 | 48.2389 | 29.6927 | 43.5004 | 43.5004 | 17.4 |
| 1.8043 | 3.0 | 1191 | 2.7952 | 45.7353 | 29.1566 | 45.8429 | 45.7353 | 16.6 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.12.1
|
Anamika/autonlp-Feedback1-479512837 | [
"pytorch",
"xlm-roberta",
"text-classification",
"unk",
"dataset:Anamika/autonlp-data-Feedback1",
"transformers",
"autonlp",
"co2_eq_emissions"
]
| text-classification | {
"architectures": [
"XLMRobertaForSequenceClassification"
],
"model_type": "xlm-roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 34 | 2023-01-06T15:30:06Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="vjkrish/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Andrija/SRoBERTa-F | [
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"hr",
"sr",
"multilingual",
"dataset:oscar",
"dataset:srwac",
"dataset:leipzig",
"dataset:cc100",
"dataset:hrwac",
"transformers",
"masked-lm",
"license:apache-2.0",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 59 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: my_awesome_billsum_model
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1417
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_billsum_model
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5537
- Rouge1: 0.1417
- Rouge2: 0.0517
- Rougel: 0.1173
- Rougelsum: 0.1172
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.7255 | 0.1315 | 0.0434 | 0.1091 | 0.109 | 19.0 |
| No log | 2.0 | 124 | 2.6129 | 0.1351 | 0.0458 | 0.1121 | 0.112 | 19.0 |
| No log | 3.0 | 186 | 2.5659 | 0.1402 | 0.0498 | 0.1161 | 0.1161 | 19.0 |
| No log | 4.0 | 248 | 2.5537 | 0.1417 | 0.0517 | 0.1173 | 0.1172 | 19.0 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AndyJ/prompt_finetune | [
"pytorch",
"bert",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"BertForMaskedLM"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AndyyyCai/bert-base-uncased-finetuned-copa | [
"pytorch",
"bert",
"multiple-choice",
"transformers"
]
| multiple-choice | {
"architectures": [
"BertForMultipleChoice"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.15 +/- 20.41
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
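A minimal loading-and-evaluation sketch of what that code could look like (the repo id and filename below are placeholders, not taken from this card):

```python
import gym

from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub (repo id and filename are hypothetical).
checkpoint = load_from_hub(
    repo_id="username/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```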
|
Anirbanbhk/Hate-speech-Pretrained-movies | [
"tf",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 20 | null | ---
language:
- vi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: Hieu Dam Model Shuffle
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Hieu Dam Model Shuffle
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the "Dataset by HieuDam" dataset.
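As a rough usage sketch, a fine-tuned Whisper checkpoint can be run through the speech-recognition pipeline (the repo id below is a placeholder, since this card does not state one):

```python
from transformers import pipeline

# Hypothetical repo id; substitute the checkpoint this card describes.
asr = pipeline("automatic-speech-recognition", model="username/whisper-small-vi-shuffle")

result = asr("sample.wav")  # path to a 16 kHz mono audio file
print(result["text"])
```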
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 450
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AnonymousNLP/pretrained-model-2 | [
"pytorch",
"gpt2",
"transformers"
]
| null | {
"architectures": [
"GPT2DoubleHeadsModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-v2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="mmontecino/Taxi-v3-v2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
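For a fuller rollout, the sketch below downloads the same artifact with `huggingface_hub` and plays one greedy episode. It assumes the artifact is a plain pickled dict whose Q-table sits under a `qtable` key (the course-notebook convention, not stated in this card) and uses the classic Gym step/reset API:

```python
import pickle

import gym
import numpy as np
from huggingface_hub import hf_hub_download

# Fetch the pickled artifact directly (equivalent to the course's load_from_hub helper).
path = hf_hub_download(repo_id="mmontecino/Taxi-v3-v2", filename="q-learning.pkl")
with open(path, "rb") as f:
    model = pickle.load(f)

env = gym.make(model["env_id"])
qtable = model["qtable"]  # key name assumed from the course notebook

state = env.reset()  # classic Gym API (pre-0.26): reset returns only the observation
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(qtable[state]))  # greedy action from the Q-table
    state, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```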
|
AnonymousSub/AR_EManuals-RoBERTa | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 572.00 +/- 146.55
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga OliP -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga OliP -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga OliP
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
AnonymousSub/AR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-dpv-finetuned-WITH-AUGMENTATION-LOWER-LR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-dpv-finetuned-WITH-AUGMENTATION-LOWER-LR
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5717
- Wer: 34.5241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6221 | 0.62 | 1000 | 0.5345 | 35.9711 |
| 0.4318 | 1.25 | 2000 | 0.5271 | 34.9537 |
| 0.3859 | 1.87 | 3000 | 0.5338 | 34.3658 |
| 0.3005 | 2.49 | 4000 | 0.5532 | 34.8858 |
| 0.2444 | 3.12 | 5000 | 0.5628 | 33.7102 |
| 0.315 | 3.74 | 6000 | 0.5717 | 34.5241 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
AnonymousSub/AR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.09 +/- 14.86
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
AnonymousSub/EManuals_BERT_copy | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
AnonymousSub/EManuals_BERT_copy_wikiqa | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"BertForSequenceClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 29 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: manuelblp/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
AnonymousSub/SR_consert | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-explore
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.83
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="EduardoCGarridoMerchan/Taxi-v3-explore", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AnonymousSub/SR_declutr | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2315
- Accuracy: 0.926
- F1: 0.9260
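A minimal inference sketch (the repo id below is a placeholder, since this card does not name the uploaded checkpoint):

```python
from transformers import pipeline

# Hypothetical repo id; replace with the actual fine-tuned checkpoint.
classifier = pipeline("text-classification", model="username/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled the training finally converged!"))
```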
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8794 | 1.0 | 250 | 0.3392 | 0.8985 | 0.8948 |
| 0.2663 | 2.0 | 500 | 0.2315 | 0.926 | 0.9260 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu117
- Datasets 1.16.1
- Tokenizers 0.13.2
|
AnonymousSub/SR_rule_based_only_classfn_twostage_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2023-01-06T21:10:04Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 615.00 +/- 204.69
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga codeSpaghetti -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga codeSpaghetti -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga codeSpaghetti
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
AnonymousSub/SR_rule_based_roberta_bert_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 282.18 +/- 13.31
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | |
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_1_wikiqa_copy | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-explore_more_slow
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="EduardoCGarridoMerchan/Taxi-v3-explore_more_slow", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
AnonymousSub/SR_rule_based_roberta_only_classfn_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 3 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.49 +/- 15.74
name: mean_reward
verified: false
---
# **ppo** Agent playing **LunarLander-v2**
This is a trained model of a **ppo** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_10 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
tags:
- Freeway-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Freeway-v5
type: Freeway-v5
metrics:
- type: mean_reward
value: 33.70 +/- 0.46
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Freeway-v5**
This is a trained model of a PPO agent playing Freeway-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]"
python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id Freeway-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Freeway-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py
curl -OL https://huggingface.co/cleanrl/Freeway-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Freeway-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id Freeway-v5 --seed 1
```
# Hyperparameters
```python
{'anneal_lr': True,
'async_batch_size': 16,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Freeway-v5',
'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado',
'gae': True,
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1024,
'norm_adv': True,
'num_envs': 64,
'num_minibatches': 2,
'num_steps': 32,
'num_updates': 24414,
'save_model': True,
'seed': 1,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 2,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'envpool-atari'}
```
|
AnonymousSub/SR_rule_based_roberta_twostage_quadruplet_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-06T21:59:28Z | ---
license: gpl-3.0
tags:
- object-detection
- computer-vision
- yolov6
- yolo
datasets:
- detection-datasets/coco
---
### Model Description
[YOLOv6:](https://arxiv.org/abs/2209.02976) A single-stage object detection framework dedicated to industrial applications.
[YOLOv6 v3.0](https://arxiv.org/abs/2301.05586): A Full-Scale Reloading
[YOLOv6-Pip: Packaged version of the Yolov6 repository](https://github.com/kadirnar/yolov6-pip/)
[Paper Repo: Implementation of paper - YOLOv6](https://github.com/meituan/YOLOv6/)
### Installation
```
pip install yolov6detect
```
### Yolov6 Inference
```python
from yolov6 import YOLOV6
model = YOLOV6(weights='kadirnar/yolov6m6-v3.0', device='cuda:0', hf_model=True)
model.classes = None
model.conf = 0.25
model.iou = 0.45
model.show = False
model.save = True
pred = model.predict(source='data/images',yaml='data/coco.yaml', img_size=640)
```
### BibTeX Entry and Citation Info
```
@article{li2022yolov6,
title={YOLOv6: A single-stage object detection framework for industrial applications},
author={Li, Chuyi and Li, Lulu and Jiang, Hongliang and Weng, Kaiheng and Geng, Yifei and Li, Liang and Ke, Zaidan and Li, Qingyuan and Cheng, Meng and Nie, Weiqiang and others},
journal={arXiv preprint arXiv:2209.02976},
year={2022}
}
``` |
AnonymousSub/SR_rule_based_twostagetriplet_hier_epochs_1_shard_1 | [
"pytorch",
"bert",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"BertModel"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | null | ---
license: gpl-3.0
tags:
- object-detection
- computer-vision
- yolov6
- yolo
datasets:
- detection-datasets/coco
---
### Model Description
[YOLOv6:](https://arxiv.org/abs/2209.02976) A single-stage object detection framework dedicated to industrial applications.
[YOLOv6 v3.0](https://arxiv.org/abs/2301.05586): A Full-Scale Reloading
[YOLOv6-Pip: Packaged version of the Yolov6 repository](https://github.com/kadirnar/yolov6-pip/)
[Paper Repo: Implementation of paper - YOLOv6](https://github.com/meituan/YOLOv6/)
### Installation
```
pip install yolov6detect
```
### Yolov6 Inference
```python
from yolov6 import YOLOV6
model = YOLOV6(weights='kadirnar/yolov6l-v3.0', device='cuda:0', hf_model=True)
model.classes = None
model.conf = 0.25
model.iou = 0.45
model.show = False
model.save = True
pred = model.predict(source='data/images',yaml='data/coco.yaml', img_size=640)
```
### BibTeX Entry and Citation Info
```
@article{li2022yolov6,
title={YOLOv6: A single-stage object detection framework for industrial applications},
author={Li, Chuyi and Li, Lulu and Jiang, Hongliang and Weng, Kaiheng and Geng, Yifei and Li, Liang and Ke, Zaidan and Li, Qingyuan and Cheng, Meng and Nie, Weiqiang and others},
journal={arXiv preprint arXiv:2209.02976},
year={2022}
}
``` |
AnonymousSub/cline-papers-biomed-0.618 | [
"pytorch",
"roberta",
"transformers"
]
| null | {
"architectures": [
"LecbertForPreTraining"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2023-01-06T22:24:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_eli5_clm-model2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_eli5_clm-model2
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7470
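For reference, a quick generation sketch (the repo id is a placeholder):

```python
from transformers import pipeline

# Hypothetical repo id for the fine-tuned checkpoint.
generator = pipeline("text-generation", model="username/my_awesome_eli5_clm-model2")
prompt = "Somatic hypermutation allows the immune system to"
print(generator(prompt, max_new_tokens=50, do_sample=True)[0]["generated_text"])
```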
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.8701 | 1.0 | 1055 | 3.7642 |
| 3.7747 | 2.0 | 2110 | 3.7501 |
| 3.7318 | 3.0 | 3165 | 3.7470 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
AnonymousSub/cline-s10-SR | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: vit-base-patch16-224-in21k_lung_and_colon_cancer
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9994
language:
- en
pipeline_tag: image-classification
---
# vit-base-patch16-224-in21k_lung_and_colon_cancer
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k).
It achieves the following results on the evaluation set:
- Loss: 0.0016
- Accuracy: 0.9994
- Weighted f1: 0.9994
- Micro f1: 0.9994
- Macro f1: 0.9994
- Weighted recall: 0.9994
- Micro recall: 0.9994
- Macro recall: 0.9994
- Weighted precision: 0.9994
- Micro precision: 0.9994
- Macro precision: 0.9994
## Model description
This is a multiclass image classification model of lung and colon cancers.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Multiclass%20Classification/Lung%20%26%20Colon%20Cancer/Lung_and_colon_cancer_ViT.ipynb
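A short inference sketch with the image-classification pipeline (the repo id is a placeholder):

```python
from transformers import pipeline

# Hypothetical repo id; point this at the uploaded fine-tuned checkpoint.
classifier = pipeline(
    "image-classification",
    model="username/vit-base-patch16-224-in21k_lung_and_colon_cancer",
)
print(classifier("histopathology_tile.png", top_k=3))
```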
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/andrewmvd/lung-and-colon-cancer-histopathological-images
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Micro f1 | Macro f1 | Weighted recall | Micro recall | Macro recall | Weighted precision | Micro precision | Macro precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.0574 | 1.0 | 1250 | 0.0410 | 0.9864 | 0.9864 | 0.9864 | 0.9865 | 0.9864 | 0.9864 | 0.9864 | 0.9872 | 0.9864 | 0.9875 |
| 0.0031 | 2.0 | 2500 | 0.0105 | 0.9972 | 0.9972 | 0.9972 | 0.9972 | 0.9972 | 0.9972 | 0.9973 | 0.9972 | 0.9972 | 0.9972 |
| 0.0007 | 3.0 | 3750 | 0.0016 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 | 0.9994 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1
- Datasets 2.5.2
- Tokenizers 0.12.1 |
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_squad2.0 | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 2 | 2023-01-07T02:01:18Z | ---
language:
- vi
tags:
- hf-asr-leaderboard
- generated_from_trainer
model-index:
- name: HuyenNguyen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HuyenNguyen
This model is a fine-tuned version of [Huyen2310/Vi-gec](https://huggingface.co/Huyen2310/Vi-gec) on the FPT dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AnonymousSub/rule_based_roberta_only_classfn_twostage_epochs_1_shard_1_wikiqa | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 24 | 2023-01-07T02:01:22Z | ---
language:
- vi
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
model-index:
- name: HuyenNguyen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HuyenNguyen
This model is a fine-tuned version of [Huyen2310/FPT-S15000](https://huggingface.co/Huyen2310/FPT-S15000) on the Common Voice 11.0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1 | [
"pytorch",
"roberta",
"feature-extraction",
"transformers"
]
| feature-extraction | {
"architectures": [
"RobertaModel"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | 2023-01-07T02:05:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: vit-base-patch16-224-in21k_car_or_motorcycle
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.99375
language:
- en
pipeline_tag: image-classification
---
# vit-base-patch16-224-in21k_car_or_motorcycle
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0301
- Accuracy: 0.9938
- Weighted f1: 0.9939
- Weighted recall: 0.9927
- Weighted precision: 0.9951
## Model description
This is a binary classification model to distinguish between images of cars and images of motorcycles.
For more information on how it was created, check out the following link: https://github.com/DunnBC22/Vision_Audio_and_Multimodal_Projects/blob/main/Computer%20Vision/Image%20Classification/Binary%20Classification/Car%20or%20Motorcycle/Car_or_Motorcycle_ViT.ipynb
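A lower-level inference sketch that loads the feature extractor and model directly (the repo id is again a placeholder; `AutoFeatureExtractor` matches the Transformers 4.22 version listed below):

```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

repo = "username/vit-base-patch16-224-in21k_car_or_motorcycle"  # hypothetical repo id
extractor = AutoFeatureExtractor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("vehicle.jpg")
inputs = extractor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted = logits.argmax(-1).item()
print(model.config.id2label[predicted])
```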
## Intended uses & limitations
This model is intended to demonstrate my ability to solve a complex problem using technology.
## Training and evaluation data
Dataset Source: https://www.kaggle.com/datasets/utkarshsaxenadn/car-vs-bike-classification-dataset
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted f1 | Weighted recall | Weighted precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:---------------:|:------------------:|
| 0.6908 | 1.0 | 200 | 0.0372 | 0.99 | 0.9902 | 0.9902 | 0.9902 |
| 0.6908 | 2.0 | 400 | 0.0301 | 0.9938 | 0.9939 | 0.9927 | 0.9951 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1
- Datasets 2.5.2
- Tokenizers 0.12.1 |
Anthos23/FS-distilroberta-fine-tuned | [
"pytorch",
"roberta",
"text-classification",
"transformers",
"has_space"
]
| text-classification | {
"architectures": [
"RobertaForSequenceClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 33 | 2023-01-07T02:28:47Z | ---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum-4
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4165
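A brief summarization sketch (the repo id is a placeholder):

```python
from transformers import pipeline

# Hypothetical repo id for the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="username/pegasus-samsum-4")
dialogue = (
    "Anna: Are we still on for lunch?\n"
    "Ben: Yes, 12:30 at the usual place.\n"
    "Anna: Perfect, see you there."
)
print(summarizer(dialogue, max_length=40, min_length=5)[0]["summary_text"])
```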
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6715 | 0.27 | 500 | 1.5317 |
| 1.7387 | 0.54 | 1000 | 1.4421 |
| 1.641 | 0.81 | 1500 | 1.4165 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Anubhav23/model_name | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-07T02:34:47Z | ---
tags:
- Frostbite-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Frostbite-v5
type: Frostbite-v5
metrics:
- type: mean_reward
value: 270.00 +/- 0.00
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Frostbite-v5**
This is a trained model of a PPO agent playing Frostbite-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]"
python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id Frostbite-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Frostbite-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id Frostbite-v5 --seed 1
```
# Hyperparameters
```python
{'anneal_lr': True,
'async_batch_size': 16,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Frostbite-v5',
'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado',
'gae': True,
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1024,
'norm_adv': True,
'num_envs': 64,
'num_minibatches': 2,
'num_steps': 32,
'num_updates': 24414,
'save_model': True,
'seed': 1,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 2,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'envpool-atari'}
```
|
Anupam/QuestionClassifier | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-07T02:35:18Z | ---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## ESPnet2 ASR model
### `asapp/e_branchformer_librispeech`
This model was trained by Kwangyoun Kim using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
References:
- [E-Branchformer: Branchformer with Enhanced merging for speech recognition (SLT 2022)](https://arxiv.org/abs/2210.00077)
- [Branchformer: Parallel MLP-Attention Architectures to Capture Local and Global Context for Speech Recognition and Understanding (ICML 2022)](https://proceedings.mlr.press/v162/peng22a.html)
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 7a203d55543df02f0369d5608cd6f3033119a135
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model asapp/e_branchformer_librispeech
```
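Alternatively, a Python-level inference sketch (this assumes the `espnet_model_zoo` package is installed and that your ESPnet version provides `Speech2Text.from_pretrained`; check the ESPnet docs for the exact API):

```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

# Downloads and builds the model from the Hub tag (via espnet_model_zoo).
speech2text = Speech2Text.from_pretrained("asapp/e_branchformer_librispeech")

speech, rate = sf.read("sample_16k.wav")  # 16 kHz mono audio
nbests = speech2text(speech)
text, *_ = nbests[0]  # best hypothesis
print(text)
```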
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Jan 2 12:59:49 UTC 2023`
- python version: `3.8.15 (default, Nov 24 2022, 15:19:38) [GCC 11.2.0]`
- espnet version: `espnet 202211`
- pytorch version: `pytorch 1.10.1`
- Git hash: `7a203d55543df02f0369d5608cd6f3033119a135`
- Commit date: `Fri Dec 23 00:58:49 2022 +0000`
## asr_train_asr_e_branchformer_raw_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|54402|98.2|1.6|0.2|0.2|2.0|26.3|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|50948|95.8|3.8|0.3|0.4|4.6|40.6|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|52576|98.1|1.8|0.2|0.2|2.2|26.6|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|52343|95.9|3.7|0.4|0.5|4.6|42.0|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_clean|2703|54402|98.5|1.3|0.2|0.2|1.6|22.5|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_other|2864|50948|96.7|3.0|0.3|0.3|3.7|34.7|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_clean|2620|52576|98.4|1.5|0.2|0.2|1.9|23.1|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_other|2939|52343|96.7|2.9|0.4|0.4|3.7|37.1|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|288456|99.6|0.2|0.2|0.2|0.6|26.3|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|265951|98.6|0.9|0.5|0.5|1.9|40.6|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|281530|99.5|0.2|0.2|0.2|0.7|26.6|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|272758|98.7|0.8|0.5|0.5|1.8|42.0|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_clean|2703|288456|99.6|0.2|0.2|0.2|0.6|22.5|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_other|2864|265951|98.7|0.7|0.6|0.4|1.7|34.7|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_clean|2620|281530|99.5|0.2|0.2|0.2|0.6|23.1|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_other|2939|272758|98.8|0.6|0.6|0.4|1.6|37.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|68010|97.8|1.6|0.6|0.3|2.6|26.3|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|63110|94.9|3.9|1.2|0.8|5.9|40.6|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|65818|97.6|1.7|0.7|0.3|2.7|26.6|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|65101|95.0|3.6|1.4|0.7|5.7|42.0|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_clean|2703|68010|98.1|1.3|0.6|0.3|2.1|22.5|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/dev_other|2864|63110|95.6|3.1|1.3|0.6|5.0|34.7|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_clean|2620|65818|97.8|1.4|0.8|0.3|2.5|23.1|
|decode_asr_lm_lm_train_lm_transformer2_bpe5000_scheduler_confwarmup_steps25000_batch_bins500000000_accum_grad2_use_amptrue_valid.loss.ave_10best_asr_model_valid.acc.ave/test_other|2939|65101|95.8|2.8|1.5|0.5|4.7|37.1|
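A note on reading these tables (an interpretation, not part of the generated report): Corr, Sub, Del, Ins and Err are percentages of the reference token count (Wrd), so the error rate is simply the sum of the substitution, deletion and insertion rates.
```python
# Sketch of the assumed relationship between the table columns.
def error_rate(sub_pct: float, del_pct: float, ins_pct: float) -> float:
    return sub_pct + del_pct + ins_pct

# dev_clean WER row above: 1.6 + 0.2 + 0.2 = 2.0
print(error_rate(1.6, 0.2, 0.2))
```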
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_e_branchformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_e_branchformer_raw_en_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 8
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 49667
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 80
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 140000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_960_sp/wav.scp
- speech
- sound
- - dump/raw/train_960_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 40000
token_list:
- <blank>
- <unk>
- βTHE
- S
- βAND
- βOF
- βTO
- βA
- βIN
- βI
- βHE
- βTHAT
- βWAS
- ED
- βIT
- ''''
- βHIS
- ING
- βYOU
- βWITH
- βFOR
- βHAD
- T
- βAS
- βHER
- βIS
- βBE
- βBUT
- βNOT
- βSHE
- D
- βAT
- βON
- LY
- βHIM
- βTHEY
- βALL
- βHAVE
- βBY
- βSO
- βTHIS
- βMY
- βWHICH
- βME
- βSAID
- βFROM
- βONE
- Y
- E
- βWERE
- βWE
- βNO
- N
- βTHERE
- βOR
- ER
- βAN
- βWHEN
- βARE
- βTHEIR
- βWOULD
- βIF
- βWHAT
- βTHEM
- βWHO
- βOUT
- M
- βDO
- βWILL
- βUP
- βBEEN
- P
- R
- βMAN
- βTHEN
- βCOULD
- βMORE
- C
- βINTO
- βNOW
- βVERY
- βYOUR
- βSOME
- βLITTLE
- ES
- βTIME
- RE
- βCAN
- βLIKE
- LL
- βABOUT
- βHAS
- βTHAN
- βDID
- βUPON
- βOVER
- IN
- βANY
- βWELL
- βONLY
- B
- βSEE
- βGOOD
- βOTHER
- βTWO
- L
- βKNOW
- βGO
- βDOWN
- βBEFORE
- A
- AL
- βOUR
- βOLD
- βSHOULD
- βMADE
- βAFTER
- βGREAT
- βDAY
- βMUST
- βCOME
- βHOW
- βSUCH
- βCAME
- LE
- βWHERE
- βUS
- βNEVER
- βTHESE
- βMUCH
- βDE
- βMISTER
- βWAY
- G
- βS
- βMAY
- ATION
- βLONG
- OR
- βAM
- βFIRST
- βBACK
- βOWN
- βRE
- βAGAIN
- βSAY
- βMEN
- βWENT
- βHIMSELF
- βHERE
- NESS
- βTHINK
- V
- IC
- βEVEN
- βTHOUGHT
- βHAND
- βJUST
- βO
- βUN
- VE
- ION
- βITS
- 'ON'
- βMAKE
- βMIGHT
- βTOO
- K
- βAWAY
- βLIFE
- TH
- βWITHOUT
- ST
- βTHROUGH
- βMOST
- βTAKE
- βDON
- βEVERY
- F
- O
- βSHALL
- βTHOSE
- βEYES
- AR
- βSTILL
- βLAST
- βHOUSE
- βHEAD
- ABLE
- βNOTHING
- βNIGHT
- ITY
- βLET
- βMANY
- βOFF
- βBEING
- βFOUND
- βWHILE
- EN
- βSAW
- βGET
- βPEOPLE
- βFACE
- βYOUNG
- CH
- βUNDER
- βONCE
- βTELL
- AN
- βTHREE
- βPLACE
- βROOM
- βYET
- βSAME
- IL
- US
- U
- βFATHER
- βRIGHT
- EL
- βTHOUGH
- βANOTHER
- LI
- RI
- βHEART
- IT
- βPUT
- βTOOK
- βGIVE
- βEVER
- βE
- βPART
- βWORK
- ERS
- βLOOK
- βNEW
- βKING
- βMISSUS
- βSIR
- βLOVE
- βMIND
- βLOOKED
- W
- RY
- βASKED
- βLEFT
- ET
- βLIGHT
- CK
- βDOOR
- βMOMENT
- RO
- βWORLD
- βTHINGS
- βHOME
- UL
- βTHING
- LA
- βWHY
- βMOTHER
- βALWAYS
- βFAR
- FUL
- βWATER
- CE
- IVE
- UR
- βHEARD
- βSOMETHING
- βSEEMED
- I
- LO
- βBECAUSE
- OL
- βEND
- βTOLD
- βCON
- βYES
- βGOING
- βGOT
- RA
- IR
- βWOMAN
- βGOD
- EST
- TED
- βFIND
- βKNEW
- βSOON
- βEACH
- βSIDE
- H
- TON
- MENT
- βOH
- NE
- Z
- LING
- βAGAINST
- TER
- βNAME
- βMISS
- βQUITE
- βWANT
- βYEARS
- βFEW
- βBETTER
- ENT
- βHALF
- βDONE
- βALSO
- βBEGAN
- βHAVING
- βENOUGH
- IS
- βLADY
- βWHOLE
- LESS
- βBOTH
- βSEEN
- βSET
- βWHITE
- βCOURSE
- IES
- βVOICE
- βCALLED
- βD
- βEX
- ATE
- βTURNED
- βGAVE
- βC
- βPOOR
- MAN
- UT
- NA
- βDEAR
- ISH
- βGIRL
- βMORNING
- βBETWEEN
- LED
- βNOR
- IA
- βAMONG
- MA
- β
- βSMALL
- βREST
- βWHOM
- βFELT
- βHANDS
- βMYSELF
- βHIGH
- βM
- βHOWEVER
- βHERSELF
- βP
- CO
- βSTOOD
- ID
- βKIND
- βHUNDRED
- AS
- βROUND
- βALMOST
- TY
- βSINCE
- βG
- AM
- βLA
- SE
- βBOY
- βMA
- βPERHAPS
- βWORDS
- ATED
- βHO
- X
- βMO
- βSAT
- βREPLIED
- βFOUR
- βANYTHING
- βTILL
- βUNTIL
- βBLACK
- TION
- βCRIED
- RU
- TE
- βFACT
- βHELP
- βNEXT
- βLOOKING
- βDOES
- βFRIEND
- βLAY
- ANCE
- βPOWER
- βBROUGHT
- VER
- βFIRE
- βKEEP
- PO
- FF
- βCOUNTRY
- βSEA
- βWORD
- βCAR
- βDAYS
- βTOGETHER
- βIMP
- βREASON
- KE
- βINDEED
- TING
- βMATTER
- βFULL
- βTEN
- TIC
- βLAND
- βRATHER
- βAIR
- βHOPE
- βDA
- βOPEN
- βFEET
- βEN
- βFIVE
- βPOINT
- βCO
- OM
- βLARGE
- βB
- βCL
- ME
- βGONE
- βCHILD
- INE
- GG
- βBEST
- βDIS
- UM
- βHARD
- βLORD
- OUS
- βWIFE
- βSURE
- βFORM
- DE
- βDEATH
- ANT
- βNATURE
- βBA
- βCARE
- βBELIEVE
- PP
- βNEAR
- βRO
- βRED
- βWAR
- IE
- βSPEAK
- βFEAR
- βCASE
- βTAKEN
- βALONG
- βCANNOT
- βHEAR
- βTHEMSELVES
- CI
- βPRESENT
- AD
- βMASTER
- βSON
- βTHUS
- βLI
- βLESS
- βSUN
- βTRUE
- IM
- IOUS
- βTHOUSAND
- βMONEY
- βW
- βBEHIND
- βCHILDREN
- βDOCTOR
- AC
- βTWENTY
- βWISH
- βSOUND
- βWHOSE
- βLEAVE
- βANSWERED
- βTHOU
- βDUR
- βHA
- βCERTAIN
- βPO
- βPASSED
- GE
- TO
- βARM
- βLO
- βSTATE
- βALONE
- TA
- βSHOW
- βNEED
- βLIVE
- ND
- βDEAD
- ENCE
- βSTRONG
- βPRE
- βTI
- βGROUND
- SH
- TI
- βSHORT
- IAN
- UN
- βPRO
- βHORSE
- MI
- βPRINCE
- ARD
- βFELL
- βORDER
- βCALL
- AT
- βGIVEN
- βDARK
- βTHEREFORE
- βCLOSE
- βBODY
- βOTHERS
- βSENT
- βSECOND
- βOFTEN
- βCA
- βMANNER
- MO
- NI
- βBRING
- βQUESTION
- βHOUR
- βBO
- AGE
- βST
- βTURN
- βTABLE
- βGENERAL
- βEARTH
- βBED
- βREALLY
- βSIX
- 'NO'
- IST
- βBECOME
- βUSE
- βREAD
- βSE
- βVI
- βCOMING
- βEVERYTHING
- βEM
- βABOVE
- βEVENING
- βBEAUTIFUL
- βFEEL
- βRAN
- βLEAST
- βLAW
- βALREADY
- βMEAN
- βROSE
- WARD
- βITSELF
- βSOUL
- βSUDDENLY
- βAROUND
- RED
- βANSWER
- ICAL
- βRA
- βWIND
- βFINE
- βWON
- βWHETHER
- βKNOWN
- BER
- NG
- βTA
- βCAPTAIN
- βEYE
- βPERSON
- βWOMEN
- βSORT
- βASK
- βBROTHER
- βUSED
- βHELD
- βBIG
- βRETURNED
- βSTRANGE
- βBU
- βPER
- βFREE
- βEITHER
- βWITHIN
- βDOUBT
- βYEAR
- βCLEAR
- βSIGHT
- βGRA
- βLOST
- βKEPT
- βF
- PE
- βBAR
- βTOWN
- βSLEEP
- ARY
- βHAIR
- βFRIENDS
- βDREAM
- βFELLOW
- PER
- βDEEP
- QUE
- βBECAME
- βREAL
- βPAST
- βMAKING
- RING
- βCOMP
- βACT
- βBAD
- HO
- STER
- βYE
- βMEANS
- βRUN
- MEN
- βDAUGHTER
- βSENSE
- βCITY
- βSOMETIMES
- βTOWARDS
- βROAD
- βSP
- βLU
- βREADY
- βFOOT
- βCOLD
- βSA
- βLETTER
- βELSE
- βMAR
- βSTA
- BE
- βTRUTH
- βLE
- BO
- βBUSINESS
- CHE
- βJOHN
- βSUBJECT
- βCOURT
- βIDEA
- ILY
- βRIVER
- ATING
- βFAMILY
- HE
- βDIDN
- βGLAD
- βSEVERAL
- IAL
- βUNDERSTAND
- βSC
- βPOSSIBLE
- βDIFFERENT
- βRETURN
- βARMS
- βLOW
- βHOLD
- βTALK
- βRU
- βWINDOW
- βINTEREST
- βSISTER
- SON
- βSH
- βBLOOD
- βSAYS
- βCAP
- βDI
- βHUMAN
- βCAUSE
- NCE
- βTHANK
- βLATE
- GO
- βCUT
- βACROSS
- βSTORY
- NT
- βCOUNT
- βABLE
- DY
- LEY
- βNUMBER
- βSTAND
- βCHURCH
- βTHY
- βSUPPOSE
- LES
- BLE
- OP
- βEFFECT
- BY
- βK
- βNA
- βSPOKE
- βMET
- βGREEN
- βHUSBAND
- βRESPECT
- βPA
- βFOLLOWED
- βREMEMBER
- βLONGER
- βAGE
- βTAKING
- βLINE
- βSEEM
- βHAPPY
- LAND
- EM
- βSTAY
- βPLAY
- βCOMMON
- βGA
- βBOOK
- βTIMES
- βOBJECT
- βSEVEN
- QUI
- DO
- UND
- βFL
- βPRETTY
- βFAIR
- WAY
- βWOOD
- βREACHED
- βAPPEARED
- βSWEET
- βFALL
- BA
- βPASS
- βSIGN
- βTREE
- IONS
- βGARDEN
- βILL
- βART
- βREMAIN
- βOPENED
- βBRIGHT
- βSTREET
- βTROUBLE
- βPAIN
- βCONTINUED
- βSCHOOL
- OUR
- βCARRIED
- βSAYING
- HA
- βCHANGE
- βFOLLOW
- βGOLD
- βSW
- βFEELING
- βCOMMAND
- βBEAR
- βCERTAINLY
- βBLUE
- βNE
- CA
- βWILD
- βACCOUNT
- βOUGHT
- UD
- βT
- βBREATH
- βWANTED
- βRI
- βHEAVEN
- βPURPOSE
- βCHARACTER
- βRICH
- βPE
- βDRESS
- OS
- FA
- βTH
- βENGLISH
- βCHANCE
- βSHIP
- βVIEW
- βTOWARD
- AK
- βJOY
- βJA
- βHAR
- βNEITHER
- βFORCE
- βUNCLE
- DER
- βPLAN
- βPRINCESS
- DI
- βCHIEF
- βHAT
- βLIVED
- βAB
- βVISIT
- βMOR
- TEN
- βWALL
- UC
- βMINE
- βPLEASURE
- βSMILE
- βFRONT
- βHU
- βDEAL
- OW
- βFURTHER
- GED
- βTRIED
- DA
- VA
- βNONE
- βENTERED
- βQUEEN
- βPAY
- βEL
- βEXCEPT
- βSHA
- βFORWARD
- βEIGHT
- βADDED
- βPUBLIC
- βEIGHTEEN
- βSTAR
- βHAPPENED
- βLED
- βWALKED
- βALTHOUGH
- βLATER
- βSPIRIT
- βWALK
- βBIT
- βMEET
- LIN
- βFI
- LT
- βMOUTH
- βWAIT
- βHOURS
- βLIVING
- βYOURSELF
- βFAST
- βCHA
- βHALL
- βBEYOND
- βBOAT
- βSECRET
- ENS
- βCHAIR
- RN
- βRECEIVED
- βCAT
- RESS
- βDESIRE
- βGENTLEMAN
- UGH
- βLAID
- EVER
- βOCCASION
- βWONDER
- βGU
- βPARTY
- DEN
- βFISH
- βSEND
- βNEARLY
- βTRY
- CON
- βSEEMS
- RS
- βBELL
- βBRA
- βSILENCE
- IG
- βGUARD
- βDIE
- βDOING
- βTU
- βCOR
- βEARLY
- βBANK
- βFIGURE
- IF
- βENGLAND
- βMARY
- βAFRAID
- LER
- βFO
- βWATCH
- βFA
- βVA
- βGRE
- βAUNT
- PED
- βSERVICE
- βJE
- βPEN
- βMINUTES
- βPAN
- βTREES
- NED
- βGLASS
- βTONE
- βPLEASE
- βFORTH
- βCROSS
- βEXCLAIMED
- βDREW
- βEAT
- βAH
- βGRAVE
- βCUR
- PA
- URE
- CENT
- βMILES
- βSOFT
- βAGO
- βPOSITION
- βWARM
- βLENGTH
- βNECESSARY
- βTHINKING
- βPICTURE
- βPI
- SHIP
- IBLE
- βHEAVY
- βATTENTION
- βDOG
- ABLY
- βSTANDING
- βNATURAL
- βAPPEAR
- OV
- βCAUGHT
- VO
- ISM
- βSPRING
- βEXPERIENCE
- βPAT
- OT
- βSTOPPED
- βREGARD
- βHARDLY
- βSELF
- βSTRENGTH
- βGREW
- βKNIGHT
- βOPINION
- βWIDE
- βINSTEAD
- βSOUTH
- βTRANS
- βCORNER
- βLEARN
- βISLAND
- βMI
- βTHIRD
- βSTE
- βSTRAIGHT
- βTEA
- βBOUND
- βSEEING
- βJU
- βDINNER
- βBEAUTY
- βPEACE
- AH
- βREP
- βSILENT
- βCRE
- ALLY
- RIC
- βSTEP
- βVER
- βJO
- GER
- βSITTING
- βTHIRTY
- βSAVE
- ENED
- βGLANCE
- βREACH
- βACTION
- βSAL
- βSAD
- βSTONE
- ITIES
- βFRENCH
- βSTRUCK
- βPAPER
- βWHATEVER
- βSUB
- βDISTANCE
- βWRONG
- βKNOWLEDGE
- βSAFE
- βSNOW
- βMUSIC
- βFIFTY
- RON
- βATTEMPT
- βGOVERNMENT
- TU
- βCROWD
- βBESIDES
- βLOVED
- βBOX
- βDIRECTION
- βTRAIN
- βNORTH
- βTHICK
- βGETTING
- AV
- βFLOOR
- βCOMPANY
- βBLOW
- βPLAIN
- TRO
- βBESIDE
- βROCK
- βIMMEDIATELY
- FI
- βSHADOW
- βSIT
- ORS
- ILE
- βDRINK
- βSPOT
- βDANGER
- βAL
- βSAINT
- βSLOWLY
- βPALACE
- IER
- βRESULT
- βPETER
- βFOREST
- βBELONG
- βSU
- βPAR
- RIS
- βTEARS
- βAPPEARANCE
- βGATE
- BU
- ITION
- βQUICKLY
- βQUIET
- βLONDON
- βSTART
- βBROWN
- TRA
- KIN
- βCONSIDER
- βBATTLE
- βANNE
- βPIECE
- βDIED
- βSUCCESS
- βLIPS
- βFILLED
- βFORGET
- βPOST
- IFIED
- βMARGARET
- βFOOD
- HAM
- βPLEASANT
- βFE
- βEXPRESSION
- βPOCKET
- βFRESH
- βWEAR
- TRI
- βBROKEN
- βLAUGHED
- GING
- βFOLLOWING
- WN
- IP
- βTOUCH
- βYOUTH
- ATIVE
- βLEG
- βWEEK
- βREMAINED
- βEASY
- NER
- RK
- βENTER
- βFIGHT
- βPLACED
- βTRAVEL
- βSIMPLE
- βGIRLS
- βWAITING
- βSTOP
- βWAVE
- AU
- βWISE
- βCAMP
- TURE
- UB
- βVE
- βOFFICE
- βGRAND
- βFIT
- βJUDGE
- UP
- MENTS
- βQUICK
- HI
- βFLO
- RIES
- VAL
- βCOMFORT
- βPARTICULAR
- βSTARTED
- βSUIT
- βNI
- βPALE
- βIMPOSSIBLE
- βHOT
- βCONVERSATION
- βSCENE
- βBOYS
- βWIN
- βBRE
- βSOCIETY
- βOUTSIDE
- βWRITE
- βEFFORT
- βTALKING
- βFORTUNE
- βNINE
- βWA
- βSINGLE
- βRULE
- βPORT
- βWINTER
- βCAST
- βCRA
- βHAPPEN
- βCRO
- βSHUT
- NING
- βGUN
- βNOBLE
- βBEGIN
- βPATH
- βSKY
- βWONDERFUL
- βSUDDEN
- βARMY
- βCHE
- βWORTH
- βMOUNTAIN
- βMIN
- AG
- βFLU
- βGRACE
- βCHAPTER
- βBELOW
- βRING
- βTURNING
- βIRON
- βTOP
- βAFTERNOON
- ORY
- βEVIL
- βTRUST
- βBOW
- βTRI
- βSAIL
- βCONTENT
- βHORSES
- ITE
- βSILVER
- AP
- βLAD
- βRUNNING
- βHILL
- βBEGINNING
- βMAD
- βHABIT
- GRA
- βCLOTHES
- βMORROW
- βCRY
- βFASHION
- βPRESENCE
- βZ
- FE
- βARRIVED
- βQUARTER
- βPERFECT
- βWO
- βTRA
- βUSUAL
- βNECK
- βMARRIED
- βSEAT
- βWI
- βGAR
- βSAND
- βSHORE
- βGIVING
- NY
- βPROBABLY
- βMINUTE
- βEXPECT
- βDU
- βSHOT
- βINSTANT
- βDEGREE
- βCOLOR
- βWEST
- RT
- βMARCH
- βBIRD
- βSHOWED
- βGREATER
- βSERIOUS
- βCARRY
- βCOVERED
- βFORMER
- βLOUD
- βMOVED
- βMASS
- βSEEK
- βCHO
- GEN
- βROMAN
- IB
- βMOON
- βBOARD
- βSTREAM
- βEASILY
- βWISHED
- βSEARCH
- βCOULDN
- βMONTHS
- βSICK
- LIE
- βDUTY
- βTWELVE
- βFAINT
- βSTRANGER
- βSURPRISE
- βKILL
- βLEAVING
- βJOURNEY
- βSCARCELY
- βRAISED
- βSPEAKING
- βTERRIBLE
- βTOM
- βFIELD
- βGAME
- βQUA
- βPROMISE
- βLIE
- βCONDITION
- βTRO
- βPERSONAL
- βTALL
- βSTICK
- βTHREW
- βMARRY
- βVAN
- βBURN
- βACCORDING
- βRISE
- βATTACK
- βSWORD
- βGUESS
- βTHOUGHTS
- βTHIN
- βTHROW
- βCALM
- SIDE
- βVILLAGE
- βDEN
- βANXIOUS
- βMER
- GI
- βEXPECTED
- βBALL
- βESPECIALLY
- βCHARGE
- βMEASURE
- ISE
- βNICE
- βTRYING
- βALLOW
- βSHARP
- βBREAD
- βHONOUR
- βHONOR
- βENTIRELY
- βBILL
- βBRI
- βWRITTEN
- βAR
- βBROKE
- βKILLED
- βMARK
- βVEN
- βLADIES
- βLEARNED
- βFLOWERS
- PLE
- βFORTY
- βOFFER
- βHAPPINESS
- βPRAY
- βCLASS
- βFER
- βPRINCIPLE
- GU
- βBOOKS
- βSHAPE
- βSUMMER
- βJACK
- βDRAW
- βGOLDEN
- βDECIDED
- βLEAD
- βUNLESS
- βHARM
- βLISTEN
- HER
- βSHOOK
- βINFLUENCE
- βPERFECTLY
- βMARRIAGE
- βBROAD
- βESCAPE
- βSTATES
- βMIDDLE
- βPLANT
- βMIL
- βMOVEMENT
- βNOISE
- βENEMY
- βHISTORY
- βBREAK
- ROUS
- βUNDERSTOOD
- βLATTER
- FER
- βCOMES
- βMERELY
- βSIMPLY
- WI
- βIMAGINE
- βLOWER
- βCONDUCT
- βBORN
- WA
- βYARD
- βKA
- βCLOSED
- βNOTE
- GA
- βSTRA
- RAN
- βEXIST
- EV
- βSPEECH
- βBITTER
- JO
- βMAKES
- βGRASS
- βREPLY
- βCHANGED
- βMON
- βLYING
- βDANCE
- βFINALLY
- βAMERICAN
- βENJOY
- βCONTAIN
- βMEANT
- USE
- βOBSERVED
- THER
- βLAUGH
- βAFTERWARDS
- βBEAT
- βRACE
- βEQUAL
- βRAIN
- PS
- βSTEPS
- βBENEATH
- βTAIL
- βTASTE
- IO
- EY
- βCHAR
- βGE
- GN
- TIN
- βGROW
- βTE
- IANS
- βMOVE
- βREPEATED
- βDRIVE
- TUR
- βSI
- CLOCK
- βBRAVE
- βMADAME
- βLOT
- βCASTLE
- βHI
- AND
- βFUTURE
- βRELATION
- βSORRY
- βHEALTH
- βDICK
- βR
- βBUILDING
- βEDGE
- βBLESS
- βSPITE
- WE
- βMIS
- βPRISONER
- βALLOWED
- βPH
- βCATCH
- MER
- ETH
- βCOAT
- βCOMPLETE
- βWOULDN
- βCREATURE
- βYELLOW
- βIMPORTANT
- βADD
- βPASSING
- βDARKNESS
- βCARRIAGE
- βMILL
- βFIFTEEN
- NCY
- βHUNG
- βOB
- βPLEASED
- βSPREAD
- βCURIOUS
- βWORSE
- βCIRCUMSTANCES
- βGI
- LAR
- βCAL
- βHY
- βMERE
- βJANE
- βEAST
- BI
- βCUP
- βBLIND
- βPASSION
- βDISCOVERED
- βNOTICE
- βREPORT
- βSPACE
- βPRESENTLY
- βSORROW
- βPACK
- βDIN
- CY
- βDRY
- βANCIENT
- βDRESSED
- βCOVER
- βVO
- βEXISTENCE
- βEXACTLY
- βBEAST
- βPROPER
- βDROPPED
- βCLEAN
- βCOLOUR
- βHOST
- βCHAMBER
- βFAITH
- LET
- βDETERMINED
- βPRIEST
- βSTORM
- βSKIN
- βDARE
- βPERSONS
- βPICK
- βNARROW
- βSUPPORT
- βPRIVATE
- βSMILED
- βCOUSIN
- βDRAWING
- βATTEND
- βCOOK
- βPREVENT
- βVARIOUS
- βBLA
- βFIXED
- βWEAK
- THE
- βHOLE
- βBOTTOM
- βNOBODY
- ADE
- βLEGS
- ITCH
- βINDIVIDUAL
- βEARS
- LIKE
- βADVANTAGE
- βFRANCE
- βBON
- βWINE
- βLIVES
- OD
- βWALLS
- βTIRED
- βSHOP
- βANIMAL
- βCRU
- βWROTE
- βROYAL
- βCONSIDERED
- βMORAL
- βCOMPANION
- βLOSE
- βISN
- βBAG
- βLAKE
- βINTER
- βCOM
- βLETTERS
- βLUCK
- βEAR
- βGERMAN
- βPET
- βSAKE
- βDROP
- βPAID
- βBREAKFAST
- βLABOR
- βDESERT
- βDECLARED
- βHUM
- βSTUDY
- βINSTANCE
- ONE
- βSOMEWHAT
- βCLOTH
- βSPECIAL
- βCOLONEL
- βSONG
- βMAIN
- βVALUE
- βPROUD
- βEXPRESS
- βNATION
- βHANDSOME
- βCONFESS
- βPU
- βPASSAGE
- βPERIOD
- βCUSTOM
- βHURT
- βSHOULDER
- βCHRIST
- ZA
- βRECEIVE
- βDIFFICULT
- βDEPEND
- βMEETING
- βCHI
- βGEN
- LIGHT
- βBELIEVED
- βSOCIAL
- βDIFFICULTY
- βGREATEST
- βDRAWN
- βGRANT
- βBIRDS
- βANGRY
- βHEAT
- UFF
- βDUE
- βPLACES
- βSIN
- βCOURAGE
- βEVIDENTLY
- βGENTLE
- βCRUEL
- βGEORGE
- βGRI
- βSERVANT
- βU
- βPURE
- OOK
- βKNOWS
- βKNOWING
- LF
- βWRITING
- βREMEMBERED
- βCU
- βHOLDING
- βTENDER
- βQUI
- βBURST
- βSURELY
- IGN
- βVALLEY
- βFU
- βBUTTER
- βSPOKEN
- βSTORE
- βDISC
- βCHRISTIAN
- βPARIS
- βHENRY
- βFINISHED
- βPROVE
- βFOOL
- βSOLDIERS
- βLANGUAGE
- βINSIDE
- βBAN
- βFALLEN
- ROW
- βMAL
- βBABY
- βSITUATION
- βWATCHED
- ANS
- βRUIN
- βGENTLEMEN
- βFRO
- βFANCY
- βACCEPT
- βSEASON
- βOURSELVES
- βSAN
- βSPEED
- IZED
- βCOOL
- βSERVE
- βVESSEL
- βWILLIAM
- βOBLIGED
- βGROUP
- FORM
- βGOES
- UOUS
- βLEAVES
- βPECULIAR
- βNEWS
- βVAIN
- βEVERYBODY
- βPIN
- UG
- βFORGOTTEN
- βFRA
- GAN
- βCAREFULLY
- βFLASH
- UCH
- βFUR
- βMURDER
- βDELIGHT
- βWAITED
- βRENDER
- βPROPERTY
- βNOTICED
- βROLL
- βKNOCK
- βEARNEST
- KI
- βHONEST
- βPROMISED
- βBAL
- AW
- βWALKING
- ANG
- βSQUARE
- βQUIETLY
- βCLOUD
- WOOD
- βFORMED
- βHIGHER
- βBUILT
- βFATE
- βTEACH
- MY
- βFALSE
- βYORK
- βDUST
- βCLIMB
- βFOND
- βGROWN
- βDESCEND
- βRAG
- βFRUIT
- βGENERALLY
- βOFFERED
- βER
- βNURSE
- POSE
- βSPENT
- βJOIN
- βSTATION
- βMEANING
- βSMOKE
- HOOD
- βROUGH
- JU
- βLIKELY
- βSURFACE
- βKE
- βMONTH
- βPOSSESSION
- βTONGUE
- βDUKE
- βNOSE
- βLAUGHING
- βWEATHER
- βWHISPERED
- βSYSTEM
- βLAWS
- DDLE
- βTOUCHED
- βTRADE
- LD
- βSURPRISED
- RIN
- βARCH
- βWEALTH
- FOR
- βTEMPER
- βFRANK
- βGAL
- βBARE
- βOPPORTUNITY
- βCLAIM
- βANIMALS
- βREV
- βCOST
- βWASH
- ZE
- βCORN
- βOPPOSITE
- βPOLICE
- βIDEAS
- LON
- βKEY
- βREADING
- βCOLLECT
- CHED
- βH
- βCROWN
- βTAR
- βSWIFT
- βSHOULDERS
- βICE
- βGRAY
- βSHARE
- βPREPARED
- βGRO
- βUND
- βTER
- βEMPTY
- CING
- βSMILING
- βAVOID
- βDIFFERENCE
- βEXPLAIN
- βPOUR
- βATTRACT
- βOPENING
- βWHEEL
- βMATERIAL
- βBREAST
- βSUFFERING
- βDISTINCT
- βBOOT
- βROW
- βFINGERS
- HAN
- βALTOGETHER
- βFAT
- βPAPA
- βBRAIN
- βASLEEP
- βGREY
- βSUM
- βGAS
- βWINDOWS
- βALIVE
- βPROCEED
- βFLOWER
- βLEAP
- βPUR
- βPIECES
- βALTER
- βMEMORY
- IENT
- βFILL
- βCLO
- βTHROWN
- βKINGDOM
- βRODE
- IUS
- βMAID
- βDIM
- βBAND
- βVIRTUE
- βDISH
- βGUEST
- βLOSS
- βCAUSED
- βMOTION
- βPOT
- βMILLION
- βFAULT
- βLOVELY
- βHERO
- PPING
- βUNITED
- βSPI
- SOME
- BRA
- βMOUNTAINS
- βNU
- βSATISFIED
- βDOLLARS
- βLOVER
- βCONCEAL
- βVAST
- βPULL
- βHATH
- βRUSH
- βJ
- βDESPAIR
- EX
- βHEIGHT
- βCE
- βBENT
- βPITY
- βRISING
- ATH
- βPRIDE
- βHURRY
- KA
- βSETTLED
- βJUSTICE
- βLIFTED
- PEN
- βSOLDIER
- βFINDING
- βREMARK
- βREGULAR
- βSTRUGGLE
- βMACHINE
- βSING
- βHURRIED
- βSUFFICIENT
- βREPRESENT
- βDOUBLE
- βALARM
- βSUPPER
- βDREADFUL
- βFORE
- ATOR
- βSTOCK
- βTIN
- βEXAMPLE
- βROOF
- βFLOW
- βSUPPOSED
- βPRESERV
- βL
- βLISTENED
- OC
- βSTO
- βSECURE
- βFRIGHTENED
- βDISTURB
- βEMOTION
- βSERVANTS
- βYO
- βBUY
- βFORCED
- βKITCHEN
- βTERROR
- βSTAIRS
- βSIXTY
- KER
- βORDINARY
- βDIRECTLY
- βHEADS
- βMETHOD
- βFORGIVE
- βAWFUL
- βREFLECT
- βGREATLY
- βTALKED
- βRIDE
- STONE
- βFAVOUR
- βWELCOME
- βSEIZED
- OU
- βCONTROL
- βORDERED
- βANGEL
- βUSUALLY
- βPOET
- βBOLD
- LINE
- βADVENTURE
- βWATCHING
- βFOLK
- βMISTRESS
- IZE
- βGROWING
- βCAVE
- βEVIDENCE
- βFINGER
- βSEVENTEEN
- βMOVING
- EOUS
- βDOESN
- βCOW
- βTYPE
- βBOIL
- βTALE
- βDELIVER
- βFARM
- βMONSIEUR
- βGATHERED
- βFEELINGS
- βRATE
- βREMARKED
- βPUTTING
- βMAT
- βCONTRARY
- βCRIME
- βPLA
- βCOL
- βNEARER
- TES
- βCIVIL
- βSHAME
- βLOOSE
- βDISCOVER
- βFLAT
- βTWICE
- βFAIL
- VIS
- βUNC
- EA
- βEUROPE
- βPATIENT
- βUNTO
- βSUFFER
- βPAIR
- βTREASURE
- OSE
- βEAGER
- βFLY
- βN
- βVAL
- βDAN
- βSALT
- βBORE
- BBE
- βARTHUR
- βAFFAIRS
- βSLOW
- βCONSIST
- βDEVIL
- LAN
- βAFFECTION
- βENGAGED
- βKISS
- βYA
- βOFFICER
- IFICATION
- βLAMP
- βPARTS
- HEN
- βMILK
- βPROCESS
- βGIFT
- βPULLED
- βHID
- βRAY
- βEXCELLENT
- βIMPRESSION
- βAUTHORITY
- βPROVED
- βTELLING
- TTE
- βTOWER
- βCONSEQUENCE
- βFAVOR
- βFLEW
- βCHARLES
- ISTS
- βADDRESS
- βFAMILIAR
- βLIMIT
- βCONFIDENCE
- βRARE
- βWEEKS
- βWOODS
- βINTENTION
- βDIRECT
- βPERFORM
- βSOLEMN
- βDISTANT
- βIMAGE
- βPRESIDENT
- βFIRM
- βINDIAN
- βRANK
- βLIKED
- βAGREE
- βHOUSES
- βWIL
- βMATTERS
- βPRISON
- βMODE
- βMAJOR
- βWORKING
- βSLIP
- βWEIGHT
- βAWARE
- βBUSY
- βLOOKS
- βWOUND
- βTHOR
- βBATH
- βEXERCISE
- βSIMILAR
- βWORE
- βAMOUNT
- βQUESTIONS
- βVIOLENT
- βEXCUSE
- βASIDE
- βTUR
- βDULL
- OF
- βEMPEROR
- βNEVERTHELESS
- βSHOUT
- βEXPLAINED
- βSIZE
- βACCOMPLISH
- FORD
- CAN
- βMISTAKE
- βINSTANTLY
- βSMOOTH
- βSTRIKE
- βBOB
- ISED
- βHORROR
- βSCIENCE
- βPROTEST
- βMANAGE
- βOBEY
- βNECESSITY
- βSPLENDID
- βPRESS
- βINTERESTING
- βRELIGION
- βUNKNOWN
- βFIERCE
- βDISAPPEARED
- βHOLY
- βHATE
- βPLAYED
- βLIN
- βNATURALLY
- βDROVE
- βLOUIS
- TIES
- βBRAND
- INESS
- RIE
- βSHOOT
- βCONSENT
- βSEATED
- βLINES
- GUE
- βAGREED
- βCIRCLE
- βSTIR
- βSTREETS
- βTASK
- βRID
- βPRODUCED
- βACCIDENT
- βWITNESS
- βLIBERTY
- βDETAIL
- βMINISTER
- βPOWERFUL
- βSAVAGE
- βSIXTEEN
- βPRETEND
- βCOAST
- βSQU
- βUTTER
- βNAMED
- βCLEVER
- βADMIT
- βCOUPLE
- βWICKED
- βMESSAGE
- βTEMPLE
- βSTONES
- βYESTERDAY
- βHILLS
- DAY
- βSLIGHT
- βDIAMOND
- βPOSSIBLY
- βAFFAIR
- βORIGINAL
- βHEARING
- βWORTHY
- βSELL
- NEY
- ICK
- βCOTTAGE
- βSACRIFICE
- βPROGRESS
- βSHOCK
- βDESIGN
- βSOUGHT
- βPIT
- βSUNDAY
- βOTHERWISE
- βCABIN
- βPRAYER
- βDWELL
- βGAIN
- βBRIDGE
- βPARTICULARLY
- βYIELD
- βTREAT
- RIGHT
- βOAK
- βROPE
- WIN
- βORDERS
- βSUSPECT
- βEDWARD
- AB
- βELEVEN
- βTEETH
- βOCCURRED
- DDING
- βAMERICA
- βFALLING
- βLION
- βDEPART
- βKEEPING
- βDEMAND
- βPAUSED
- βCEASED
- INA
- βFUN
- βCHEER
- βPARDON
- βNATIVE
- LUS
- LOW
- βDOGS
- βREQUIRED
- ILITY
- βELECT
- βENTERTAIN
- ITUDE
- βHUGE
- βCARRYING
- βBLU
- βINSIST
- βSATISFACTION
- βHUNT
- βCOUNTENANCE
- βUPPER
- βMAIDEN
- βFAILED
- βJAMES
- βFOREIGN
- βGATHER
- βTEST
- BOARD
- βTERMS
- βSILK
- βBEG
- βBROTHERS
- βPAGE
- βKNEES
- βSHOWN
- βPROFESSOR
- βMIGHTY
- βDEFI
- βCHARM
- βREQUIRE
- βLOG
- MORE
- βPROOF
- βPOSSESSED
- βSOFTLY
- βUNFORTUNATE
- βPRICE
- βSEVERE
- βSINGING
- βSTAGE
- βFREEDOM
- βSHOUTED
- βFARTHER
- βMAJESTY
- βPREVIOUS
- βGUIDE
- βMATCH
- βCHEST
- βINTENDED
- βBI
- βEXCITEMENT
- βOFFICERS
- βSUR
- βSHAKE
- βSENTIMENT
- βGENTLY
- βSUCCEEDED
- βMENTION
- βLOCK
- βACQUAINTANCE
- βIMAGINATION
- βPHYSICAL
- βLEADING
- βSLAVE
- βCART
- βPOINTED
- βSTEAM
- βSHADE
- βPIPE
- βBASE
- βINVENT
- βALAS
- βWORKED
- βREGRET
- βBUR
- βFAITHFUL
- βMENTIONED
- βRECORD
- βCOMPLAIN
- βSUPERIOR
- βBAY
- βPAL
- EMENT
- UE
- βSEVENTY
- βHOTEL
- βSHEEP
- βMEAL
- βADVICE
- βHIDDEN
- βDEMANDED
- βCONSCIOUS
- βBROW
- βPOSSESS
- βFOURTH
- βEVENTS
- βFRI
- βPRAISE
- βADVANCED
- βRESOLVED
- βSTUFF
- βCHEERFUL
- βBIRTH
- βGRIEF
- βAFFORD
- βFAIRY
- βWAKE
- βSIDES
- βSUBSTANCE
- βARTICLE
- βLEVEL
- βMIST
- βJOINED
- βPRACTICAL
- βCLEARLY
- βTRACE
- βAWAKE
- βOBSERVE
- βBASKET
- βLACK
- VILLE
- βSPIRITS
- βEXCITED
- βABANDON
- βSHINING
- βFULLY
- βCALLING
- βCONSIDERABLE
- βSPRANG
- βMILE
- βDOZEN
- βPEA
- βDANGEROUS
- βWIT
- βJEW
- βPOUNDS
- βFOX
- βINFORMATION
- βLIES
- βDECK
- NNY
- βPAUL
- βSTARS
- βANGER
- βSETTLE
- βWILLING
- βADAM
- βFACES
- βSMITH
- βIMPORTANCE
- βSTRAIN
- WAR
- βSAM
- βFEATHER
- βSERVED
- βAUTHOR
- βPERCEIVED
- βFLAME
- βDIVINE
- βTRAIL
- βANYBODY
- βSIGH
- βDELICATE
- KY
- βFOLD
- βHAVEN
- βDESIRED
- βCURIOSITY
- βPRACTICE
- βCONSIDERATION
- βABSOLUTELY
- βCITIZEN
- βBOTTLE
- βINTERESTED
- βMEAT
- βOCCUPIED
- βCHOOSE
- βTHROAT
- ETTE
- βCANDLE
- βDAWN
- βPROTECT
- βSENTENCE
- IED
- βROCKS
- βPORTION
- βAPPARENTLY
- βPRESENTED
- βTIGHT
- βACTUALLY
- βDYING
- βHAM
- βDAILY
- βSUFFERED
- βPOLITICAL
- βBODIES
- βMODERN
- βCOMPLETELY
- βSOONER
- TAN
- βPROP
- βADVANCE
- βREFUSED
- βFARMER
- βPOLITE
- βTHUNDER
- βBRIEF
- βELSIE
- βSAILOR
- βSUGGESTED
- βPLATE
- βAID
- βFLESH
- βWEEP
- βBUCK
- βANTI
- βOCEAN
- βSPEND
- WELL
- βODD
- βGOVERNOR
- βENTRANCE
- βSUSPICION
- βSTEPPED
- βRAPIDLY
- βCHECK
- βHIDE
- βFLIGHT
- βCLUB
- βENTIRE
- βINDIANS
- ASH
- βCAPITAL
- βMAMMA
- HAR
- βCORRECT
- βCRACK
- βSENSATION
- βWORST
- βPACE
- βMIDST
- βAUGUST
- βPROPORTION
- βINNOCENT
- LINESS
- βREGARDED
- βDRIVEN
- ORD
- βHASTE
- βEDUCATION
- βEMPLOY
- βTRULY
- βINSTRUMENT
- βMAG
- βFRAME
- βFOOLISH
- βTAUGHT
- βHANG
- βARGUMENT
- βNINETEEN
- βELDER
- βNAY
- βNEEDED
- βNEIGHBOR
- βINSTRUCT
- βPAPERS
- βREWARD
- βEQUALLY
- βFIELDS
- βDIG
- HIN
- βCONDITIONS
- JA
- βSPAR
- βREQUEST
- βWORN
- βREMARKABLE
- βLOAD
- βWORSHIP
- βPARK
- βKI
- βINTERRUPTED
- βSKILL
- βTERM
- LAC
- βCRITIC
- βDISTRESS
- βBELIEF
- βSTERN
- IGHT
- βTRACK
- βHUNTING
- βJEWEL
- βGRADUALLY
- βGLOW
- βRUSHED
- βMENTAL
- βVISITOR
- βPICKED
- βBEHOLD
- βEXPRESSED
- βRUB
- βSKI
- ARTAGNAN
- βMOREOVER
- βOPERATION
- βCAREFUL
- βKEEN
- βASSERT
- βWANDER
- βENEMIES
- βMYSTERIOUS
- βDEPTH
- βPREFER
- βCROSSED
- βCHARMING
- βDREAD
- βFLOUR
- βROBIN
- βTRE
- βRELIEF
- βINQUIRED
- βAPPLE
- βHENCE
- βWINGS
- βCHOICE
- βJUD
- OO
- βSPECIES
- βDELIGHTED
- IUM
- βRAPID
- βAPPEAL
- βFAMOUS
- βUSEFUL
- βHELEN
- βNEWSPAPER
- βPLENTY
- βBEARING
- βNERVOUS
- βPARA
- βURGE
- βROAR
- βWOUNDED
- βCHAIN
- βPRODUCE
- βREFLECTION
- βMERCHANT
- βQUARREL
- βGLORY
- βBEGUN
- βBARON
- CUS
- βQUEER
- βMIX
- βGAZE
- βWHISPER
- βBURIED
- βDIV
- βCARD
- βFREQUENTLY
- βTIP
- βKNEE
- βREGION
- βROOT
- βLEST
- βJEALOUS
- CTOR
- βSAVED
- βASKING
- βTRIP
- QUA
- βUNION
- HY
- βCOMPANIONS
- βSHIPS
- βHALE
- βAPPROACHED
- βHARRY
- βDRUNK
- βARRIVAL
- βSLEPT
- βFURNISH
- HEAD
- βPIG
- βABSENCE
- βPHIL
- βHEAP
- βSHOES
- βCONSCIOUSNESS
- βKINDLY
- βEVIDENT
- βSCAR
- βDETERMIN
- βGRASP
- βSTEAL
- βOWE
- βKNIFE
- βPRECIOUS
- βELEMENT
- βPROCEEDED
- βFEVER
- βLEADER
- βRISK
- βEASE
- βGRIM
- βMOUNT
- βMEANWHILE
- βCENTURY
- OON
- βJUDGMENT
- βAROSE
- βVISION
- βSPARE
- βEXTREME
- βCONSTANT
- βOBSERVATION
- βTHRUST
- βDELAY
- βCENT
- βINCLUD
- βLIFT
- βADMIRE
- βISSUE
- βFRIENDSHIP
- βLESSON
- βPRINCIPAL
- βMOURN
- βACCEPTED
- βBURNING
- βCAPABLE
- βEXTRAORDINARY
- βSANG
- βREMOVED
- βHOPED
- βHORN
- βALICE
- βMUD
- βAPARTMENT
- βFIGHTING
- βBLAME
- βTREMBLING
- βSOMEBODY
- βANYONE
- βBRIDE
- βREADER
- βROB
- βEVERYWHERE
- βLABOUR
- βRECALL
- βBULL
- βHIT
- βCOUNCIL
- βPOPULAR
- βCHAP
- βTRIAL
- βDUN
- βWISHES
- βBRILLIANT
- βASSURED
- βFORGOT
- βCONTINUE
- βACKNOWLEDG
- βRETREAT
- βINCREASED
- βCONTEMPT
- βGRANDFATHER
- βSYMPATHY
- βGHOST
- βSTRETCHED
- βCREATURES
- βCAB
- βHIND
- βPLAYING
- βMISERABLE
- βMEMBERS
- βKINDNESS
- βHIGHEST
- βPRIM
- βKISSED
- βDESERVE
- βHUT
- βBEGGED
- βEIGHTY
- βCLOSELY
- βWONDERED
- βMILITARY
- βREMIND
- βACCORDINGLY
- βLARGER
- βMAINTAIN
- βENGINE
- βMOTIVE
- βDESTROY
- βSTRIP
- βHANS
- βAHEAD
- βINFINITE
- βPROMPT
- βINFORMED
- TTLE
- βPEER
- βPRESSED
- βTRAP
- βSOMEWHERE
- βBOUGHT
- βVISIBLE
- βASHAMED
- βTEAR
- βNEIGHBOUR
- βCONSTITUTION
- βINTELLIGENCE
- βPROFESSION
- βHUNGRY
- RIDGE
- βSMELL
- βSTORIES
- βLISTENING
- βAPPROACH
- βSTRING
- βEXPLANATION
- βIMMENSE
- βRELIGIOUS
- βTHROUGHOUT
- βHOLLOW
- βAWAIT
- βFLYING
- βSCREAM
- βACTIVE
- βRUM
- βPRODUCT
- βUNHAPPY
- βVAGUE
- ARIES
- βELIZABETH
- βSTUPID
- βDIGNITY
- βISABEL
- GAR
- βBRO
- βPITCH
- βCOMRADE
- βSTIFF
- βRECKON
- βSOLD
- βSPARK
- βSTRO
- βCRYING
- βMAGIC
- βREPEAT
- PORT
- βMARKED
- βCOMFORTABLE
- βPROJECT
- βBECOMING
- βPARENTS
- βSHELTER
- βSTOLE
- βHINT
- βNEST
- βTRICK
- βTHOROUGHLY
- βHOSPITAL
- βWEAPON
- βROME
- βSTYLE
- βADMITTED
- βSAFETY
- FIELD
- βUNDERSTANDING
- βTREMBLE
- βPRINT
- βSLAVES
- βWEARY
- βARTIST
- βCREDIT
- BURG
- βCONCLUSION
- βSELDOM
- βUNUSUAL
- βCLOUDS
- βUNABLE
- βGAY
- βHANGING
- βSCR
- βBOWED
- βDAVID
- βVOL
- βPUSHED
- βESCAPED
- MOND
- βWARN
- βBETRAY
- βEGGS
- βPLAINLY
- βEXHIBIT
- βDISPLAY
- βMEMBER
- βGRIN
- βPROSPECT
- βBRUSH
- βBID
- βSUCCESSFUL
- βEXTENT
- βPERSUADE
- βMID
- βMOOD
- βARRANGED
- βUNIVERSAL
- βJIM
- βSIGNAL
- βWHILST
- βPHILIP
- βWOLF
- RATE
- βEAGERLY
- βBILLY
- βRETURNING
- βCONSCIENCE
- βFORTUNATE
- βFEMALE
- βGLEAM
- βHASTILY
- βPROVIDED
- βOBTAIN
- βINSTINCT
- βCONCERNED
- βCONCERNING
- βSOMEHOW
- βPINK
- βRAGE
- βACCUSTOMED
- βUNCONSCIOUS
- βADVISE
- βBRANCHES
- βTINY
- βREFUSE
- βBISHOP
- βSUPPLY
- βPEASANT
- βLAWYER
- βWASTE
- βCONNECTION
- βDEVELOP
- βCORRESPOND
- βPLUM
- βNODDED
- βSLIPPED
- βEU
- βCONSTANTLY
- CUM
- MMED
- βFAIRLY
- HOUSE
- βKIT
- βRANG
- βFEATURES
- βPAUSE
- βPAINFUL
- βJOE
- βWHENCE
- βLAUGHTER
- βCOACH
- βCHRISTMAS
- βEATING
- βWHOLLY
- βAPART
- βSUPER
- βREVOLUTION
- βLONELY
- βCHEEKS
- βTHRONE
- βCREW
- βATTAIN
- βESTABLISHED
- TIME
- βDASH
- βFRIENDLY
- βOPERA
- βEARL
- βEXHAUST
- βCLIFF
- βREVEAL
- βADOPT
- βCENTRE
- βMERRY
- βSYLVIA
- βIDEAL
- βMISFORTUNE
- βFEAST
- βARAB
- βNUT
- βFETCH
- βFOUGHT
- βPILE
- βSETTING
- βSOURCE
- βPERSIST
- βMERCY
- βBARK
- βLUC
- βDEEPLY
- βCOMPARE
- βATTITUDE
- βENDURE
- βDELIGHTFUL
- βBEARD
- βPATIENCE
- βLOCAL
- βUTTERED
- βVICTORY
- βTREATED
- βSEPARATE
- βWAG
- βDRAGG
- βTITLE
- βTROOPS
- βTRIUMPH
- βREAR
- βGAINED
- βSINK
- βDEFEND
- βTIED
- βFLED
- βDARED
- βINCREASE
- βPOND
- βCONQUER
- βFOREHEAD
- βFAN
- βANXIETY
- βENCOUNTER
- βSEX
- βHALT
- βSANK
- βCHEEK
- βHUMBLE
- βWRITER
- βEMPLOYED
- βDISTINGUISHED
- βRAISE
- βWHIP
- βGIANT
- βRANGE
- βOBTAINED
- βFLAG
- βMAC
- βJUMPED
- βDISCOVERY
- βNATIONAL
- βCOMMISSION
- βPOSITIVE
- βLOVING
- βEXACT
- βMURMURED
- βGAZED
- βREFER
- βCOLLEGE
- βENCOURAGE
- βNOVEL
- βCLOCK
- βMORTAL
- βROLLED
- βRAT
- IZING
- βGUILTY
- βVICTOR
- WORTH
- βPRA
- βAPPROACHING
- βRELATIVE
- βESTATE
- βUGLY
- βMETAL
- βROBERT
- βTENT
- βADMIRATION
- βFOURTEEN
- βBARBAR
- βWITCH
- ELLA
- βCAKE
- βSHONE
- βMANAGED
- βVOLUME
- βGREEK
- βDANCING
- βWRETCHED
- βCONDEMN
- βMAGNIFICENT
- βCONSULT
- J
- βORGAN
- βFLEET
- βARRANGEMENT
- βINCIDENT
- βMISERY
- βARROW
- βSTROKE
- βASSIST
- βBUILD
- βSUCCEED
- βDESPERATE
- βWIDOW
- UDE
- βMARKET
- βWISDOM
- βPRECISE
- βCURRENT
- βSPOIL
- βBADE
- βWOODEN
- βRESIST
- βOBVIOUS
- βSENSIBLE
- FALL
- βADDRESSED
- βGIL
- βCOUNSEL
- βPURCHASE
- βSELECT
- βUSELESS
- βSTARED
- βARREST
- βPOISON
- βFIN
- βSWALLOW
- βBLOCK
- βSLID
- βNINETY
- βSPORT
- βPROVIDE
- βANNA
- βLAMB
- βINTERVAL
- βJUMP
- βDESCRIBED
- βSTRIKING
- βPROVISION
- βPROPOSED
- βMELANCHOLY
- βWARRIOR
- βSUGGEST
- βDEPARTURE
- βBURDEN
- βLIMB
- βTROUBLED
- βMEADOW
- βSACRED
- βSOLID
- βTRU
- βLUCY
- βRECOVER
- βENERGY
- βPOWDER
- βRESUMED
- βINTENSE
- βBRITISH
- βSTRAW
- βAGREEABLE
- βEVERYONE
- βCONCERN
- βVOYAGE
- βSOUTHERN
- βBOSOM
- βUTTERLY
- βFEED
- βESSENTIAL
- βCONFINE
- βHOUSEHOLD
- βEXTREMELY
- βWONDERING
- βLIST
- βPINE
- PHA
- βEXPERIMENT
- βJOSEPH
- βMYSTERY
- βRESTORE
- βBLUSH
- FOLD
- βCHOSEN
- βINTELLECT
- βCURTAIN
- OLOGY
- βMOUNTED
- βLAP
- βEPI
- βPUNISH
- βWEDDING
- βRECOGNIZED
- βDRIFT
- βPREPARATION
- βRESOLUTION
- βOPPRESS
- βFIX
- βVICTIM
- OGRAPH
- βSUMMON
- βJULIA
- βFLOOD
- βWAL
- ULATION
- βSLIGHTLY
- βLODGE
- βWIRE
- βCONFUSION
- βUNEXPECTED
- βCONCEIVE
- βPRIZE
- βJESUS
- βADDITION
- βRUDE
- βFATAL
- βCARELESS
- βPATCH
- βKO
- βCATHERINE
- βPARLIAMENT
- βPROFOUND
- βALOUD
- βRELIEVE
- βPUSH
- ABILITY
- βACCOMPANIED
- βSOVEREIGN
- βSINGULAR
- βECHO
- βCOMPOSED
- βSHAKING
- ATORY
- βASSISTANCE
- βTEACHER
- βHORRIBLE
- βSTRICT
- βVERSE
- βPUNISHMENT
- βGOWN
- βMISTAKEN
- βVARI
- βSWEPT
- βGESTURE
- βBUSH
- βSTEEL
- βAFFECTED
- βDIRECTED
- βSURROUNDED
- βABSURD
- βSUGAR
- βSCRAP
- βIMMEDIATE
- βSADDLE
- βTY
- βARISE
- βSIGHED
- βEXCHANGE
- βIMPATIENT
- βSNAP
- βEMBRACE
- βDISEASE
- βPROFIT
- βRIDING
- βRECOVERED
- βGOVERN
- βSTRETCH
- βCONVINCED
- βLEANING
- βDOMESTIC
- βCOMPLEX
- βMANIFEST
- βINDULGE
- βGENIUS
- βAGENT
- βVEIL
- βDESCRIPTION
- βINCLINED
- βDECEIVE
- βDARLING
- βREIGN
- HU
- βENORMOUS
- βRESTRAIN
- βDUTIES
- BURY
- TTERED
- βPOLE
- βENABLE
- βEXCEPTION
- βINTIMATE
- βCOUNTESS
- βTRIBE
- βHANDKERCHIEF
- βMIDNIGHT
- βPROBLEM
- βTRAMP
- βOIL
- CAST
- βCRUSH
- βDISCUSS
- βRAM
- βTROT
- βUNRE
- βWHIRL
- βLOCKED
- βHORIZON
- βOFFICIAL
- βSCHEME
- βDROWN
- βPIERRE
- βPERMITTED
- βCONNECTED
- βASSURE
- βCOCK
- βUTMOST
- βDEVOTED
- βRELI
- βSUFFICIENTLY
- βINTELLECTUAL
- βCARPET
- βOBJECTION
- βAFTERWARD
- βREALITY
- βNEGRO
- βRETAIN
- βASCEND
- βCEASE
- βKATE
- βMARVEL
- KO
- βBOND
- MOST
- βCOAL
- GATE
- βIGNORANT
- βBREAKING
- βTWIN
- βASTONISHMENT
- βCOFFEE
- βJAR
- βCITIES
- βORIGIN
- βEXECUT
- βFINAL
- βINHABITANTS
- βSTABLE
- βCHIN
- βPARTIES
- βPLUNGE
- βGENEROUS
- βDESCRIBE
- βANNOUNCED
- βMERIT
- βREVERE
- βERE
- ACIOUS
- ZI
- βDISAPPOINT
- βSUGGESTION
- βDOUBTLESS
- βTRUNK
- βSTAMP
- βJOB
- βAPPOINTED
- βDIVIDED
- βACQUAINTED
- CHI
- βABSOLUTE
- βFEARFUL
- βPRIVILEGE
- βCRAFT
- βSTEEP
- βHUNTER
- βFORBID
- βMODEST
- βENDEAVOUR
- βSWEEP
- βBEHELD
- βABSORB
- βCONSTRUCT
- βEMPIRE
- βEXPEDITION
- βERECT
- βOFFEND
- βINTEND
- βPERMIT
- βDESTROYED
- βCONTRACT
- βTHIRST
- βWAGON
- βEVA
- βGLOOM
- βATMOSPHERE
- βRESERVE
- βVOTE
- βGER
- βNONSENSE
- βPREVAIL
- βQUALITY
- βCLASP
- βCONCLUDED
- βRAP
- βKATY
- βETERNAL
- βMUTTERED
- βNEGLECT
- βSQUIRE
- βCREEP
- LOCK
- βELECTRIC
- βHAY
- βEXPENSE
- βSCORN
- βRETIRED
- βSTOUT
- βMURMUR
- βSHARPLY
- βDISTRICT
- βLEAF
- βFAILURE
- WICK
- βJEAN
- βNUMEROUS
- βINFANT
- βREALIZED
- βTRAVELLER
- βHUNGER
- βJUNE
- βMUN
- βRECOMMEND
- βCREP
- ZZLE
- βRICHARD
- WORK
- βMONTE
- βPREACH
- βPALM
- AVI
- βANYWHERE
- βDISPOSITION
- βMIRROR
- βVENTURE
- βPOUND
- βCIGAR
- βINVITED
- βBENCH
- βPROTECTION
- βBENEFIT
- βTHOMAS
- βCLERK
- βREPROACH
- βUNIFORM
- βGENERATION
- βSEAL
- βCOMPASS
- βWARNING
- βEXTENDED
- βDIFFICULTIES
- βMAYBE
- βGROAN
- βAFFECT
- βCOMB
- βEARN
- βWESTERN
- βIDLE
- βSCORE
- βTAP
- βASTONISHED
- βINTRODUCED
- βLEISURE
- βLIEUTENANT
- βVIOLENCE
- βFIRMLY
- βMONSTER
- βUR
- βPROPERLY
- βTWIST
- βPIRATE
- βROBBER
- βBATTER
- βWEPT
- βLEANED
- βFOG
- βORNAMENT
- βANDREW
- βBUSHES
- βREPUBLIC
- βCONFIDENT
- βLEAN
- βDART
- βSTOOP
- βCURL
- βCOUNTER
- βNORTHERN
- βPEARL
- βNEAREST
- βFRANCIS
- βWANDERING
- βFREQUENT
- βSTARTLED
- βSTATEMENT
- βOCCUR
- βBLOOM
- βNERVE
- βINSPECT
- βINDUCE
- βFLATTER
- βDATE
- βAMBITION
- βSLOPE
- βMALE
- βMADAM
- βMONK
- βRENT
- βCONFIRM
- βINVESTIGAT
- βRABBIT
- βREGIMENT
- βSUBMIT
- βSPELL
- βFURIOUS
- βRAIL
- βBESTOW
- βRALPH
- βSCATTERED
- βCOMPELLED
- βTHREAD
- βCHILL
- βDENY
- βPRONOUNC
- βMANKIND
- βCATTLE
- βEXECUTION
- βREBEL
- βSUPREME
- βVALUABLE
- βLIKEWISE
- βCONVEY
- βTIDE
- βGLOOMY
- βCOIN
- βACTUAL
- βTAX
- βPROVINCE
- βGRATEFUL
- βSPIRITUAL
- βVANISHED
- βDIANA
- βHAUNT
- βDRAGON
- βCRAWL
- βCHINA
- βGRATITUDE
- βNEAT
- βFINISH
- βINTENT
- βFRIGHT
- βEMBARRASS
- βTHIRTEEN
- βRUTH
- βSLIGHTEST
- βDEVELOPMENT
- βINTERVIEW
- βSPECTACLE
- βBROOK
- VIE
- βWEAKNESS
- βAUDIENCE
- βCONSEQUENTLY
- βABROAD
- βASPECT
- βPAINTED
- βRELEASE
- βINSULT
- βSOOTH
- βDISAPPOINTMENT
- βEMERG
- βBRIG
- βESTEEM
- βINVITATION
- βPASSENGER
- βPUBLISH
- βPIANO
- βIRISH
- βDESK
- βBEATEN
- βFIFTH
- βIMPULSE
- βSWEAR
- βEATEN
- βPURPLE
- βCOMMITTED
- βCOUNTRIES
- βPERCEIVE
- ISON
- βCELEBRAT
- βGRANDMOTHER
- βSHUDDER
- βSUNSHINE
- βSPANISH
- βHITHERTO
- βMARILLA
- βSNAKE
- βMOCK
- βINTERFERE
- βWALTER
- βAMID
- βMARBLE
- βMISSION
- TERIOR
- βDRIVING
- βFURNITURE
- βSTEADY
- βCIRCUMSTANCE
- βINTERPRET
- βENCHANT
- βERROR
- βCONVICTION
- βHELPLESS
- βMEDICINE
- βQUALITIES
- βITALIAN
- βHASTENED
- βOCCASIONALLY
- βPURSUED
- βHESITATED
- βINDEPENDENT
- βOLIVER
- βLINGER
- UX
- βEXAMINED
- βREPENT
- βPHYSICIAN
- βCHASE
- βBELOVED
- βATTACHED
- βFLORENCE
- βHONEY
- βMOUSE
- βCRIES
- βBAKE
- βPOEM
- βDESTRUCTION
- βFULFIL
- βMESSENGER
- βTRISTRAM
- βFANCIED
- βEXCESS
- βCURSE
- βCHU
- βQUANTITY
- βTHORNTON
- βCREATED
- βCONTINUALLY
- βLIGHTNING
- βBORNE
- βTOTAL
- βDISPOSED
- βRIFLE
- βPOLLY
- βGOAT
- βBACKWARD
- βVIRGINIA
- βKICK
- βPERIL
- βQUO
- βGLORIOUS
- βMULTITUDE
- βLEATHER
- βABSENT
- βDEMON
- βDEBT
- βTORTURE
- βACCORD
- βMATE
- βCATHOLIC
- βPILL
- βLIBRARY
- βPURSUIT
- βSHIRT
- βDEAREST
- βCOLLAR
- βBEACH
- βROBE
- βDECLARE
- βBRANCH
- βTEMPT
- βSTEADILY
- βDISGUST
- βSILLY
- βARRIVE
- βDRANK
- βLEVI
- βCOMMUNICAT
- βRACHEL
- βWASHINGTON
- βRESIGN
- βMEANTIME
- βLACE
- βENGAGEMENT
- βQUIVER
- βSEPARATED
- βDISCUSSION
- βVENTURED
- βSURROUNDING
- βPOLISH
- βNAIL
- βSWELL
- βJOKE
- βLINCOLN
- βSTUDENT
- βGLITTER
- βRUSSIAN
- βREADILY
- βCHRIS
- βPOVERTY
- βDISGRACE
- βCHEESE
- βHEAVILY
- βSCALE
- βSTAFF
- βENTREAT
- βFAREWELL
- βLUNCH
- βPEEP
- βMULE
- βSOMEONE
- βDISAPPEAR
- βDECISION
- βPISTOL
- βPUN
- βSPUR
- βASSUMED
- βEXTEND
- βENTHUSIASM
- βDEFINITE
- βUNDERTAKE
- βCOMMITTEE
- βSIMON
- βFENCE
- βAPPLIED
- βRELATED
- βVICE
- βUNPLEASANT
- βPROBABLE
- βPROCURE
- βFROWN
- βCLOAK
- βHUMANITY
- βFAMILIES
- βPHILOSOPHER
- βDWARF
- βOVERCOME
- βDEFEAT
- βFASTENED
- βMARSH
- βCLASSES
- βTOMB
- βGRACIOUS
- βREMOTE
- βCELL
- βSHRIEK
- βRESCUE
- βPOOL
- βORGANIZ
- βCHOSE
- βCUTTING
- βCOWARD
- βBORDER
- βDIRTY
- βMONKEY
- βHOOK
- βCHUCK
- βEMILY
- βJEST
- βPLAC
- βWEIGH
- βASSOCIATE
- βGLIMPSE
- βSTUCK
- βBOLT
- βMURDERER
- βPONY
- βDISTINGUISH
- βINSTITUTION
- βCUNNING
- βCOMPLIMENT
- βAPPETITE
- βREPUTATION
- βFEEBLE
- βKIN
- βSERIES
- βGRACEFUL
- βPLATFORM
- βBREEZE
- βPHRASE
- βCLAY
- MONT
- βRATTL
- βOPPOSITION
- βLANE
- βBOAST
- βGROWTH
- βINCLINATION
- βBEHAVE
- βSUSAN
- βDISTINCTION
- βDISLIKE
- βNICHOLAS
- βSATISFY
- βDRAMA
- βELBOW
- βGAZING
- βCONSUM
- βSPIN
- βOATH
- βCHANNEL
- βCHARACTERISTIC
- βSPEAR
- βSLAIN
- βSAUCE
- βFROG
- βCONCEPTION
- βTIMID
- βZEAL
- βAPPARENT
- SHIRE
- βCENTER
- βVARIETY
- βDUSK
- βAPT
- βCOLUMN
- βREVENGE
- βRIVAL
- βIMITAT
- βPASSIONATE
- βSELFISH
- βNORMAN
- βREPAIR
- βTHRILL
- βTREATMENT
- βROSA
- βMARTIN
- βINDIFFERENT
- βTHITHER
- βGALLANT
- βPEPPER
- βRECOLLECT
- βVINE
- βSCARCE
- βSHIELD
- βMINGLED
- CLOSE
- βHARSH
- βBRICK
- βHUMOR
- βMISCHIEF
- βTREMENDOUS
- βFUNCTION
- βSMART
- βSULTAN
- βDISMISS
- βTHREATENED
- βCHEAP
- βFLOCK
- βENDEAVOR
- βWHISK
- βITALY
- βWAIST
- βFLUTTER
- βSMOKING
- βMONARCH
- βAFRICA
- βACCUSE
- βHERBERT
- βREFRESH
- βREJOICE
- βPILLOW
- βEXPECTATION
- βPOETRY
- βHOPELESS
- βPERISH
- βPHILOSOPHY
- βWHISTLE
- βBERNARD
- βLAMENT
- βIMPROVE
- βSUP
- βPERPLEX
- βFOUNTAIN
- βLEAGUE
- βDESPISE
- βIGNORANCE
- βREFERENCE
- βDUCK
- βGROVE
- βPURSE
- βPARTNER
- βPROPHET
- βSHIVER
- βNEIGHBOURHOOD
- βREPRESENTATIVE
- SAIL
- βWIP
- βACQUIRED
- βCHIMNEY
- βDOCTRINE
- βMAXIM
- βANGLE
- βMAJORITY
- βAUTUMN
- βCONFUSED
- βCRISTO
- βACHIEVE
- βDISGUISE
- βREDUCED
- βEARLIER
- βTHEATRE
- βDECIDE
- MINATED
- OLOGICAL
- βOCCUPATION
- βVIGOROUS
- βCONTINENT
- βDECLINE
- βCOMMUNITY
- βMOTIONLESS
- βHATRED
- βCOMMUNICATION
- βBOWL
- βCOMMENT
- βAPPROVE
- βCEREMONY
- βCRIMINAL
- βSCIENTIFIC
- βDUCHESS
- βVIVID
- βSHIFT
- βAVAIL
- βDAMP
- βJOHNSON
- βSLENDER
- βCONTRAST
- βAMUSEMENT
- βPLOT
- βLYN
- βASSOCIATION
- βSNATCH
- βUNCERTAIN
- βPRESSURE
- βPERCH
- βAPPLY
- βPLANET
- βNOTWITHSTANDING
- βSWUNG
- βSTIRRED
- βATTENDANT
- βENJOYMENT
- βWORRY
- βALBERT
- βNAKED
- βTALENT
- βMARIAN
- βREFORM
- βDELIBERATE
- βINTELLIGENT
- βSENSITIVE
- βYONDER
- βPUPIL
- βFRIGHTFUL
- βDOUBTFUL
- βSTANDARD
- βMAGISTRATE
- βSHEPHERD
- βSTOMACH
- βDEPOSIT
- βRENEW
- βHEDGE
- βFRANCS
- βPOSSIBILITY
- βRESEMBLE
- βFATIGUE
- βPORTRAIT
- βFAVORITE
- βCREAM
- βBURG
- βSECRETARY
- βDIVERS
- βACTIVITY
- βSPECULAT
- βHUMOUR
- βFITTED
- βEXTERNAL
- βCETERA
- βWRAPPED
- βWHIT
- βFRED
- βEXAMINATION
- βLODGING
- βOWING
- βJAW
- βCROW
- βBALANCE
- βPUFF
- βTENDERNESS
- βPORTHOS
- βANCHOR
- βINTERRUPT
- βNECESSARILY
- βPERPETUAL
- βAGONY
- βPOPE
- βSCHOLAR
- βSCOTLAND
- βSUPPRESS
- βWRATH
- βWRECK
- βEXCEED
- βPERFECTION
- βINDIA
- βTRADITION
- βSECTION
- βEASTERN
- βDOORWAY
- βWIVES
- βCONVENTION
- βANNOUNC
- βEGYPT
- βCONTRADICT
- βSCRATCH
- βCENTRAL
- βGLOVE
- βWAX
- βPREPARE
- βACCOMPANY
- βINCREASING
- βLIBERAL
- βRAISING
- βORANGE
- βSHOE
- βATTRIBUTE
- βLITERATURE
- βPUZZLED
- βWITHDRAW
- βWHITHER
- βHAWK
- βMOONLIGHT
- βEXAMINE
- βHAPPILY
- βPRECEDE
- βDETECTIVE
- βINCHES
- βSOLITARY
- βDUTCH
- βNAPOLEON
- βUNEASY
- βCARDINAL
- βBLEW
- βFOWL
- βDECORAT
- βCHILDHOOD
- βTORMENT
- βLOSING
- βPERMISSION
- βBLANK
- βUPSTAIRS
- βCAPACITY
- βTRIFLE
- βFOLLY
- βRECOGNIZE
- βREMOVE
- βVENGEANCE
- βENTERPRISE
- βBEDROOM
- βANYHOW
- βINQUIRY
- βASHES
- βDRAG
- βHUSH
- βAWKWARD
- βSATURDAY
- βGENUINE
- βSURVIV
- βSKIRT
- βAFFECTIONATE
- βTANG
- βMUTUAL
- βDISPUTE
- βEAGLE
- βINCOME
- βBIND
- βFAME
- βIMPROVEMENT
- ROVING
- βDIFFER
- βAWOKE
- βSLEEVE
- βSOLITUDE
- βFAVOURITE
- JI
- βDETECT
- βCOMPREHEND
- βPREPARING
- βSERPENT
- βSUMMIT
- βKNOT
- βKNIT
- βCOPY
- βSTOPPING
- βFADED
- βHIDEOUS
- βJULIE
- STEAD
- βSHINE
- βCONFLICT
- βPROPOSITION
- βREFUGE
- βGALLERY
- βBUNDLE
- βAXE
- βSLAVERY
- βMASK
- βALYOSHA
- βLADDER
- βDEPARTMENT
- βDISCHARGE
- βDEPRESS
- βGALLOP
- βSCARLET
- βKITTY
- βRECEIVING
- βSURRENDER
- βSUSTAIN
- βTWILIGHT
- βCONGRESS
- βIRELAND
- βFUNNY
- βLEND
- βCONSTITUTE
- βFUNERAL
- βCRYSTAL
- βSPAIN
- βEXCEEDINGLY
- βDAMN
- βCOMMUN
- βCIVILIZATION
- βPREJUDICE
- βPORCH
- βASSISTANT
- βINDUSTRY
- βTUMBLE
- βDEFENCE
- βHITHER
- βSMOT
- βCOLONI
- βAMAZEMENT
- βMARGUERITE
- βMIRACLE
- βINHERIT
- βBEGGAR
- βENVELOPE
- βINDIGNATION
- βNATASHA
- βPROPOSAL
- βFRAGMENT
- βROUSED
- βROAST
- ENCIES
- βCOMMENCED
- βRESOURCE
- βPOPULATION
- βQUOTH
- βPURSUE
- βEDUCAT
- βAFFLICT
- βCONTACT
- βCRIMSON
- βDIVISION
- βDISORDER
- βCOPPER
- βSOLICIT
- βMODERATE
- βDRUM
- βSWIM
- βSALUTE
- βASSUME
- βMUSCLE
- βOVERWHELM
- βSHAKESPEARE
- βSTRUGGLING
- βTRANQUIL
- βCHICKEN
- βTREAD
- βCLAW
- βBIBLE
- βRIDGE
- βTHREAT
- βVELVET
- βEXPOSED
- βIDIOT
- βBARREL
- βPENNY
- βTEMPTATION
- βDANGLARS
- βCENTURIES
- βDISTRIBUT
- βREJECT
- βRETORTED
- βCONCENTRAT
- βCORDIAL
- βMOTOR
- βCANNON
- KEEP
- βWRETCH
- βASSURANCE
- βTHIEF
- βSURVEY
- βVITAL
- βRAILWAY
- βJACKSON
- βCRASH
- βGROWL
- βCOMBAT
- βRECOLLECTION
- βSECURITY
- βJACOB
- βCLUTCH
- βBLANKET
- βNANCY
- βCELLAR
- βCONVENIENT
- βINDIGNANT
- βCOARSE
- βWORM
- βSCREEN
- βTRANSPORT
- βBULLET
- βAPPRECIATE
- βDEVOTION
- βINVISIBLE
- βDRIED
- βMIXTURE
- βCANDID
- βPERFORMANCE
- βRIPE
- βEXQUISITE
- βBARGAIN
- βTOBACCO
- βLOYAL
- βMOULD
- βATTENTIVE
- βDOROTHY
- βBRUTE
- βESTABLISHMENT
- βABILITY
- βINHABIT
- βOBSCURE
- βBORROW
- βESSENCE
- βDISMAY
- βFLEE
- βBLADE
- βPLUCK
- βCOFFIN
- βSUNSET
- βSTEPHEN
- βECONOMIC
- βHOLIDAY
- βMECHANICAL
- βCOTTON
- βAWAKENED
- βSEIZE
- βRIDICULOUS
- βSANCHO
- βHESITATION
- βCORPSE
- βSAVING
- HOLD
- FOOT
- βELDEST
- βDESPITE
- βEDITH
- βCHERISH
- βRESISTANCE
- βWILSON
- βARGUE
- βINQUIRE
- βAPPREHENSION
- βAVENUE
- βDRAKE
- βPROPOSE
- HURST
- βINFERIOR
- βSTAIRCASE
- βWHEREFORE
- βCARLYLE
- βCOUCH
- βROUTE
- βPOLITICS
- βTOMORROW
- βTHRONG
- βNAUGHT
- βSUNLIGHT
- βINDIFFERENCE
- βOBEDIENCE
- βRECEPTION
- βVEGETABLE
- βIMPERFECT
- βRESIDENCE
- βTURKEY
- βVIOLET
- βSARAH
- βALTAR
- βGRIEVE
- βJERK
- βENSU
- βMAGICIAN
- βBLOSSOM
- βLANTERN
- βRESOLUTE
- βTHOUGHTFULLY
- βFORTNIGHT
- βTRUMPET
- βVALJEAN
- βUNWILLING
- βLECTURE
- βWHEREUPON
- βHOLLAND
- βCHANGING
- βCREEK
- βSLICE
- βNORMAL
- βANNIE
- βACCENT
- βFREDERICK
- βDISAGREEABLE
- βRUBBED
- βDUMB
- βESTABLISH
- βIMPORT
- βAFFIRM
- βMATTHEW
- βBRISK
- βCONVERT
- βBENDING
- βIVAN
- βMADEMOISELLE
- βMICHAEL
- βEASIER
- βJONES
- βFACING
- βEXCELLENCY
- βLITERARY
- βGOSSIP
- βDEVOUR
- βSTAGGER
- βPENCIL
- βAVERAGE
- βHAMMER
- βTRIUMPHANT
- βPREFERRED
- βAPPLICATION
- βOCCUPY
- βAUTHORITIES
- BURN
- βASCERTAIN
- βCORRIDOR
- βDELICIOUS
- βPRACTISE
- βUNIVERSE
- βSHILLING
- βCONTEST
- βASHORE
- βCOMMIT
- βADMINISTRATION
- βSTUDIED
- βRIGID
- βADORN
- βELSEWHERE
- βINNOCENCE
- βJOURNAL
- βLANDSCAPE
- βTELEGRAPH
- βANGRILY
- βCAMPAIGN
- βUNJUST
- βCHALLENGE
- βTORRENT
- βRELATE
- βASSEMBLED
- βIMPRESSED
- βCANOE
- βCONCLUD
- βQUIXOTE
- βSATISFACTORY
- βNIECE
- βDEAF
- βRAFT
- βJIMMY
- βGLID
- βREGULAT
- βCHATTER
- βGLACIER
- βENVY
- βSTATUE
- βBOSTON
- βRICHMOND
- βDENIED
- βFANNY
- βSOLOMON
- βVULGAR
- βSTALK
- βREPLACE
- βSPOON
- βBASIN
- βFEATURE
- βCONVICT
- βARCHITECT
- βADMIRAL
- βRIBBON
- βPERMANENT
- βAPRIL
- βJOLLY
- βNEIGHBORHOOD
- βIMPART
- BOROUGH
- CAMP
- βHORRID
- βIMMORTAL
- βPRUDENCE
- βSPANIARD
- βSUPPOSING
- βTELEPHONE
- βTEMPERATURE
- βPENETRATE
- βOYSTER
- βAPPOINTMENT
- βEGYPTIAN
- βDWELT
- βNEPHEW
- βRAILROAD
- βSEPTEMBER
- βDEVICE
- βWHEAT
- βGILBERT
- βELEGANT
- βADVERTISE
- βRATIONAL
- βTURTLE
- βBROOD
- βASSEMBLY
- βCULTIVATE
- βEDITOR
- βSPECIMEN
- βUNDOUBTEDLY
- βWHALE
- βDROPPING
- βBALLOON
- βMEDICAL
- COMB
- βCOMPOSITION
- βFOOTSTEPS
- βLAUNCELOT
- βDISCOURSE
- βERRAND
- βCONVERSE
- βADVANCING
- βDOWNSTAIRS
- βTUMULT
- βCORRUPT
- βSUFFICE
- βANGUISH
- βSHAGGY
- βRETIRE
- βTIMBER
- βBLAZE
- βABSTRACT
- βEMBROIDER
- βPHOTOGRAPH
- βPROSPERITY
- βTERRIBLY
- βTERRITORY
- βTHRESHOLD
- βPAVEMENT
- βINJURED
- βLIMP
- βAGITATION
- βRASCAL
- βPRESUME
- βOBSERVING
- βOBSTACLE
- βSIMPLICITY
- βSLUMBER
- βSUPPLIED
- βCOMBINATION
- βDRAIN
- βWILDERNESS
- βBELIEVING
- βVILLAIN
- βRECKLESS
- βINJURY
- βCLAPP
- βFRIDAY
- βHERCULES
- βKENNEDY
- βSYMPTOM
- βSLEDGE
- βCEILING
- βLEMON
- βPLAGUE
- βMONDAY
- βCANVAS
- βIMPATIENCE
- βUNCOMFORTABLE
- βACCESS
- βFROZEN
- βSENATOR
- βFRANZ
- βSWIMMING
- βBARRIER
- βADJUST
- βCOMPARISON
- βPROCLAIM
- βWRINKL
- βOVERLOOK
- βMITYA
- βGUILT
- βPERCEPTION
- βPRECAUTION
- βSPECTATOR
- βSURPRISING
- βDISTRACT
- βDISDAIN
- βBONNET
- βMAGNET
- βPROFESS
- βCONFOUND
- βNARRATIVE
- βSTRUCTURE
- βSKETCH
- βULTIMATE
- βGLOBE
- βINSECT
- FICIENCY
- βORCHARD
- βAMIABLE
- βDESCENT
- βINDEPENDENCE
- βMANUFACTURE
- βSPRINKLE
- βNIGHTINGALE
- βCUSHION
- βEMINENT
- βSCOTT
- βARRAY
- βCOSETTE
- βWAVING
- βEXTRACT
- βIRREGULAR
- βPERSECUT
- βDERIVED
- βWITHDREW
- βCAUTION
- βSUSPICIOUS
- βMEMORIES
- βNOWHERE
- βSUBTLE
- βTHOROUGH
- Q
- βAPPROPRIATE
- βSLAUGHTER
- βYOURSELVES
- βTHUMB
- βTWAS
- βABODE
- βBIDDING
- βCONSPICUOUS
- βREBECCA
- βSERGEANT
- βAPRON
- βANTICIPATE
- βDISCIPLINE
- βGLANCING
- βPILGRIM
- βSULLEN
- βCONTRIBUTE
- βPRAIRIE
- βCARVED
- βCOMMERCE
- βEXCLAMATION
- βMUSCULAR
- βNOVEMBER
- βPHENOMENA
- βSYMBOL
- βUMBRELLA
- βDIMINISH
- βPARLOUR
- βTHREATENING
- βSTUMP
- βEXTENSIVE
- βPLEASING
- βREMEMBRANCE
- βCOMBINED
- βSHERIFF
- βSHAFT
- βLAURA
- βINTERCOURSE
- βSTRICKEN
- βSUPPLIES
- βLANDLORD
- βSHRINK
- βPRICK
- βCAESAR
- βDRUG
- βBEWILDERED
- βNAUTILUS
- βBRUTAL
- βCOMMERCIAL
- βMAGGIE
- βSPHERE
- βVIRGIN
- βBRETHREN
- βDESTINY
- βPOLICY
- βTERRIFIED
- βHOUSEKEEPER
- βCRAZY
- βARDENT
- βDISCERN
- βWRAP
- βMARQUIS
- βRUSSIA
- MOUTH
- βBRITAIN
- βHARBOUR
- βCONCERT
- βDONKEY
- βDAMAGE
- βSLIM
- ABOUT
- βLUXURY
- βMONSTROUS
- βTENDENCY
- βPARADISE
- βCULTURE
- βJULIUS
- βRAOUL
- βREMEDY
- βDECAY
- βSCOLD
- βSPLIT
- βASSAULT
- βDECEMBER
- βMOSCOW
- βEXPLORE
- βTROUSERS
- βWRIST
- PIECE
- βMUSKET
- βVALENTINE
- βTYRANT
- βABRAHAM
- βMEDIUM
- βARTIFICIAL
- βFACULTY
- βOBLIGATION
- βRESEMBLANCE
- βINQUIRIES
- βDETAIN
- βSWARM
- βPLEDGE
- βADMIRABLE
- βDEFECT
- βSUPERINTEND
- βPATRIOT
- βCLUNG
- βDISMAL
- βRECIT
- βIGNOR
- βAMELIA
- βJUSTIFY
- βELEPHANT
- βESTIMATE
- βKNELT
- βSERVING
- βWHIM
- βSHRILL
- βSTUDIO
- βTEXT
- βALEXANDER
- βWROUGHT
- βABUNDANT
- βSITUATED
- βREGAIN
- βFIERY
- βSNEER
- βSWEAT
- βGLARE
- βNIGH
- βESCORT
- βINEVITABLE
- βPSMITH
- βRELUCTANT
- βPRECEDING
- βRESORT
- βOUTRAGE
- βAMBASSADOR
- βCONSOLATION
- βRECOGNITION
- βREMORSE
- βBEHALF
- βFORMIDABLE
- βGRAVITY
- βDIVIDE
- βCONFRONT
- βGIGANTIC
- βOCTOBER
- βFLANK
- βSLEW
- βCLARA
- βFILM
- βBULK
- βPOMP
- βELEANOR
- βEMPHASIS
- βJAPANESE
- βCAVALRY
- βEXCLUSIVE
- βPERFUME
- βBRONZE
- βFEDERAL
- βLIQUID
- βRUBBING
- βOVEN
- DOLPH
- βCONVULS
- βDEPRIVED
- βRESPONSIBILITY
- βSIGNIFICANT
- βWAISTCOAT
- βCLUSTER
- βMARTHA
- βREVERSE
- βATTORNEY
- βDROOP
- βSKILFUL
- βHABITUAL
- βPUMP
- βINTERVEN
- βOWL
- βCONJECTURE
- βFANTASTIC
- βRESPONSIBLE
- βDESTINED
- βDOCUMENT
- βTHEREUPON
- βGODDESS
- βPACIFIC
- βWARRANT
- βCOSTUME
- βBRIDLE
- βCALIFORNIA
- βDEMOCRATIC
- βEUSTACE
- βSQUIRREL
- βUNCOMMON
- βMARVELLOUS
- βPLOUGH
- βTRAGEDY
- βVAULT
- βHESITATE
- βREFRAIN
- βADMIRING
- βCORPORAL
- βENTITLED
- βSHREWD
- βSQUEEZ
- βACCURATE
- βTEMPEST
- βMONUMENT
- βSIEGE
- βCHINESE
- βRAVEN
- βLOUNG
- βASSASSIN
- βINFLICT
- βAGITATED
- βDESIRABLE
- βEARLIEST
- βLAUNCH
- βPILOT
- βPULSE
- βMUTE
- LEIGH
- βLIQUOR
- βSCARECROW
- βSKULL
- βDESOLATE
- βSUBLIME
- βSERENE
- βRECESS
- βWAKING
- βCHARLOTTE
- βCIRCULAR
- βINJUSTICE
- βPINOCCHIO
- βPRISCILLA
- βTHYSELF
- βOCCURRENCE
- βCASUAL
- βFRANTIC
- βLEGEND
- βFERTIL
- βBACKGROUND
- βDELICACY
- βESTRALLA
- βMANUSCRIPT
- βRESPONSE
- βUNIVERSITY
- βWOLVES
- βSCANDAL
- βSTUMBLE
- βHOARSE
- βBODILY
- βCONVENT
- βEXAMINING
- βINCAPABLE
- βPERCEIVING
- βPHILADELPHIA
- βSUBSEQUENT
- βTHIEVES
- βACCUMULAT
- βDAMSEL
- βSCOTCH
- βUNDERNEATH
- βNOBILITY
- βSMASH
- βREVOLT
- βENGAGE
- βCATHEDRAL
- βCHAMPION
- βDESPATCH
- βETERNITY
- βJANUARY
- βPLEADED
- βPROBABILITY
- βJIMMIE
- βPARALLEL
- βFISHERMAN
- βJERRY
- βSWORE
- βDRAUGHT
- βOPPONENT
- βPRIMITIVE
- βSIGNIFICANCE
- βSUBSTANTIAL
- βAMAZED
- βDUNBAR
- βCOMMEND
- βCONTEMPLATE
- βTESTIMONY
- βIMPERIAL
- βADAPT
- βJUICE
- βCALAMIT
- CULAR
- βCHATEAU
- βPHOENIX
- βPRUDENT
- βSOLUTION
- βVILLEFORT
- βREACTION
- βRELAX
- βYU
- βPROHIBIT
- βDISTRUST
- βPLUNDER
- βWELFARE
- βNAVIGAT
- βPARLOR
- βLAZY
- βDETACH
- OMETER
- βPRIV
- βDISCOURAGE
- βOBSTINATE
- βREJOICING
- βSERMON
- βVEHICLE
- βFANCIES
- βENLIGHTEN
- βACUTE
- βILLUSION
- βANTHEA
- βMARTIAN
- βEXCITE
- βGENEROSITY
- OLOGIST
- βAMAZING
- βUNWORTHY
- βINTERNAL
- βINCENSE
- βVIBRAT
- βADHERE
- ROACH
- βFEBRUARY
- βMEXICAN
- βPOTATOES
- βINCESSANT
- βINTERPOSED
- βPARCEL
- βVEXED
- βPROMOTE
- MIDST
- βARISTOCRAT
- βCYRIL
- βEMBARK
- βABUNDANCE
- βLITERALLY
- βSURGEON
- βTERRACE
- βATLANTIC
- βMARTYR
- βSPECK
- βSENATE
- βLOAF
- βADMINISTER
- βAPPREHEND
- βSUBDUED
- βTEMPORARY
- βDOMINION
- βELABORATE
- βDIGNIFIED
- βELIZA
- βSPLASH
- βCONSEIL
- βDEXTER
- βUNSEEN
- βTRAGIC
- VOCATION
- βGRATIFY
- βBACHELOR
- βDEFENSE
- βEXCURSION
- βFACULTIES
- βPROPRIETOR
- βSYMPATHETIC
- βUNNECESSARY
- βRADIANT
- βVACANT
- βOUNCE
- βSCREW
- βPHENOMENON
- βPROMINENT
- βWORRIED
- βSTUDIES
- βCLIMATE
- βKEITH
- βARAMIS
- βBLISS
- βCONTINUAL
- βSURPASS
- βHEBREW
- βIDENTITY
- βPROVOKE
- βTEMPERAMENT
- βCHARIOT
- βHARBOR
- βNINTH
- βPRIOR
- βDESIROUS
- βJERUSALEM
- βUNDERTAKING
- βEDISON
- βMIRTH
- βSCOUT
- βAPPARATUS
- βILLUSTRATION
- βINTELLIGIBLE
- βINVARIABLY
- βPIERCED
- βREVIEW
- βFLICKER
- βHAZARD
- βREVELATION
- βDIXON
- βEXCITING
- βGOSPEL
- βCONSTANCE
- βOVERTAKE
- βGUINEA
- βALADDIN
- βCHICAGO
- βTULLIVER
- βHAMILTON
- βGARRISON
- βDISCIPLE
- βINTENSITY
- βTRAITOR
- βCHANCELLOR
- βPROVERB
- βDAGGER
- βFORESEE
- βCONFIDE
- βGLIMMER
- βCHAUVELIN
- βILLUSTRATE
- βVOLUNTEER
- βJUNGLE
- βSTREAK
- βSUNRISE
- βDISSOLV
- βQUEST
- βAWHILE
- βFELICITY
- βLEGISLATURE
- βLEONORA
- βMAGAZINE
- βPITIFUL
- βCOLONY
- βSHAWL
- βARRIVING
- βFUNDAMENTAL
- βCARPENTER
- βOVERFLOW
- βEXPAND
- βHARVEST
- βFEMININE
- βINNUMERABLE
- βSCRAMBLE
- βTWENTIETH
- βTRIFLING
- βGHASTL
- βCONQUEST
- βDANIEL
- βFACILIT
- βFORSAKE
- βBEHAVIOUR
- βGORGEOUS
- βPRODUCING
- βHAPPIER
- βPROMISING
- βRAINBOW
- βINSTINCTIVELY
- βDECREE
- βEYEBROWS
- βIRRESISTIBLE
- βPHARAOH
- βSCROOGE
- βUNNATURAL
- βCRUMBS
- βREFINED
- βDREARY
- βTRENCH
- βCONVINCE
- βFRINGE
- βEXTREMITY
- βINTIMACY
- βSCOUNDREL
- βSUFFRAGE
- βUNEASINESS
- βBARRICADE
- βCIRCULAT
- βSAMUEL
- βBRUCE
- βDARCY
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: null
zero_infinity: true
joint_net_conf: null
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
frontend: default
frontend_conf:
n_fft: 512
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 10
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz
model: espnet
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
preencoder: null
preencoder_conf: {}
encoder: e_branchformer
encoder_conf:
output_size: 512
attention_heads: 8
attention_layer_type: rel_selfattn
pos_enc_layer_type: rel_pos
rel_pos_type: latest
cgmlp_linear_units: 3072
cgmlp_conv_kernel: 31
use_linear_after_conv: false
gate_activation: identity
num_blocks: 17
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
layer_drop_rate: 0.1
linear_units: 1024
positionwise_layer_type: linear
macaron_ffn: true
use_ffn: true
merge_conv_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
layer_drop_rate: 0.2
preprocessor: default
preprocessor_conf: {}
required:
- output_dir
- token_list
version: '202211'
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Apisate/DialoGPT-small-jordan | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2023-01-07T02:36:35Z | ---
license: creativeml-openrail-m
---
This repo contains Stable Diffusion models created by merging various other models. See below for the models contained in each merge, along with links to the original models where available on Hugging Face.
## Usage
These models can be used with any Stable Diffusion build that supports the .safetensors format, including the extremely popular Web UI by Automatic1111. Please consult the documentation for your installation of Stable Diffusion for more specific instructions.
I recommend using these models with the [kl-f8-anime2 VAE published by hakurei](https://huggingface.co/hakurei/waifu-diffusion-v1-4). Please consult the documentation for your installation of Stable Diffusion for instructions on using a custom VAE.
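Outside of the Web UI, a minimal (untested) sketch of loading one of these merges together with the recommended VAE in `diffusers` could look like the following; the file paths are placeholders for wherever you downloaded the .safetensors merge and the kl-f8-anime2 VAE, and it assumes a diffusers version recent enough to provide `from_single_file`.
```python
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Placeholder local paths -- point these at your downloaded files.
vae = AutoencoderKL.from_single_file("kl-f8-anime2.ckpt", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_single_file(
    "EveryoneMix.safetensors", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Illustrative prompt using tokens from the lists below.
image = pipe("m_wlop artstyle, portrait of a woman, intricate, highly detailed").images[0]
image.save("example.png")
```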
## Example images:
<table>
<tr>
<td><img src=https://i.imgur.com/NtT1U2k.jpg width=100% height=100%/></td>
</tr>
</table>
<table>
<tr>
<td><img src=https://i.imgur.com/oVUsmv4.jpg width=100% height=100%/></td>
</tr>
</table>
<table>
<tr>
<td><img src=https://i.imgur.com/EH2246Z.jpg width=100% height=100%/></td>
</tr>
</table>
<table>
<tr>
<td><img src=https://i.imgur.com/v1ehb5W.jpg width=100% height=100%/></td>
</tr>
</table>
## EveryoneMix.safetensors
[wlop-any by SirVeggie](https://huggingface.co/SirVeggie/wlop),
[nixeu-any by SirVeggie](https://huggingface.co/SirVeggie/nixeu),
[Ilya_5700 by flamesbob](https://huggingface.co/flamesbob/Ilya_model),
[ross_model12k by flamesbob](https://huggingface.co/flamesbob/ross_model),
[ouroboros_v3_blend_m_ouroboros_token_style_classword by Eppinette](https://huggingface.co/Eppinette/Ouroboros),
[Bo_Chen03step02300pruned by JRW1994](https://huggingface.co/JRW1994/Bo_Chen/tree/main),
[dreamlike-diffusion-1.0 by dreamlike-art](https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0),
[kntkV3_11000 by nubby](https://huggingface.co/nubby/kantoku),
[Elysium_Anime_V2 by hesw23168](https://huggingface.co/hesw23168/SD-Elysium-Model),
F222_F222 by Zeipher AI (?) - Original upload not available on Hugging Face
Using the following tokens from the source models may produce varying effects in the outputs when added to your prompt. Try mixing and matching them to see what works.
Tokens: ```"m_wlop, m_nixeu, m_ilya, m_ross, m_ouroboros, dreamlikeart, kntk"``` Classes: ```"artstyle, illustration style, style"```
## EveryoneMix-Shira.safetensors and EveryoneMix-Shira-ClipFix.safetensors
[wlop-any by SirVeggie](https://huggingface.co/SirVeggie/wlop),
[nixeu-any by SirVeggie](https://huggingface.co/SirVeggie/nixeu),
[Ilya_5700 by flamesbob](https://huggingface.co/flamesbob/Ilya_model),
[ross_model12k by flamesbob](https://huggingface.co/flamesbob/ross_model),
[ouroboros_v3_blend_m_ouroboros_token_style_classword by Eppinette](https://huggingface.co/Eppinette/Ouroboros),
[Bo_Chen03step02300pruned by JRW1994](https://huggingface.co/JRW1994/Bo_Chen/tree/main),
[dreamlike-diffusion-1.0 by dreamlike-art](https://huggingface.co/dreamlike-art/dreamlike-diffusion-1.0),
[kntkV3_11000 by nubby](https://huggingface.co/nubby/kantoku),
[Shirayuki_Anime_v1-fp16 by hesw23168](https://huggingface.co/hesw23168/SD_Shirayuki_Model),
F222_F222 by Zeipher AI (?) - Original upload not available on Hugging Face
Using the following tokens from the source models may produce varying effects in the outputs when added to your prompt. Try mixing and matching them to see what works.
Tokens: ```"m_wlop, m_nixeu, m_ilya, m_ross, m_ouroboros, dreamlikeart, kntk"``` Classes: ```"artstyle, illustration style, style"```
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
Appolo/TestModel | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-07T03:05:30Z | ---
language:
- en
tags:
- stable-diffusion
- text-to-image
license: creativeml-openrail-m
inference: true
---
## Information
Fine-tuned SD v1-5 model, 18720 steps, 9 epochs
Aspect Ratio Bucketing centered at 768 resolution, aspect ratio 16:9 (1024x576)
Made with 208 pictures from the movie Redline by MadHouse;
Captions by WD-v1-4
## Tags
Tokens are listed in tags.txt along with their occurrence counts in [#] format
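## Usage
A rough, untested sketch of generation with `diffusers` at the training aspect ratio is shown below; `<this-repo>` is a placeholder for this repository's actual ID, and the prompt is purely illustrative (use tokens from tags.txt).
```python
import torch
from diffusers import StableDiffusionPipeline

# "<this-repo>" is a placeholder -- replace it with this repository's actual ID.
pipe = StableDiffusionPipeline.from_pretrained("<this-repo>", torch_dtype=torch.float16).to("cuda")

# 1024x576 matches the 16:9 bucketing resolution used for fine-tuning.
image = pipe("an illustrative prompt built from tokens in tags.txt", width=1024, height=576).images[0]
image.save("sample.png")
```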
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce or share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
ArBert/bert-base-uncased-finetuned-ner | [
"pytorch",
"tensorboard",
"bert",
"token-classification",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | 2023-01-07T04:32:57Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- sst2
model-index:
- name: finetuned_gpt2-medium_sst2_negation0.05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2-medium_sst2_negation0.05
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8275 | 1.0 | 1062 | 3.3098 |
| 2.5383 | 2.0 | 2124 | 3.3873 |
| 2.3901 | 3.0 | 3186 | 3.4461 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
Aspect11/DialoGPT-Medium-LiSBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: openrail
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
---
A repost of [this model](https://civitai.com/models/2583/grape-and-grapefruit-hentai-models) by [ikena](https://civitai.com/user/ikena) on CivitAI.
Contact me if you are the owner of this model and would like it hosted on your own Hugging Face repo instead.
Augustvember/WokkaBot | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
model-index:
- name: libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-take-2-train-extractor
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-take-2-train-extractor
This model is a fine-tuned version of [rohitp1/libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers](https://huggingface.co/rohitp1/libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 123.6555
- Wer: 0.2525
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 488.7409 | 0.22 | 200 | 175.5911 | 0.4211 |
| 470.3788 | 0.45 | 400 | 174.7645 | 0.4192 |
| 472.5283 | 0.67 | 600 | 173.8402 | 0.4184 |
| 474.1535 | 0.9 | 800 | 173.4610 | 0.4162 |
| 488.9395 | 1.12 | 1000 | 172.2722 | 0.4172 |
| 468.5794 | 1.35 | 1200 | 170.7173 | 0.4134 |
| 473.337 | 1.57 | 1400 | 171.2823 | 0.4069 |
| 453.5572 | 1.79 | 1600 | 168.4595 | 0.4093 |
| 456.1514 | 2.02 | 1800 | 166.4398 | 0.4000 |
| 447.1798 | 2.24 | 2000 | 167.9152 | 0.3994 |
| 438.2698 | 2.47 | 2200 | 166.1868 | 0.3974 |
| 438.1535 | 2.69 | 2400 | 164.5998 | 0.3946 |
| 442.7301 | 2.91 | 2600 | 162.8684 | 0.3956 |
| 440.5328 | 3.14 | 2800 | 162.3347 | 0.3861 |
| 449.2731 | 3.36 | 3000 | 160.7815 | 0.3847 |
| 436.718 | 3.59 | 3200 | 158.1402 | 0.3849 |
| 425.2622 | 3.81 | 3400 | 157.0624 | 0.3778 |
| 430.4346 | 4.04 | 3600 | 156.7345 | 0.3764 |
| 402.7262 | 4.26 | 3800 | 154.0662 | 0.3635 |
| 405.4374 | 4.48 | 4000 | 153.8651 | 0.3683 |
| 395.4657 | 4.71 | 4200 | 152.3929 | 0.3609 |
| 401.6397 | 4.93 | 4400 | 150.4990 | 0.3576 |
| 397.0791 | 5.16 | 4600 | 151.3244 | 0.3634 |
| 399.281 | 5.38 | 4800 | 149.6291 | 0.3513 |
| 392.448 | 5.61 | 5000 | 149.6411 | 0.3474 |
| 396.3989 | 5.83 | 5200 | 148.5435 | 0.3459 |
| 381.1296 | 6.05 | 5400 | 147.9963 | 0.3501 |
| 384.1926 | 6.28 | 5600 | 145.6473 | 0.3435 |
| 364.3308 | 6.5 | 5800 | 145.9607 | 0.3381 |
| 365.9475 | 6.73 | 6000 | 142.4151 | 0.3382 |
| 359.6295 | 6.95 | 6200 | 139.8908 | 0.3315 |
| 361.9945 | 7.17 | 6400 | 143.2300 | 0.3403 |
| 370.9596 | 7.4 | 6600 | 140.1414 | 0.3280 |
| 363.0185 | 7.62 | 6800 | 140.3988 | 0.3240 |
| 354.5542 | 7.85 | 7000 | 143.5237 | 0.3286 |
| 356.7341 | 8.07 | 7200 | 145.7105 | 0.3229 |
| 342.3261 | 8.3 | 7400 | 137.8948 | 0.3188 |
| 343.8778 | 8.52 | 7600 | 138.7520 | 0.3085 |
| 327.9473 | 8.74 | 7800 | 136.1127 | 0.3122 |
| 339.7105 | 8.97 | 8000 | 136.3135 | 0.3084 |
| 322.9032 | 9.19 | 8200 | 136.0534 | 0.3089 |
| 332.4099 | 9.42 | 8400 | 136.3784 | 0.3079 |
| 333.1054 | 9.64 | 8600 | 136.3690 | 0.3020 |
| 325.0327 | 9.87 | 8800 | 138.1514 | 0.3022 |
| 326.1452 | 10.09 | 9000 | 130.8793 | 0.2944 |
| 319.7307 | 10.31 | 9200 | 133.0722 | 0.2945 |
| 322.89 | 10.54 | 9400 | 131.6615 | 0.2961 |
| 307.7924 | 10.76 | 9600 | 129.8601 | 0.2917 |
| 322.2392 | 10.99 | 9800 | 131.7703 | 0.2911 |
| 306.9055 | 11.21 | 10000 | 130.2165 | 0.2878 |
| 297.5498 | 11.43 | 10200 | 130.4440 | 0.2920 |
| 300.9818 | 11.66 | 10400 | 130.6544 | 0.2862 |
| 300.7568 | 11.88 | 10600 | 128.4007 | 0.2857 |
| 298.6313 | 12.11 | 10800 | 129.3903 | 0.2808 |
| 286.8174 | 12.33 | 11000 | 129.0809 | 0.2824 |
| 290.7518 | 12.56 | 11200 | 130.4312 | 0.2827 |
| 292.7182 | 12.78 | 11400 | 129.6407 | 0.2829 |
| 287.0013 | 13.0 | 11600 | 128.5187 | 0.2841 |
| 262.7644 | 13.23 | 11800 | 128.3923 | 0.2798 |
| 277.8379 | 13.45 | 12000 | 128.4876 | 0.2786 |
| 272.4847 | 13.68 | 12200 | 126.7397 | 0.2738 |
| 286.6665 | 13.9 | 12400 | 129.2148 | 0.2823 |
| 281.27 | 14.13 | 12600 | 131.3539 | 0.2796 |
| 266.3464 | 14.35 | 12800 | 127.2011 | 0.2758 |
| 274.4771 | 14.57 | 13000 | 128.8553 | 0.2784 |
| 266.4516 | 14.8 | 13200 | 125.6450 | 0.2730 |
| 266.1086 | 15.02 | 13400 | 125.1995 | 0.2709 |
| 264.5101 | 15.25 | 13600 | 126.9386 | 0.2723 |
| 266.8765 | 15.47 | 13800 | 124.8972 | 0.2724 |
| 255.5908 | 15.7 | 14000 | 125.3817 | 0.2716 |
| 260.3176 | 15.92 | 14200 | 124.9812 | 0.2698 |
| 251.0676 | 16.14 | 14400 | 127.1510 | 0.2695 |
| 255.0812 | 16.37 | 14600 | 127.9661 | 0.2709 |
| 254.8599 | 16.59 | 14800 | 125.1549 | 0.2670 |
| 255.7383 | 16.82 | 15000 | 125.9465 | 0.2705 |
| 242.564 | 17.04 | 15200 | 126.6244 | 0.2669 |
| 245.8529 | 17.26 | 15400 | 125.0135 | 0.2668 |
| 250.1366 | 17.49 | 15600 | 123.4417 | 0.2633 |
| 244.0923 | 17.71 | 15800 | 123.3352 | 0.2654 |
| 248.4393 | 17.94 | 16000 | 122.9122 | 0.2645 |
| 252.4732 | 18.16 | 16200 | 122.2313 | 0.2581 |
| 249.2825 | 18.39 | 16400 | 123.7648 | 0.2618 |
| 250.1891 | 18.61 | 16600 | 124.0998 | 0.2607 |
| 243.6611 | 18.83 | 16800 | 123.0910 | 0.2576 |
| 242.8351 | 19.06 | 17000 | 122.3869 | 0.2576 |
| 237.169 | 19.28 | 17200 | 123.0963 | 0.2577 |
| 230.8865 | 19.51 | 17400 | 124.9314 | 0.2589 |
| 228.3782 | 19.73 | 17600 | 126.1155 | 0.2602 |
| 235.9318 | 19.96 | 17800 | 121.9966 | 0.2551 |
| 231.499 | 20.18 | 18000 | 123.4103 | 0.2583 |
| 234.1825 | 20.4 | 18200 | 122.7898 | 0.2572 |
| 234.1546 | 20.63 | 18400 | 124.8323 | 0.2577 |
| 228.4214 | 20.85 | 18600 | 122.2580 | 0.2561 |
| 229.5802 | 21.08 | 18800 | 122.1630 | 0.2550 |
| 222.507 | 21.3 | 19000 | 122.7615 | 0.2543 |
| 223.9583 | 21.52 | 19200 | 123.3316 | 0.2557 |
| 231.9215 | 21.75 | 19400 | 121.7923 | 0.2542 |
| 229.7037 | 21.97 | 19600 | 121.5026 | 0.2533 |
| 232.5929 | 22.2 | 19800 | 123.7730 | 0.2527 |
| 213.1247 | 22.42 | 20000 | 121.8280 | 0.2506 |
| 224.965 | 22.65 | 20200 | 123.2294 | 0.2527 |
| 228.214 | 22.87 | 20400 | 122.9256 | 0.2544 |
| 216.6104 | 23.09 | 20600 | 124.1280 | 0.2510 |
| 220.0993 | 23.32 | 20800 | 124.4064 | 0.2523 |
| 232.2647 | 23.54 | 21000 | 123.6555 | 0.2525 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
|
Augustvember/WokkaBot4 | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | 2023-01-07T06:14:10Z | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-MLP-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 29.30 +/- 18.72
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Aurora/asdawd | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null |
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: de
datasets:
- lmqg/qg_dequad
pipeline_tag: text2text-generation
tags:
- answer extraction
widget:
- text: "Sommerzeit <hl> FrΓΌhling <hl>: Umstellung von Normalzeit auf Sommerzeit β die Uhr wird um eine Stunde ''vor''gestellt. Herbst: Umstellung von Sommerzeit auf Normalzeit β die Uhr wird um eine Stunde ''zurΓΌck''gestellt. Als Sommerzeit wird die gegenΓΌber der Zonenzeit meist um eine Stunde vorgestellte Uhrzeit bezeichnet, die wΓ€hrend eines bestimmten Zeitraums im Sommerhalbjahr (und oft auch etwas darΓΌber hinaus) als gesetzliche Zeit dient. Eine solche Regelung wird fast nur in LΓ€ndern der gemΓ€Γigten Zonen angewandt. Die mitteleuropΓ€ische Sommerzeit beginnt am letzten Sonntag im MΓ€rz um 2:00 Uhr MEZ, indem die StundenzΓ€hlung um eine Stunde von 2:00 Uhr auf 3:00 Uhr vorgestellt wird. Sie endet jeweils am letzten Sonntag im Oktober um 3:00 Uhr MESZ, indem die StundenzΓ€hlung um eine Stunde von 3:00 Uhr auf 2:00 Uhr zurΓΌckgestellt wird."
example_title: "Answering Extraction Example 1"
- text: "Iran === Landwirtschaft === Die landwirtschaftliche NutzflΓ€che betrΓ€gt trotz zahlreicher Gebirge und WΓΌsten 10 % der LandesflΓ€che, wobei ein Drittel kΓΌnstlich bewΓ€ssert wird. Die Landwirtschaft ist einer der grΓΆΓten Arbeitgeber des Landes. Wichtige Produkte sind Pistazien, Weizen, Reis, Zucker, Baumwolle, FrΓΌchte, NΓΌsse, Datteln, Wolle und Kaviar. Seit der Revolution von 1979 wurde der Anbau von Weintrauben wegen des islamischen Alkoholverbots auf den 200.000 Hektar RebflΓ€che fast vollstΓ€ndig auf Tafeltrauben und Rosinen umgestellt. Bei Rosinen ist <hl> der Iran <hl> inzwischen nach der TΓΌrkei der zweitgrΓΆΓte Exporteur der Welt, bei Safran mit ungefΓ€hr 90 % Marktanteil des globalen Bedarfs mit Abstand der grΓΆΓte."
example_title: "Answering Extraction Example 2"
model-index:
- name: lmqg/mbart-large-cc25-dequad-ae
results:
- task:
name: Text2text Generation
type: text2text-generation
dataset:
name: lmqg/qg_dequad
type: default
args: default
metrics:
- name: BLEU4 (Answer Extraction)
type: bleu4_answer_extraction
value: 0.0
- name: ROUGE-L (Answer Extraction)
type: rouge_l_answer_extraction
value: 3.48
- name: METEOR (Answer Extraction)
type: meteor_answer_extraction
value: 2.36
- name: BERTScore (Answer Extraction)
type: bertscore_answer_extraction
value: 54.55
- name: MoverScore (Answer Extraction)
type: moverscore_answer_extraction
value: 46.73
- name: AnswerF1Score (Answer Extraction)
type: answer_f1_score__answer_extraction
value: 6.06
- name: AnswerExactMatch (Answer Extraction)
type: answer_exact_match_answer_extraction
value: 0.0
---
# Model Card of `lmqg/mbart-large-cc25-dequad-ae`
This model is fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for answer extraction on the [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
### Overview
- **Language model:** [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25)
- **Language:** de
- **Training data:** [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="de", model="lmqg/mbart-large-cc25-dequad-ae")
# model prediction
answers = model.generate_a("das erste weltweit errichtete Hermann Brehmer 1855 im niederschlesischen ''GΓΆrbersdorf'' (heute SokoΕowsko, Polen).")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/mbart-large-cc25-dequad-ae")
output = pipe("Sommerzeit <hl> FrΓΌhling <hl>: Umstellung von Normalzeit auf Sommerzeit β die Uhr wird um eine Stunde ''vor''gestellt. Herbst: Umstellung von Sommerzeit auf Normalzeit β die Uhr wird um eine Stunde ''zurΓΌck''gestellt. Als Sommerzeit wird die gegenΓΌber der Zonenzeit meist um eine Stunde vorgestellte Uhrzeit bezeichnet, die wΓ€hrend eines bestimmten Zeitraums im Sommerhalbjahr (und oft auch etwas darΓΌber hinaus) als gesetzliche Zeit dient. Eine solche Regelung wird fast nur in LΓ€ndern der gemΓ€Γigten Zonen angewandt. Die mitteleuropΓ€ische Sommerzeit beginnt am letzten Sonntag im MΓ€rz um 2:00 Uhr MEZ, indem die StundenzΓ€hlung um eine Stunde von 2:00 Uhr auf 3:00 Uhr vorgestellt wird. Sie endet jeweils am letzten Sonntag im Oktober um 3:00 Uhr MESZ, indem die StundenzΓ€hlung um eine Stunde von 3:00 Uhr auf 2:00 Uhr zurΓΌckgestellt wird.")
```
## Evaluation
- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/mbart-large-cc25-dequad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_dequad.default.json)
| | Score | Type | Dataset |
|:-----------------|--------:|:--------|:-----------------------------------------------------------------|
| AnswerExactMatch | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| AnswerF1Score | 6.06 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| BERTScore | 54.55 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_1 | 3.45 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_2 | 0.92 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_3 | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| Bleu_4 | 0 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| METEOR | 2.36 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| MoverScore | 46.73 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
| ROUGE_L | 3.48 | default | [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_dequad
- dataset_name: default
- input_types: ['paragraph_sentence']
- output_types: ['answer']
- prefix_types: None
- model: facebook/mbart-large-cc25
- max_length: 512
- max_length_output: 32
- epoch: 8
- batch: 8
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 8
- label_smoothing: 0.15
The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mbart-large-cc25-dequad-ae/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
|
Ayham/albert_roberta_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- sst2
model-index:
- name: finetuned_gpt2-medium_sst2_negation0.8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2-medium_sst2_negation0.8
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.7265 | 1.0 | 1111 | 3.2385 |
| 2.4446 | 2.0 | 2222 | 3.3030 |
| 2.2992 | 3.0 | 3333 | 3.3634 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
Ayham/distilbert_gpt2_summarization_cnndm | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class π§¨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute π¦.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('szamanian/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Ayham/distilbert_roberta_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- sst2
model-index:
- name: finetuned_gpt2-large_sst2_negation0.01
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2-large_sst2_negation0.01
This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7003
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4321 | 1.0 | 1060 | 3.3586 |
| 1.8705 | 2.0 | 2120 | 3.6034 |
| 1.6189 | 3.0 | 3180 | 3.7003 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
|
Ayham/robertagpt2_cnn | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | 2023-01-07T09:48:16Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3010
- Accuracy: 0.8710
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.7305 | 0.25 | 75 | 1.2898 | 0.5429 |
| 0.8441 | 1.25 | 150 | 0.5689 | 0.8 |
| 0.2166 | 2.25 | 225 | 0.2856 | 0.8571 |
| 0.2691 | 3.25 | 300 | 0.1857 | 0.9286 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Ayham/robertagpt2_xsum2 | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
license: mit
tags:
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: finetune_deberta_small_model
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: super_glue
type: super_glue
config: boolq
split: train
args: boolq
metrics:
- name: Accuracy
type: accuracy
value: 0.8021406727828746
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_deberta_small_model
This model is a fine-tuned version of [nc33/finetune_deberta_small_model](https://huggingface.co/nc33/finetune_deberta_small_model) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6788
- Accuracy: 0.8021
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3666 | 1.0 | 590 | 0.5625 | 0.8003 |
| 0.2501 | 2.0 | 1180 | 0.6762 | 0.7976 |
| 0.2343 | 3.0 | 1770 | 0.6788 | 0.8021 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Ayham/xlmroberta_large_gpt2_summarization_cnndm | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
language:
- en
thumbnail:
tags:
- audio-classification
- speechbrain
- embeddings
- Accent
- Identification
- pytorch
- ECAPA-TDNN
- TDNN
- CommonAccent
license: "mit"
datasets:
- CommonVoice
metrics:
- Accuracy
widget:
- example_title: Australian English
src: https://huggingface.co/Jzuluaga/dummy-accent-id-commonlanguage_ecapa/resolve/main/australia_1.wav
- example_title: African English
src: https://huggingface.co/Jzuluaga/dummy-accent-id-commonlanguage_ecapa/resolve/main/african_1.wav
- example_title: Canadian English
src: https://huggingface.co/Jzuluaga/dummy-accent-id-commonlanguage_ecapa/resolve/main/canada_1.wav
---
# DEPRECATED: GO TO: https://huggingface.co/Jzuluaga/accent-id-commonaccent_ecapa
GO TO (BEST MODEL): https://huggingface.co/Jzuluaga/accent-id-commonaccent_ecapa
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
# Accent Identification from Speech Recordings with ECAPA embeddings on CommonAccent
This repository provides all the necessary tools to perform accent identification from speech recordings with SpeechBrain.
The system uses a model pretrained on the CommonAccent dataset in English (16 accents).
The provided system can recognize the following 16 accents of English from short speech recordings:
- african
- australia
- bermuda
- canada
- england
- hongkong
- indian
- ireland
- malaysia
- newzealand
- philippines
- scotland
- singapore
- southatlandtic
- us
- wales
The portions of data for each set are:
- Train set: 50hrs / 45k samples
- Dev set: 1.24hrs / 1062 samples
- Test set: 1.15hrs / 972 samples
(This code was developed for the SLT-CODE hackathon: https://slt2022.org/hackathon.php)
### To UPDATE ALL BELOW
For a better experience, we encourage you to learn more about
[SpeechBrain](https://speechbrain.github.io). The given model performance on the test set is:
| Release | Accuracy (%)
|:-------------:|:--------------:|
| 30-06-21 | 85.0 |
## Pipeline description
This system is composed of an ECAPA model coupled with statistical pooling. A classifier, trained with Categorical Cross-Entropy Loss, is applied on top of that.
The system is trained with recordings sampled at 16kHz (single channel).
The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file* if needed. Make sure your input tensor is compliant with the expected sampling rate if you use *encode_batch* and *classify_batch*.
## Install SpeechBrain
First of all, please install SpeechBrain with the following command:
```
pip install speechbrain
```
Please notice that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform Language Identification from Speech Recordings
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
classifier = EncoderClassifier.from_hparams(source="speechbrain/lang-id-commonlanguage_ecapa", savedir="pretrained_models/lang-id-commonlanguage_ecapa")
# Italian Example
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/lang-id-commonlanguage_ecapa/example-it.wav')
print(text_lab)
# French Example
out_prob, score, index, text_lab = classifier.classify_file('speechbrain/lang-id-commonlanguage_ecapa/example-fr.wav')
print(text_lab)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
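For example, reusing the checkpoint from the snippet above:
```python
from speechbrain.pretrained import EncoderClassifier

classifier = EncoderClassifier.from_hparams(
    source="speechbrain/lang-id-commonlanguage_ecapa",
    savedir="pretrained_models/lang-id-commonlanguage_ecapa",
    run_opts={"device": "cuda"},  # run inference on the GPU
)
```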
### Training
The model was trained with SpeechBrain (a02f860e).
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```
cd recipes/CommonLanguage/lang_id
python train.py hparams/train_ecapa_tdnn.yaml --data_folder=your_data_folder
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1sD2u0MhSmJlx_3RRgwsYzevX81RM8-WE?usp=sharing).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
#### Referencing ECAPA
```@inproceedings{DBLP:conf/interspeech/DesplanquesTD20,
author = {Brecht Desplanques and
Jenthe Thienpondt and
Kris Demuynck},
editor = {Helen Meng and
Bo Xu and
Thomas Fang Zheng},
title = {{ECAPA-TDNN:} Emphasized Channel Attention, Propagation and Aggregation
in {TDNN} Based Speaker Verification},
booktitle = {Interspeech 2020},
pages = {3830--3834},
publisher = {{ISCA}},
year = {2020},
}
```
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and FranΓ§ois Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
``` |
Ayham/xlnet_gpt2_summarization_cnn_dailymail | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"dataset:cnn_dailymail",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | Access to model chriscelaya/chriscelaya-trained-model is restricted and you are not in the authorized list. Visit https://huggingface.co/chriscelaya/chriscelaya-trained-model to ask for access. |
Ayham/xlnet_gpt_xsum | [
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"EncoderDecoderModel"
],
"model_type": "encoder-decoder",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- dreambooth
---
### DreamBooth trained with my personal images on Stable Diffusion 1.5.
### Diffusers
```py
from diffusers import StableDiffusionPipeline
import torch
model_id = "mallapraveen/ai_art_sd_v1.5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "Portrait of praveen as Captain America, muscular, fantasy, intricate, elegant, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration, art by Leonardo da Vinci and John Singer Sargent and Michelangelo"
image = pipe(prompt).images[0]
image.save("capamerica.png")
```
Sample pictures of this concept:

















 |
Ayjayo/DialoGPT-medium-AyjayoAI | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null |
---
language:
- nl
- en
- multilingual
license: apache-2.0
tags:
- dutch
- english
- t5
- t5x
- ul2
- seq2seq
- translation
datasets:
- yhavinga/mc4_nl_cleaned
- yhavinga/nedd_wiki_news
pipeline_tag: translation
widget:
- text: >-
Redistricting and West Virginiaβs shrinking population forced the stateβs
Republican Legislature to pit Mr. McKinley, a six-term Republican with a
pragmatic bent, against Mr. Mooney, who has served four terms marked more
by conservative rhetoric than legislative achievements.
- text: >-
It is a painful and tragic spectacle that rises before me: I have drawn
back the curtain from the rottenness of man. This word, in my mouth, is at
least free from one suspicion: that it involves a moral accusation against
humanity.
- text: >-
Young Wehling was hunched in his chair, his head in his hand. He was so
rumpled, so still and colorless as to be virtually invisible. His
camouflage was perfect, since the waiting room had a disorderly and
demoralized air, too. Chairs and ashtrays had been moved away from the
walls. The floor was paved with spattered dropcloths.
---
# ul2-large-en-nl for English to Dutch translation
A T5 model pretrained on Dutch with a UL2 (Mixture-of-Denoisers) objective and fine-tuned for English to Dutch translation.
The T5 model was introduced in
[this paper](https://arxiv.org/abs/1910.10683)
and first released at [this page](https://github.com/google-research/text-to-text-transfer-transformer).
The UL2 objective was introduced in
[this paper](https://arxiv.org/abs/2205.05131)
and first released at [this page](https://github.com/google-research/google-research/tree/master/ul2).
## Model description
T5 is an encoder-decoder model and treats all NLP problems in a text-to-text format.
`ul2-large-en-nl` T5 is a transformers model fine-tuned on parallel sentence and paragraph pairs
sampled from books.
This model used the [T5 v1.1](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#t511) improvements compared to the original T5 model during the pretraining:
- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202)
- Dropout was turned off during pre-training. Dropout should be re-enabled during fine-tuning
- Pre-trained on self-supervised objective only without mixing in the downstream tasks
- No parameter sharing between embedding and classifier layer
### UL2 pretraining objective
This model was pretrained with the UL2's Mixture-of-Denoisers (MoD) objective, that combines diverse pre-training
paradigms together. UL2 frames different objective functions for training language models as denoising tasks, where
the model has to recover missing sub-sequences of a given input. During pre-training it uses a novel mixture-of-denoisers
that samples from a varied set of such objectives, each with different configurations. UL2 is trained using a mixture of
three denoising tasks:
1. R-denoising (or regular span corruption), which emulates the standard T5 span corruption objective;
2. X-denoising (or extreme span corruption); and
3. S-denoising (or sequential PrefixLM).
During pre-training, we sample from the available denoising tasks based on user-specified ratios.
UL2 introduces a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training
denoising task. During the pre-training, a paradigm token is inserted to the input
(`[NLU]` for R-denoising, `[NLG]` for X-denoising, or `[S2S]` for S-denoising) indicating the denoising task at hand.
Then, during fine-tuning the same input token should be inserted to get the best performance for different downstream
fine-tuning tasks.
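As a purely illustrative sketch of that convention (not a statement about how this particular checkpoint must be called; see the usage example below), prepending a paradigm token to the input is just string concatenation before tokenization:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("yhavinga/ul2-large-en-nl", use_fast=False)

# Illustrative only: prefix the sequential-denoising (S2S) paradigm token.
source_text = "Young Wehling was hunched in his chair, his head in his hand."
inputs = tokenizer("[S2S] " + source_text, return_tensors="pt")
```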
## Intended uses & limitations
This model was fine-tuned on parallel sentence and paragraph pairs and can be used
for machine translation.
### How to use
Here is how to use this model in PyTorch:
```python
model_name = "yhavinga/ul2-large-en-nl"
from transformers import AutoTokenizer
from transformers import AutoModelForSeq2SeqLM
from transformers import pipeline
import torch
device_num = 0 if torch.cuda.is_available() else -1
device = "cpu" if device_num < 0 else f"cuda:{device_num}"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, use_auth_token=True).to(
device
)
params = {"max_length": 370, "num_beams": 4, "early_stopping": True}
translator = pipeline("translation", tokenizer=tokenizer, model=model, device=device_num)
print(translator("Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible.",
**params)[0]['translation_text'])
```
### Limitations and bias
The training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral.
Therefore, the model can have biased predictions. This bias will also affect all fine-tuned versions of this model.
## Training data
The `ul2-large-en-nl` T5 model was pre-trained simultaneously on a combination of several datasets,
including the `full` config of the "mc4_nl_cleaned" dataset, which is a cleaned version of Common Crawl's web
crawl corpus, Dutch books, the Dutch subset of Wikipedia (2022-03-20), and a subset of "mc4_nl_cleaned"
containing only texts from Dutch newspapers.
After pre-training, the model was
fine-tuned on a translation dataset containing 13 million sentence and paragraph pairs
sampled from books.
## Training procedure
### Preprocessing
The ul2-large-en-nl T5 model uses a SentencePiece unigram tokenizer with a vocabulary of 32,000 tokens.
The tokenizer includes the special tokens `<pad>`, `</s>`, `<unk>`, known from the original T5 paper,
`[NLU]`, `[NLG]` and `[S2S]` for the MoD pre-training, and `<n>` for newline.
During pre-training with the UL2 objective, input and output sequences consist of 512 consecutive tokens.
The tokenizer does not lowercase texts and is therefore case-sensitive; it distinguishes
between `dutch` and `Dutch`.
Additionally, 100+28 extra tokens were added for pre-training tasks, resulting in a total of 32,128 tokens.
### Fine-tuning
This model was fine-tuned on a dataset containing 13M sentence and paragraph translation pairs sampled from books.
* Pre-trained model used as starting point: yhavinga/ul2-large-dutch
* Amount of fine-tune training steps: 77600
* Batch size: 512 (gradient accumulation steps: 16)
* Sequence length: 370 tokens
* Model dtype: bfloat16
* z_loss: 0.0001
* Optimizer: adamw_hf beta1: 0.9 beta2: 0.9969 eps: 1e-08
* Dropout rate: 0.01
* Learning rate: 0.0009 with linear decay to 0 and warmup for 500 steps
* Label smoothing factor: 0.11
* Bleu score: 45.1
### Model list
Models in this series:
| | ul2-base-en-nl | ul2-base-nl36-en-nl | ul2-large-en-nl |
|:---------------------|:-----------------|:----------------------|:------------------|
| model_type | t5 | t5 | t5 |
| _pipeline_tag | translation | translation | translation |
| d_model | 768 | 768 | 1024 |
| d_ff | 2048 | 3072 | 2816 |
| num_heads | 12 | 12 | 16 |
| d_kv | 64 | 64 | 64 |
| num_layers | 12 | 36 | 24 |
| num_decoder_layers | 12 | 36 | 24 |
| feed_forward_proj | gated-silu | gated-silu | gated-silu |
| dense_act_fn | silu | silu | silu |
| vocab_size | 32128 | 32128 | 32128 |
| tie_word_embeddings | 0 | 0 | 0 |
| torch_dtype | float32 | float32 | float32 |
| _gin_batch_size | 128 | 64 | 64 |
| _gin_z_loss | 0.0001 | 0.0001 | 0.0001 |
| _gin_t5_config_dtype | 'bfloat16' | 'bfloat16' | 'bfloat16' |
## Evaluation results
See the evaluation section in the interactive [Pre-training Dutch T5 Models](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models) blog.
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
Thanks to the [Finnish-NLP](https://huggingface.co/Finnish-NLP) authors for releasing their code for the UL2 objective and associated task definitions.
Thanks to [Stephenn Fernandes](https://huggingface.co/StephennFernandes) for helping me get started with the t5x framework.
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
Ayoola/pytorch_model | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce_cartpole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Ayran/DialoGPT-medium-harry-potter-1-through-4-plus-6 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
license: mit
datasets:
- wikipedia
- IlyaGusev/gazeta
language:
- ru
library_name: transformers
---
# ruGPT-Neo 1.3B [IN TRAINING, 100k/2M, NOT FINAL CHECKPOINT]
## Model Description
ruGPT-Neo 1.3B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. ruGPT-Neo refers to the class of models, while 1.3B represents the number of parameters of this particular pre-trained model.
## Training procedure
This model was trained on Wikipedia and the Gazeta summarization dataset for 38k steps on a single V100 GPU, and training is still in progress. It was trained as a masked autoregressive language model, using cross-entropy loss.
### How to use
You can use this model directly with a pipeline for text generation. This example generates a different sequence each time it's run:
```py
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='AlexWortega/rugpt-neo-1.3b')
>>> generator("ΠΠ°ΠΊ ΠΊΠ°ΠΊΠ°ΡΡ? ΠΡΠ²Π΅Ρ:", do_sample=True, min_length=50)
[{'generated_text': 'ΠΠ°ΠΊ ΠΊΠ°ΠΊΠ°ΡΡ? ΠΡΠ²Π΅Ρ: CΠΏΡΡΡΠΈΡΠ΅ ΡΡΠ°Π½Ρ ΠΈ ΠΏΠΎΠΊΠ°ΠΊΠ°ΠΉΡΠ΅, Π·Π°ΡΠ΅ΠΌ Π²ΠΎΡΠΏΠΎΠ»ΡΠ·ΡΠΉΡΠ΅ΡΡ Π±ΡΠΌΠ°Π³ΠΎΠΉ'}]
``` |
Ayran/DialoGPT-small-harry-potter-1-through-3 | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 252.22 +/- 43.79
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
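A more concrete sketch of the snippet above is given below; the repo id and zip filename are hypothetical placeholders, since they are not stated in this card.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Hypothetical repo id and filename; replace them with the actual ones for this model.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```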
|
Ayta/Haha | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Gravitar-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Gravitar-v5
type: Gravitar-v5
metrics:
- type: mean_reward
value: 445.00 +/- 224.11
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Gravitar-v5**
This is a trained model of a PPO agent playing Gravitar-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/ppo_atari_envpool_async_jax_scan_impalanet_machado.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[ppo_atari_envpool_async_jax_scan_impalanet_machado]"
python -m cleanrl_utils.enjoy --exp-name ppo_atari_envpool_async_jax_scan_impalanet_machado --env-id Gravitar-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/ppo_atari_envpool_async_jax_scan_impalanet_machado.py
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Gravitar-v5-ppo_atari_envpool_async_jax_scan_impalanet_machado-seed1/raw/main/poetry.lock
poetry install --all-extras
python ppo_atari_envpool_async_jax_scan_impalanet_machado.py --track --wandb-project-name envpool-atari --save-model --upload-model --hf-entity cleanrl --env-id Gravitar-v5 --seed 1
```
# Hyperparameters
```python
{'anneal_lr': True,
'async_batch_size': 16,
'batch_size': 2048,
'capture_video': False,
'clip_coef': 0.1,
'cuda': True,
'ent_coef': 0.01,
'env_id': 'Gravitar-v5',
'exp_name': 'ppo_atari_envpool_async_jax_scan_impalanet_machado',
'gae': True,
'gae_lambda': 0.95,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.00025,
'max_grad_norm': 0.5,
'minibatch_size': 1024,
'norm_adv': True,
'num_envs': 64,
'num_minibatches': 2,
'num_steps': 32,
'num_updates': 24414,
'save_model': True,
'seed': 1,
'target_kl': None,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 2,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'envpool-atari'}
```
|
Ayumi/Jovana | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
license: cc-by-sa-4.0
datasets:
- cjvt/sentinews
language:
- sl
library_name: transformers
pipeline_tag: text-classification
model-index:
- name: sloberta-sentinews-sentence
results:
- task:
type: text-classification
name: Sentiment classification
dataset:
type: cjvt/sentinews
name: SentiNews
config: sentence_level
metrics:
- type: f1
value: 0.6851357247321056
name: Test macro F1
- type: accuracy
value: 0.7158081705150977
name: Test accuracy
- type: f1
value: 0.6934678744913757
name: Validation macro F1
- type: accuracy
value: 0.7207815275310835
name: Validation accuracy
---
# sloberta-sentinews-sentence
Slovenian 3-class sentiment classifier - [SloBERTa](https://huggingface.co/EMBEDDIA/sloberta) fine-tuned on the sentence-level config of the
SentiNews dataset.
The model is intended as:
(1) an out-of-the-box sentence-level sentiment classifier, or
(2) a sentence-level sentiment classification baseline.
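For use case (1), a minimal inference sketch is shown below. The full repo id is an assumption based on the model and dataset names; adjust the namespace if it differs.
```python
from transformers import pipeline

# Assumed repo id; adjust the namespace if needed.
classifier = pipeline("text-classification", model="cjvt/sloberta-sentinews-sentence")
print(classifier("Danes je res lep dan."))  # -> [{'label': ..., 'score': ...}]
```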
## Fine-tuning details
The model was fine-tuned on a random 90%/5%/5% train-val-test split of the `sentence_level` configuration of the [cjvt/sentinews](https://huggingface.co/datasets/cjvt/sentinews) dataset
using the following hyperparameters:
```
max_length = 79 # 99th percentile of encoded training sequences, sequences are padded/truncated to this length
batch_size = 128
optimizer = "adamw_torch"
learning_rate = 2e-5
num_epochs = 10
validation_metric = "macro_f1"
```
Feel free to inspect `training_args.bin` for more details.
If you wish to directly compare your model to this one, you should use the same split as this model. To do so, use the following code:
```python
import json
import datasets
# You can find split_indices.json in the 'Files and versions' tab
with open("split_indices.json", "r") as f_split:
split = json.load(f_split)
data = datasets.load_dataset("cjvt/sentinews", "sentence_level", split="train")
train_data = data.select(split["train_indices"])
dev_data = data.select(split["dev_indices"])
test_data = data.select(split["test_indices"])
```
## Evaluation results
Best validation set results:
```
{
"eval_accuracy": 0.7207815275310835,
"eval_f1_macro": 0.6934678744913757,
"eval_f1_negative": 0.7042136003337507,
"eval_f1_neutral": 0.759215853398679,
"eval_f1_positive": 0.6169741697416974,
"eval_loss": 0.6337869167327881,
"eval_precision_negative": 0.6685148514851486,
"eval_precision_neutral": 0.7752393385552655,
"eval_precision_positive": 0.6314199395770392,
"eval_recall_negative": 0.74394006170119,
"eval_recall_neutral": 0.7438413361169103,
"eval_recall_positive": 0.6031746031746031
}
```
Test set results:
```
{
"test_loss": 0.6395984888076782,
"test_accuracy": 0.7158081705150977,
"test_precision_negative": 0.6570397111913358,
"test_recall_negative": 0.7292965271593945,
"test_f1_negative": 0.6912850812407682,
"test_precision_neutral": 0.7748017998714377,
"test_recall_neutral": 0.7418957734919983,
"test_f1_neutral": 0.7579918247563149,
"test_precision_positive": 0.6155642023346304,
"test_recall_positive": 0.5969811320754717,
"test_f1_positive": 0.6061302681992337,
"test_f1_macro": 0.6851357247321056,
}
``` |
AyushPJ/ai-club-inductions-21-nlp-XLNet | [
"pytorch",
"xlnet",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"XLNetForQuestionAnsweringSimple"
],
"model_type": "xlnet",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 250
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 9 | null | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- cord-layoutlmv3
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-finetuned-cord_100
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cord-layoutlmv3
type: cord-layoutlmv3
config: cord
split: train
args: cord
metrics:
- name: Precision
type: precision
value: 0.917960088691796
- name: Recall
type: recall
value: 0.9296407185628742
- name: F1
type: f1
value: 0.9237634808478989
- name: Accuracy
type: accuracy
value: 0.9303904923599321
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv3-finetuned-cord_100
This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the cord-layoutlmv3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2854
- Precision: 0.9180
- Recall: 0.9296
- F1: 0.9238
- Accuracy: 0.9304
## Model description
More information needed
## Intended uses & limitations
More information needed
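As a rough illustration of the intended use (token classification on receipt images), the sketch below should work. The repo id is a placeholder, and `apply_ocr=True` requires Tesseract and `pytesseract` to be installed.
```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Placeholder repo id; replace it with this checkpoint's actual id.
repo_id = "<user>/layoutlmv3-finetuned-cord_100"
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained(repo_id)

image = Image.open("receipt.png").convert("RGB")
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
predicted_ids = outputs.logits.argmax(-1).squeeze().tolist()
print([model.config.id2label[i] for i in predicted_ids])
```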
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 0.62 | 250 | 1.2967 | 0.6175 | 0.7021 | 0.6571 | 0.7296 |
| 1.6872 | 1.25 | 500 | 0.7576 | 0.8140 | 0.8383 | 0.8260 | 0.8383 |
| 1.6872 | 1.88 | 750 | 0.5695 | 0.8301 | 0.8518 | 0.8408 | 0.8544 |
| 0.6109 | 2.5 | 1000 | 0.4778 | 0.8564 | 0.875 | 0.8656 | 0.8812 |
| 0.6109 | 3.12 | 1250 | 0.3825 | 0.8694 | 0.8922 | 0.8807 | 0.8986 |
| 0.3905 | 3.75 | 1500 | 0.3546 | 0.8831 | 0.9049 | 0.8939 | 0.9143 |
| 0.3905 | 4.38 | 1750 | 0.3153 | 0.8998 | 0.9207 | 0.9101 | 0.9223 |
| 0.275 | 5.0 | 2000 | 0.3065 | 0.8926 | 0.9147 | 0.9035 | 0.9202 |
| 0.275 | 5.62 | 2250 | 0.2872 | 0.9131 | 0.9281 | 0.9206 | 0.9291 |
| 0.2275 | 6.25 | 2500 | 0.2854 | 0.9180 | 0.9296 | 0.9238 | 0.9304 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cpu
- Datasets 2.8.0
- Tokenizers 0.13.2
|
AyushPJ/ai-club-inductions-21-nlp-roBERTa | [
"pytorch",
"roberta",
"question-answering",
"transformers",
"generated_from_trainer",
"autotrain_compatible"
]
| question-answering | {
"architectures": [
"RobertaForQuestionAnswering"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="sd99/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
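The `load_from_hub` helper above is not part of a published package; one possible implementation, along the lines of the Deep RL Course notebooks, is sketched below.
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str):
    """Download a pickled Q-table dictionary from the Hub and load it."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```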
|
Azaghast/DistilBERT-SCP-Class-Classification | [
"pytorch",
"distilbert",
"text-classification",
"transformers"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 42 | null | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-dpv-finetuned-WITH-AUGMENTATION-LOWER-LR-WEIGHT-DECAY
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-dpv-finetuned-WITH-AUGMENTATION-LOWER-LR-WEIGHT-DECAY
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8435
- Wer: 35.0215
## Model description
More information needed
## Intended uses & limitations
More information needed
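A minimal transcription sketch with the `transformers` ASR pipeline is shown below. The repo id is a placeholder, since it is not stated in this card, and decoding audio files requires ffmpeg.
```python
from transformers import pipeline

# Placeholder repo id; replace it with this checkpoint's actual id.
asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-dpv-finetuned-WITH-AUGMENTATION-LOWER-LR-WEIGHT-DECAY",
    chunk_length_s=30,
)
print(asr("sample.wav")["text"])
```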
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5839 | 0.62 | 1000 | 0.5726 | 37.4633 |
| 0.2068 | 1.25 | 2000 | 0.5799 | 36.4911 |
| 0.1451 | 1.87 | 3000 | 0.6284 | 36.0389 |
| 0.0606 | 2.49 | 4000 | 0.7208 | 36.4006 |
| 0.0081 | 3.12 | 5000 | 0.8024 | 34.9537 |
| 0.0131 | 3.74 | 6000 | 0.8435 | 35.0215 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.13.2
|
Azaghast/GPT2-SCP-ContainmentProcedures | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="sd99/q-taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Azaghast/GPT2-SCP-Miscellaneous | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | null | Access to model Cloudfubuki/instance-name is restricted and you are not in the authorized list. Visit https://huggingface.co/Cloudfubuki/instance-name to ask for access. |
Azizun/Geotrend-10-epochs | [
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"BertForTokenClassification"
],
"model_type": "bert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | null | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 254.68 +/- 22.23
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Azuris/DialoGPT-medium-senorita | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 589.00 +/- 92.08
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Artachtron -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Artachtron -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Artachtron
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
|
Azuris/DialoGPT-small-envy | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 14 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BAHIJA/distilbert-base-uncased-finetuned-cola | [
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"dataset:glue",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
]
| text-classification | {
"architectures": [
"DistilBertForSequenceClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 36 | null | ---
license: mit
datasets:
- SetFit/enron_spam
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-classification
tags:
- email
- multilingual
---
# XLM-RoBERTa for multilingual spam detection
I trained this model to detect spam in German, as there is no labeled German spam mail dataset, and I could not find an already pretrained multilingual model for the Enron spam dataset.
## Intended use
Identifying spam mail in any XLM-RoBERTa-supported language.
Note that there was no thorough testing of its intended use, only validation on the Enron mail dataset.
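A minimal usage sketch is shown below; the repo id is a placeholder, since it is not stated in this card, and the label names may differ from the example output.
```python
from transformers import pipeline

# Placeholder repo id; replace it with this checkpoint's actual id.
spam_filter = pipeline("text-classification", model="<user>/xlm-roberta-base-spam-detection")
mail = "Sehr geehrter Kunde, Sie haben 1.000.000 EUR gewonnen! Klicken Sie hier."
print(spam_filter(mail))  # e.g. [{'label': 'spam', 'score': 0.99}]
```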
## Evaluation
Eval on test set of enron spam:
- loss: 0.0315
- accuracy: 0.996 |
BSC-LT/roberta-base-bne-capitel-ner | [
"pytorch",
"roberta",
"token-classification",
"es",
"dataset:bne",
"dataset:capitel",
"arxiv:1907.11692",
"arxiv:2107.07253",
"transformers",
"national library of spain",
"spanish",
"bne",
"capitel",
"ner",
"license:apache-2.0",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"RobertaForTokenClassification"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | 2023-01-07T13:15:57Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 271.20 +/- 19.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
Bala/model_name | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: distilbert_for_text_classification
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: train
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93232
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_for_text_classification
This model is a fine-tuned version of [RavenK/distilbert](https://huggingface.co/RavenK/distilbert) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2352
- Accuracy: 0.9323
## Model description
More information needed
## Intended uses & limitations
More information needed
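For sentiment classification of movie reviews, a sketch along the following lines should work. The repo id is assumed from the model and base-model names; adjust it if the actual id differs.
```python
from transformers import pipeline

# Assumed repo id; adjust if the actual checkpoint id differs.
classifier = pipeline("text-classification", model="RavenK/distilbert_for_text_classification")
print(classifier("A surprisingly moving film with terrific performances."))
```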
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2327 | 1.0 | 1563 | 0.1868 | 0.9279 |
| 0.1472 | 2.0 | 3126 | 0.2352 | 0.9323 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Batsy24/DialoGPT-medium-Twilight_BellaBot | [
"pytorch",
"gpt2",
"text-generation",
"transformers",
"conversational"
]
| conversational | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="rmathur/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Baybars/wav2vec2-xls-r-300m-cv8-turkish | [
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"tr",
"dataset:common_voice",
"transformers",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"license:apache-2.0"
]
| automatic-speech-recognition | {
"architectures": [
"Wav2Vec2ForCTC"
],
"model_type": "wav2vec2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 5 | 2023-01-07T15:23:52Z | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- animal
datasets: Ashish08/jacob-soni
widget:
- text: a photo of jacob dog sitting on a rock
---
# DreamBooth model for the jacob concept trained by Ashish08 on the Ashish08/jacob-soni dataset.
This is a Stable Diffusion model fine-tuned on the jacob concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of jacob dog**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `dog` images for the animal theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Ashish08/jacob-dog')
image = pipeline("a photo of jacob dog").images[0]  # pass a prompt; the instance prompt from above is a good default
image
```
|
Bee-Garbs/DialoGPT-cartman-small | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
This is a wav2vec2 model fine-tuned on a Norwegian dataset from the radio broadcasting corpus.
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
The model can be used for automatic speech recognition in Norwegian and other tasks involving speech technology.
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** The SCRIBE project https://scribe-project.github.io/
- **Shared by [optional]:** The SCRIBE project https://scribe-project.github.io/
- **Model type:** wav2vec2
- **Language(s) (NLP):** Norwegian
- **License:** Apache 2.0
- **Finetuned from model [optional]:** KBLab/wav2vec2-large-voxrex
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/scribe-project/nodalida_2023_combined_training
- **Paper [optional]:**
```
@InProceedings{SolbergEtAlNoDaLiDa2023,
author = {Per Erik Solberg and Pablo Ortiz and Phoebe Parsons and Torbjørn Svendsen and Giampiero Salvi},
title = {Improving Generalization of Norwegian ASR with Limited Linguistic Resources},
booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics},
year = {2023},
month = {May},
address = {Tórshavn, Faroe Islands},
}
```
## Uses
The model can be used for automatic speech recognition in Norwegian and other tasks involving speech technology.
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
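In the meantime, a generic wav2vec2 CTC inference sketch should apply. The repo id is a placeholder for this checkpoint, and 16 kHz mono audio is assumed.
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder repo id; replace it with this checkpoint's actual id.
repo_id = "<org>/wav2vec2-large-voxrex-norwegian"
processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)

waveform, sample_rate = torchaudio.load("sample.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```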
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Beelow/wav2vec2-ukrainian-model-large | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | Access to model Jayahari/itz-me-bruh is restricted and you are not in the authorized list. Visit https://huggingface.co/Jayahari/itz-me-bruh to ask for access. |
Begimay/Task | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- conversational
---
# Mental Health Support Chatbot
BenDavis71/GPT-2-Finetuning-AIRaid | [
"pytorch",
"jax",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BenGeorge/MyModel | []
| null | {
"architectures": null,
"model_type": null,
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 0 | null | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 74.00 +/- 50.23
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
BenWitter/DialoGPT-small-Tyrion | [
"pytorch",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": 1000
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 11 | null | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- food
datasets: Ashish08/vada-sambhar
widget:
- text: a photo of vada sambhar south indian dish on a red table
---
# DreamBooth model for the vada-sambhar concept trained by Ashish08 on the Ashish08/vada-sambhar dataset.
This is a Stable Diffusion model fine-tuned on the vada-sambhar concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of vada-sambhar south-indian-dish**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `south-indian-dish` images for the food theme.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('Ashish08/vada-sambhar-south-indian-dish')
image = pipeline("a photo of vada-sambhar south-indian-dish").images[0]  # pass a prompt; the instance prompt from above is a good default
image
```
|
Benicio/t5-small-finetuned-en-to-ru | [
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 50 | 2023-01-07T16:00:09Z | ---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- food
widget:
- text: focacciabarese pizza with a nice sea view.
---
# DreamBooth model for the focacciabarese concept trained by dacquaviva on the dacquaviva/focacciabarese dataset.
This is a Stable Diffusion model fine-tuned on the focacciabarese concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of focacciabarese pizza**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on my `pizza` images for the food theme. In Bari (Puglia, southern Italy), focaccia is a kind of bread seasoned with tomatoes, olives, virgin olive oil, and oregano. It is made with humble, very simple, local ingredients. :)
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('dacquaviva/focacciabarese-pizza')
image = pipeline("a photo of focacciabarese pizza").images[0]  # pass a prompt; the instance prompt from above is a good default
image
```
|
BertChristiaens/EmojiPredictor | [
"pytorch",
"distilbert",
"token-classification",
"transformers",
"autotrain_compatible"
]
| token-classification | {
"architectures": [
"DistilBertForTokenClassification"
],
"model_type": "distilbert",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 6 | 2023-01-07T16:08:10Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-base-academic3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-academic3
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4206
## Model description
More information needed
## Intended uses & limitations
More information needed
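Since this is a masked-language-model fine-tune of RoBERTa, a fill-mask sketch like the one below should work. The repo id is a placeholder, as it is not stated in this card.
```python
from transformers import pipeline

# Placeholder repo id; replace it with this checkpoint's actual id.
fill_mask = pipeline("fill-mask", model="<user>/roberta-base-academic3")
print(fill_mask("The results <mask> that the proposed method outperforms the baseline."))
```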
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 64
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.99) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6943 | 0.99 | 82 | 1.5540 |
| 1.6494 | 1.99 | 164 | 1.5268 |
| 1.63 | 2.99 | 246 | 1.5209 |
| 1.6152 | 3.99 | 328 | 1.5049 |
| 1.5985 | 4.99 | 410 | 1.4891 |
| 1.5826 | 5.99 | 492 | 1.4876 |
| 1.5643 | 6.99 | 574 | 1.4769 |
| 1.5506 | 7.99 | 656 | 1.4638 |
| 1.5383 | 8.99 | 738 | 1.4548 |
| 1.5309 | 9.99 | 820 | 1.4511 |
| 1.5225 | 10.99 | 902 | 1.4492 |
| 1.5124 | 11.99 | 984 | 1.4419 |
| 1.507 | 12.99 | 1066 | 1.4323 |
| 1.4985 | 13.99 | 1148 | 1.4294 |
| 1.4921 | 14.99 | 1230 | 1.4296 |
| 1.4859 | 15.99 | 1312 | 1.4256 |
| 1.4827 | 16.99 | 1394 | 1.4194 |
| 1.4756 | 17.99 | 1476 | 1.4184 |
| 1.474 | 18.99 | 1558 | 1.4156 |
| 1.4737 | 19.99 | 1640 | 1.4165 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
Bhuvana/t5-base-spellchecker | [
"pytorch",
"t5",
"text2text-generation",
"transformers",
"autotrain_compatible"
]
| text2text-generation | {
"architectures": [
"T5ForConditionalGeneration"
],
"model_type": "t5",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": true,
"length_penalty": 2,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
}
} | 93 | 2023-01-07T16:29:24Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- super_glue
model-index:
- name: test_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_model
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the super_glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
BigSalmon/BertaMyWorda | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 8 | null |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Write your model_id: ospeek/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play
|
BigSalmon/FormalBerta | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 10 | null | ---
tags:
- conversational
---
# Mental Health Support Chatbot |
BigSalmon/FormalBerta3 | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 4 | null | ---
license: mit
pipeline_tag: text-generation
widget:
- text: ''
tags:
- music
datasets:
- sander-wood/massive_abcnotation_dataset
---
# TunesFormer
## Model description
TunesFormer is a Transformer-based melody generation system trained on 285,449 melodies with musical forms (represented by control codes), where all scores are represented in ABC notation. It was introduced in the paper [TunesFormer: Forming Tunes with Control Codes](https://arxiv.org/abs/2301.02884) by Wu et al. The code is released in [this repository](https://github.com/sander-wood/tunesformer), and the dataset is released on [Hugging Face](https://huggingface.co/datasets/sander-wood/massive_abcnotation_dataset).
By utilizing specific symbols commonly found in ABC notation to indicate section boundaries, TunesFormer can understand and generate melodies with given musical forms based on control codes. The checkpoint released here is TunesFormer-GP (Global Placement), where all the control codes are placed at the beginning of the ABC notation.
You can also try this music generation model online in the [TunesFormer demo](https://huggingface.co/spaces/sander-wood/tunesformer), which lets you freely explore TunesFormer and returns generated sheet music.
## Intended uses & limitations
You can use this model for melody generation conditioned on musical forms. All scores generated by this model can be written on one stave (for vocal solo or instrumental solo) in standard classical notation, and are in a variety of styles, e.g., blues, classical, folk, jazz, pop, and world music. The generated tunes are in ABC notation, and can be converted to sheet music or audio using [this website](https://ldzhangyx.github.io/abc/), or [this software](https://sourceforge.net/projects/easyabc/).
TunesFormer supports the generation of up to 8 sections and up to 32 bars per section. Although TunesFormer mostly generates music that follows the control codes, the random nature of sampling means the generated musical structure occasionally deviates from the one specified by the control codes when more than 6 sections are generated, or when more than 17 bars are generated for a single section. For more information, please check [our paper](https://arxiv.org/abs/2301.02884).
### How to use
1. Install dependencies for the code released in [this repository](https://github.com/sander-wood/tunesformer):
```
torch 1.9.1+cu111
samplings 0.1.7
transformers 4.18.0
```
2. Set the `control_codes` and `prompt` in the script `run_inference.py` for conditional music generation.
```
control_codes = "[SECS_3][BARS_4][SIM_6][BARS_4][SIM_10][SIM_6][BARS_4]"
prompt = """L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"""
```
For TunesFormer, the input is the concatenation of `control_codes` and `prompt`. Both `control_codes` and `prompt` are optional; however, if you set the prompt, you must also set the control codes.
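A minimal sketch of how that input is assembled is shown below. Only the concatenation itself is taken from this card; `generate_tunes` is a hypothetical placeholder for the sampling loop implemented in `run_inference.py`.
```python
# Illustrative sketch: assembling the TunesFormer input from control codes
# and an ABC prompt. generate_tunes is a hypothetical stand-in for the
# sampling loop in run_inference.py.
control_codes = "[SECS_3][BARS_4][SIM_6][BARS_4][SIM_10][SIM_6][BARS_4]"
prompt = (
'L:1/4\n'
'M:4/4\n'
'K:C\n'
'"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||'
)
# TunesFormer consumes the concatenation; either part may be left empty,
# but a non-empty prompt requires non-empty control codes.
model_input = control_codes + prompt
# tunes = generate_tunes(model_input, num_tunes=3, max_length=1024,
#                        top_p=0.9, temperature=1.0, seed=1)
```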
3. Run the script `run_inference.py`. The first time you run the script, the required files are downloaded and cached for future reuse.
```
python run_inference.py -num_tunes 3 -max_length 1024 -top_p 0.9 -temperature 1.0 -seed 1
```
4. Enjoy tunes in the folder `output_tunes`! If you want to convert these ABC tunes to sheet music or audio, please refer to the `Intended uses & limitations` section above.
```
X:1
L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"C" G G"F" A A |"G" G G"C" E2 |
"G" F F"C" E E |"G" D D"C" C2 ||"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 |]
X:2
L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"C" E E"F" F F |"C" G G"F" A2 |
"G7" F F"C" E E |"G" D D"C" C2 ||"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 |]
X:3
L:1/4
M:4/4
K:C
"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 ||"C" G G"F" A A |"C" G G"F" F2 |
"C" E E"G" D D |"G" D D"C" C2 ||"C" C C G G |"F" A A"C" G2 |"G" F F"C" E E |"G" D D"C" C2 |]
```
### Usage
```
optional arguments:
-h, --help show this help message and exit
-num_tunes NUM_TUNES the number of independently computed returned tunes
-max_length MAX_LENGTH
integer to define the maximum length in tokens of each
tune
-top_p TOP_P float to define the tokens that are within the sample
operation of text generation
-temperature TEMPERATURE
the temperature of the sampling operation
-seed SEED seed for randomstate
```
### BibTeX entry and citation info
```bibtex
@misc{https://doi.org/10.48550/arxiv.2301.02884,
doi = {10.48550/ARXIV.2301.02884},
url = {https://arxiv.org/abs/2301.02884},
author = {Wu, Shangda and Sun, Maosong},
keywords = {Sound (cs.SD), Audio and Speech Processing (eess.AS), FOS: Computer and information sciences, FOS: Computer and information sciences, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering},
title = {TunesFormer: Forming Tunes with Control Codes},
publisher = {arXiv},
year = {2023},
copyright = {Creative Commons Attribution 4.0 International}
}
``` |
BigSalmon/FormalRobertaaa | [
"pytorch",
"roberta",
"fill-mask",
"transformers",
"autotrain_compatible"
]
| fill-mask | {
"architectures": [
"RobertaForMaskedLM"
],
"model_type": "roberta",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": null,
"max_length": null
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 12 | null | ---
tags:
- conversational
---
# Mental Health Support Chatbot |
BigSalmon/GPT2HardArticleEasyArticle | [
"pytorch",
"jax",
"tensorboard",
"gpt2",
"text-generation",
"transformers"
]
| text-generation | {
"architectures": [
"GPT2LMHeadModel"
],
"model_type": "gpt2",
"task_specific_params": {
"conversational": {
"max_length": null
},
"summarization": {
"early_stopping": null,
"length_penalty": null,
"max_length": null,
"min_length": null,
"no_repeat_ngram_size": null,
"num_beams": null,
"prefix": null
},
"text-generation": {
"do_sample": true,
"max_length": 50
},
"translation_en_to_de": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_fr": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
},
"translation_en_to_ro": {
"early_stopping": null,
"max_length": null,
"num_beams": null,
"prefix": null
}
}
} | 7 | null | ---
license: cc-by-nc-4.0
datasets:
- H-Liu1997/BEAT
language:
- en
--- |