pipeline_tag (string, 48 classes) | library_name (string, 205 classes) | text (string, 0–18.3M chars) | metadata (string, 2–1.07B chars) | id (string, 5–122 chars) | last_modified (null) | tags (list, 1–1.84k items) | sha (null) | created_at (string, 25 chars) |
---|---|---|---|---|---|---|---|---|
text-generation
|
transformers
|
# DialoGPT KaeyaBot model
|
{"tags": ["conversational"]}
|
felinecity/ScaraBot
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-de-en-finetuned-de-to-en-second
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-de-en](https://huggingface.co/Helsinki-NLP/opus-mt-de-en) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2282
- Bleu: 37.9762
- Gen Len: 25.3696
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 157 | 1.1837 | 38.8278 | 25.22 |
| No log | 2.0 | 314 | 1.2057 | 38.3047 | 25.2908 |
| No log | 3.0 | 471 | 1.2167 | 38.231 | 25.316 |
| 1.4808 | 4.0 | 628 | 1.2256 | 37.9871 | 25.3556 |
| 1.4808 | 5.0 | 785 | 1.2282 | 37.9762 | 25.3696 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
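A minimal usage sketch (not part of the auto-generated card; it assumes the standard `transformers` translation pipeline):
```python
from transformers import pipeline

# Minimal sketch (assumed usage): translate a German sentence to English with
# the fine-tuned Marian model.
translator = pipeline("translation", model="felipetanios/opus-mt-de-en-finetuned-de-to-en-second")
print(translator("Das Wetter ist heute schön.")[0]["translation_text"])
```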
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "opus-mt-de-en-finetuned-de-to-en-second", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "de-en"}, "metrics": [{"type": "bleu", "value": 37.9762, "name": "Bleu"}]}]}]}
|
felipetanios/opus-mt-de-en-finetuned-de-to-en-second
| null |
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
# mbart for 9-3
|
{}
|
felixai/distilmbart-9-3
| null |
[
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
felixbmuller/bert-base-uncased-finetuned-copa-kb
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
multiple-choice
|
transformers
|
{}
|
felixbmuller/bert-base-uncased-finetuned-copa
| null |
[
"transformers",
"pytorch",
"bert",
"multiple-choice",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
felixbmuller/bert-base-uncased-finetuned-swag
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
felixhusen/poem
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
{}
|
felixhusen/scientific
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
feng/BERT-wwm
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
fenrhjen/camembert_aux_amandes
| null |
[
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
image-classification
|
transformers
|
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### corgi

#### samoyed

#### shiba inu

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
ferdinand/rare-puppers
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# FinBERT fine-tuned with the FinnSentiment dataset
This is a FinBERT model fine-tuned with the [FinnSentiment dataset](https://arxiv.org/pdf/2012.02613.pdf). 90% of sentences were used for training and 10% for evaluation.
## Evaluation results
|Metric|Score|
|--|--|
|Accuracy|0.8639028475711893|
|F1-score|0.8643024701696561|
|Precision|0.8653866541244811|
|Recall|0.8639028475711893|
|Matthews|0.6764924917164834|

## License
FinBERT-FinnSentiment is licensed under the [CC BY 4.0 License](https://creativecommons.org/licenses/by/4.0/deed.en) (same as FinBERT and the FinnSentiment dataset).
|
{"language": "fi", "license": "cc-by-4.0"}
|
fergusq/finbert-finnsentiment
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"fi",
"arxiv:2012.02613",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
<br />
<p align="center">
<a href="https://github.com/FernandoPerezLara/image-preprocessing-layer">
<img src="https://huggingface.co/fernandoperlar/preprocessing_image/resolve/main/duck.png" alt="Logo" width="100" height="146">
</a>
<h3 align="center">Image Preprocessing Model</h3>
<p align="center">
Image preprocessing in a convolutional model
<br />
<a href="https://github.com/FernandoPerezLara/image-preprocessing-layer"><strong>Read more about the model »</strong></a>
<br />
<br />
<a href="https://github.com/FernandoPerezLara/image-preprocessing-layer">View Code</a>
·
<a href="https://github.com/FernandoPerezLara/image-preprocessing-layer/issues">Report Bug</a>
·
<a href="https://github.com/FernandoPerezLara/image-preprocessing-layer/discussions">Start a discussion</a>
</p>
</p>
<br />
The main objective of this project is to apply preprocessing to an image dataset while the model is being trained.
This approach was chosen because we do not want to preprocess the data before training (i.e., create a preprocessed copy of the dataset); we want to apply data augmentation while the model trains.
The use of `Lambda` layers was discarded because they do not allow the use of external libraries that do not work with tensors, and we want to use the functions provided by *OpenCV* and *NumPy*.
## Preprocessing
In the example found in this repository, we wanted to split the images using HSV color masks, divided into:
* **Warm zones**: red and white colors are obtained.
* **Warm zones**: the green color is obtained.
* **Cold zones**: the blue color is obtained.
Within the code you can find the declaration of these filters as:
```python
filters = {
    "original": lambda x: x,
    "red": lambda x: data.getImageTensor(x, (330, 0, 0), (360, 255, 255)) + data.getImageTensor(x, (0, 0, 0), (50, 255, 255)),
    "green": lambda x: data.getImageTensor(x, (60, 0, 0), (130, 255, 255)),
    "blue": lambda x: data.getImageTensor(x, (180, 0, 0), (270, 255, 255)),
}
```
On the other hand, the preprocessing functions are located inside the `scripts/Data.py` file, as follows:
```python
# These are methods of the data-loading class in scripts/Data.py; they assume
# the usual imports: cv2 as cv, numpy as np, and tensorflow as tf.
def detectColor(self, image, lower, upper):
    if tf.is_tensor(image):
        temp_image = image.numpy().copy()  # Used for training
    else:
        temp_image = image.copy()  # Used for displaying the image
    # Build an HSV mask and zero out every pixel outside the requested range
    hsv_image = temp_image.copy()
    hsv_image = cv.cvtColor(hsv_image, cv.COLOR_RGB2HSV)
    mask = cv.inRange(hsv_image, lower, upper)
    result = temp_image.copy()
    result[np.where(mask == 0)] = 0
    return result

def getImageTensor(self, images, lower, upper):
    # Apply the color filter to each image in the batch and stack the results
    results = []
    for img in images:
        results.append(np.expand_dims(self.detectColor(img, lower, upper), axis=0))
    return np.concatenate(results, axis=0)
```
## Model
The model used to solve our problem was a *CNN* with a preprocessing layer:

This model can be found in the `scripts/Model.py` file in the following function:
```python
# create_model is a method of the model-builder class in scripts/Model.py;
# self.filter holds one of the HSV filter functions defined above, and
# layers, models, backend come from tensorflow.keras (tf is TensorFlow).
def create_model(self):
    class FilterLayer(layers.Layer):
        def __init__(self, filter, **kwargs):
            self.filter = filter
            super(FilterLayer, self).__init__(name="filter_layer", **kwargs)

        def call(self, image):
            # Run the NumPy/OpenCV-based filter eagerly and block gradients,
            # so the layer acts as pure preprocessing inside the graph
            shape = image.shape
            [image, ] = tf.py_function(self.filter, [image], [tf.float32])
            image = backend.stop_gradient(image)
            image.set_shape(shape)
            return image

        def get_config(self):
            return super().get_config()

    model = models.Sequential()
    model.add(layers.Input(shape=(215, 538, 3)))
    model.add(FilterLayer(filter=self.filter))
    model.add(layers.Conv2D(32, (3, 3), activation="relu"))
    model.add(layers.MaxPooling2D(pool_size=(2, 2)))
    model.add(layers.Conv2D(32, (3, 3), activation="relu"))
    model.add(layers.GlobalAveragePooling2D())
    model.add(layers.Dropout(rate=0.4))
    model.add(layers.Dense(32, activation="relu"))
    model.add(layers.Dropout(rate=0.4))
    model.add(layers.Dense(2, activation="softmax"))
    return model
```
## Contributors
This work has been possible thanks to:
- [Fernando Pérez Lara](https://www.linkedin.com/in/fernandoperezlara/) ([**@FernandoPerezLara**](https://github.com/FernandoPerezLara)) for having developed the model to make this idea come true.
## License
Copyright (c) 2021 Fernando Pérez Lara.
Licensed and distributed under the [MIT](LICENSE.txt) license.
|
{}
|
fernandoperlar/preprocessing_image
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
festival/no
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
feyqualia/arabic
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
feyzazeynep/wav2vec2-large-xls-r-300m-turkish-colab_1
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fezhou/distilbert-base-uncased-finetuned-ner
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2108
- Accuracy: 0.9265
- F1: 0.9265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8108 | 1.0 | 250 | 0.3101 | 0.903 | 0.8995 |
| 0.2423 | 2.0 | 500 | 0.2108 | 0.9265 | 0.9265 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.10.3
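A minimal usage sketch (not part of the auto-generated card; it assumes the standard `transformers` text-classification pipeline):
```python
from transformers import pipeline

# Minimal sketch (assumed usage): predict the emotion label of an English sentence.
emotion = pipeline("text-classification", model="ffalcao/distilbert-base-uncased-finetuned-emotion")
print(emotion("I can't wait to see you again!"))
```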
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "split"}, "metrics": [{"type": "accuracy", "value": 0.9265, "name": "Accuracy"}, {"type": "f1", "value": 0.9264826040883781, "name": "F1"}]}]}]}
|
ffalcao/distilbert-base-uncased-finetuned-emotion
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
ffarzad/xlnet_sentiment
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
ffrmns/t5-small_XSum-finetuned
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
ffsouza/mbart-large-en-ro-finetuned-en-to-ro
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
ffsouza/t5-small-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "model-index": [{"name": "t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro", "results": []}]}
|
ffsouza/t5-tiny-random-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-96-learning_rate-0.0002-weight_decay-0.01-finetuned-en-to-ro
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6426
- Bleu: 0.0617
- Gen Len: 8.9895
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:------:|:-------:|
| 4.5828 | 1.0 | 76290 | 5.5397 | 0.0089 | 8.981 |
| 4.187 | 2.0 | 152580 | 5.2241 | 0.0172 | 8.989 |
| 3.9612 | 3.0 | 228870 | 5.0092 | 0.034 | 8.988 |
| 3.8151 | 4.0 | 305160 | 4.8688 | 0.0365 | 8.9865 |
| 3.7162 | 5.0 | 381450 | 4.7656 | 0.0469 | 8.9865 |
| 3.6498 | 6.0 | 457740 | 4.6874 | 0.0531 | 8.9885 |
| 3.6147 | 7.0 | 534030 | 4.6612 | 0.0585 | 8.9875 |
| 3.5972 | 8.0 | 610320 | 4.6426 | 0.0617 | 8.9895 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
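A minimal usage sketch (not part of the auto-generated card). Note that this is a tiny, randomly initialized T5 fine-tuned as a baseline (BLEU ≈ 0.06), so the output is only for demonstration:
```python
from transformers import pipeline

# Minimal sketch (assumed usage) of the tiny baseline model; outputs are not
# expected to be useful translations.
en_ro = pipeline(
    "text2text-generation",
    model="ffsouza/t5-tiny-random-length-96-learning_rate-0.0002-weight_decay-0.01-finetuned-en-to-ro",
)
print(en_ro("translate English to Romanian: The house is small."))
```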
|
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "metrics": ["bleu"], "model-index": [{"name": "t5-tiny-random-length-96-learning_rate-0.0002-weight_decay-0.01-finetuned-en-to-ro", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16_en_ro_pre_processed", "type": "wmt16_en_ro_pre_processed", "args": "enro"}, "metrics": [{"type": "bleu", "value": 0.0617, "name": "Bleu"}]}]}]}
|
ffsouza/t5-tiny-random-length-96-learning_rate-0.0002-weight_decay-0.01-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
{}
|
ffsouza/t5-tiny-random-length-96-learning_rate-1e-05-weight_decay-0.01-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
{}
|
ffsouza/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "model-index": [{"name": "t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro", "results": []}]}
|
ffsouza/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.02-finetuned-en-to-ro
This model is a fine-tuned version of [patrickvonplaten/t5-tiny-random](https://huggingface.co/patrickvonplaten/t5-tiny-random) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 6.4854
- Bleu: 0.0002
- Gen Len: 9.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:|
| 6.2568 | 1.0 | 76290 | 6.4854 | 0.0002 | 9.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "metrics": ["bleu"], "model-index": [{"name": "t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.02-finetuned-en-to-ro", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16_en_ro_pre_processed", "type": "wmt16_en_ro_pre_processed", "args": "enro"}, "metrics": [{"type": "bleu", "value": 0.0002, "name": "Bleu"}]}]}]}
|
ffsouza/t5-tiny-random-length-96-learning_rate-2e-05-weight_decay-0.02-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mbart-finetuned-en-to-ro
This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 8.4792
- Bleu: 0.0
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 8.2425 | 1.0 | 76290 | 8.4792 | 0.0 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "metrics": ["bleu"], "model-index": [{"name": "tiny-mbart-finetuned-en-to-ro", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16_en_ro_pre_processed", "type": "wmt16_en_ro_pre_processed", "args": "enro"}, "metrics": [{"type": "bleu", "value": 0.0, "name": "Bleu"}]}]}]}
|
ffsouza/tiny-mbart-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
{}
|
ffsouza/tiny-mbart-length-128-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 8.4656
- Bleu: 0.0
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 8.2268 | 1.0 | 76290 | 8.4656 | 0.0 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "metrics": ["bleu"], "model-index": [{"name": "tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16_en_ro_pre_processed", "type": "wmt16_en_ro_pre_processed", "args": "enro"}, "metrics": [{"type": "bleu", "value": 0.0, "name": "Bleu"}]}]}]}
|
ffsouza/tiny-mbart-length-128-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
ffsouza/tiny-mbart-length-96-finetuned-en-to-ro
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro
This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5983
- Bleu: 0.0
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 8.3753 | 1.0 | 76290 | 8.5983 | 0.0 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "metrics": ["bleu"], "model-index": [{"name": "tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16_en_ro_pre_processed", "type": "wmt16_en_ro_pre_processed", "args": "enro"}, "metrics": [{"type": "bleu", "value": 0.0, "name": "Bleu"}]}]}]}
|
ffsouza/tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.005-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
This model is a fine-tuned version of [sshleifer/tiny-mbart](https://huggingface.co/sshleifer/tiny-mbart) on the wmt16_en_ro_pre_processed dataset.
It achieves the following results on the evaluation set:
- Loss: 8.5137
- Bleu: 0.0
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:----:|:-------:|
| 8.2817 | 1.0 | 76290 | 8.5137 | 0.0 | 20.0 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["wmt16_en_ro_pre_processed"], "metrics": ["bleu"], "model-index": [{"name": "tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16_en_ro_pre_processed", "type": "wmt16_en_ro_pre_processed", "args": "enro"}, "metrics": [{"type": "bleu", "value": 0.0, "name": "Bleu"}]}]}]}
|
ffsouza/tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.01-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16_en_ro_pre_processed",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
{}
|
ffsouza/tiny-mbart-length-96-learning_rate-2e-05-weight_decay-0.02-finetuned-en-to-ro
| null |
[
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text2text-generation
|
transformers
|
T5-small for QA
---
[Google's T5-small](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) pre-trained on the [C4](https://huggingface.co/datasets/c4) dataset, fine-tuned for Question-Answering on [SQuAD v2](https://huggingface.co/datasets/squad_v2) with the following hyperparameters:
```
optimizer=adamw_hf
learning_rate=3e-5
adam_beta1=0.9
adam_beta2=0.999
adam_epsilon=1e-08
num_train_epochs=2
per_device_train_batch_size=12
```
Usage
---
The input [context and question] has to be prepared in a specific way as follows:
```python
from transformers import pipeline
def prep_input(_context, _question):
    return " ".join(["question:", _question.strip(), "context:", _context.strip()])
t5qa = pipeline("text2text-generation", "fgaim/t5-small-squad-v2")
context = """
Oxygen is a chemical element with symbol O and atomic number 8. It is a member of the chalcogen group on the periodic table and is a highly reactive nonmetal and oxidizing agent that readily forms compounds (notably oxides) with most elements. By mass, oxygen is the third-most abundant element in the universe, after hydrogen and helium. At standard temperature and pressure, two atoms of the element bind to form dioxygen, a colorless and odorless diatomic gas with the formula O2.
"""
t5qa(prep_input(context, "How many atoms combine to form dioxygen?"))
# [{'generated_text': 'two'}]
t5qa(prep_input(context, "What element makes up almost half of the earth's crust by mass?"))
# [{'generated_text': 'oxygen'}]
t5qa(prep_input(context, "What are the most abundent elements of the universe by mass?"))
# [{'generated_text': 'hydrogen and helium'}]
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text2text-generation"], "datasets": ["c4", "squad"], "widget": [{"text": "question: What is the atomic number for oxygen? context: Oxygen is a chemical element with symbol O and atomic number 8."}, {"text": "question: What is the chemical symbol of Oxygen? context: Oxygen is a chemical element with symbol O and atomic number 8."}]}
|
fgaim/t5-small-squad-v2
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:c4",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# BERT Base for Tigrinya Language
We pre-train a BERT base uncased model for Tigrinya on a dataset of 40 million tokens, training for 40 epochs.
This repo contains the original pre-trained Flax model that was trained on a TPU v3-8, along with its corresponding PyTorch version.
## Hyperparameters
The hyperparameters corresponding to the model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P | Seq |
|------------|----|----|-----|------|------|------|
| BASE | 12 | 12 | 768 | 3072 | 110M | 512 |
(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
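A minimal usage sketch (assuming the standard `transformers` fill-mask pipeline; the Tigrinya example sentence is illustrative, with one word masked):
```python
from transformers import pipeline

# Minimal sketch (assumed usage): predict the masked token in a Tigrinya sentence.
ti_mask = pipeline("fill-mask", model="fgaim/tibert-base")
print(ti_mask("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም [MASK] ኮይኑ ኣብ ልብና ይነብር"))
```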
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021 at EMNLP 2021}
}
```
|
{"language": "ti", "widget": [{"text": "\u12d3\u1255\u121a \u12f0\u1242\u12a3\u1295\u1235\u1275\u12ee [MASK] \u1265\u130d\u1265\u122a \u1270\u122b\u12a5\u12e9"}]}
|
fgaim/tibert-base
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"ti",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
# Tigrinya POS tagging with TiELECTRA
This model is a fine-tuned version of [TiELECTRA](https://huggingface.co/fgaim/tielectra-small) on the NTC-v1 dataset (Tedla et al. 2016).
## Basic usage
```python
from transformers import pipeline
ti_pos = pipeline("token-classification", model="fgaim/tielectra-small-pos")
ti_pos("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር")
```
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Results
The model achieves the following results on the test set:
- Loss: 0.2236
- Adj Precision: 0.9148
- Adj Recall: 0.9192
- Adj F1: 0.9170
- Adj Number: 1670
- Adv Precision: 0.8228
- Adv Recall: 0.8058
- Adv F1: 0.8142
- Adv Number: 484
- Con Precision: 0.9793
- Con Recall: 0.9743
- Con F1: 0.9768
- Con Number: 972
- Fw Precision: 0.5
- Fw Recall: 0.3214
- Fw F1: 0.3913
- Fw Number: 28
- Int Precision: 0.64
- Int Recall: 0.6154
- Int F1: 0.6275
- Int Number: 26
- N Precision: 0.9525
- N Recall: 0.9587
- N F1: 0.9556
- N Number: 3992
- Num Precision: 0.9825
- Num Recall: 0.9372
- Num F1: 0.9593
- Num Number: 239
- N Prp Precision: 0.9132
- N Prp Recall: 0.9404
- N Prp F1: 0.9266
- N Prp Number: 470
- N V Precision: 0.9667
- N V Recall: 0.9760
- N V F1: 0.9713
- N V Number: 416
- Pre Precision: 0.9645
- Pre Recall: 0.9592
- Pre F1: 0.9619
- Pre Number: 907
- Pro Precision: 0.9395
- Pro Recall: 0.9079
- Pro F1: 0.9234
- Pro Number: 445
- Pun Precision: 1.0
- Pun Recall: 0.9994
- Pun F1: 0.9997
- Pun Number: 1607
- Unc Precision: 0.9286
- Unc Recall: 0.8125
- Unc F1: 0.8667
- Unc Number: 16
- V Precision: 0.7609
- V Recall: 0.8974
- V F1: 0.8235
- V Number: 78
- V Aux Precision: 0.9581
- V Aux Recall: 0.9786
- V Aux F1: 0.9682
- V Aux Number: 654
- V Ger Precision: 0.9183
- V Ger Recall: 0.9415
- V Ger F1: 0.9297
- V Ger Number: 513
- V Imf Precision: 0.9473
- V Imf Recall: 0.9442
- V Imf F1: 0.9458
- V Imf Number: 914
- V Imv Precision: 0.8163
- V Imv Recall: 0.5714
- V Imv F1: 0.6723
- V Imv Number: 70
- V Prf Precision: 0.8927
- V Prf Recall: 0.8776
- V Prf F1: 0.8851
- V Prf Number: 294
- V Rel Precision: 0.9535
- V Rel Recall: 0.9485
- V Rel F1: 0.9510
- V Rel Number: 757
- Overall Precision: 0.9456
- Overall Recall: 0.9456
- Overall F1: 0.9456
- Overall Accuracy: 0.9456
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.1
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author= {Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title= {Monolingual Pre-trained Language Models for Tigrinya},
year= 2021,
publisher= {WiNLP 2021/EMNLP 2021}
}
```
## References
```
Tedla, Y., Yamamoto, K. & Marasinghe, A. 2016.
Tigrinya Part-of-Speech Tagging with Morphological Patterns and the New Nagaoka Tigrinya Corpus.
International Journal Of Computer Applications 146 pp. 33-41 (2016).
```
|
{"language": "ti", "datasets": ["TLMD", "NTC"], "metrics": ["f1", "precision", "recall", "accuracy"], "widget": [{"text": "\u12f5\u121d\u133b\u12ca \u12a3\u1265\u122d\u1203\u121d \u12a3\u1348\u12c8\u122d\u1242 \u1295\u12d8\u120d\u12a3\u1208\u121d \u1205\u12eb\u12cd \u12ae\u12ed\u1291 \u12a3\u1265 \u120d\u1265\u1293 \u12ed\u1290\u1265\u122d"}]}
|
fgaim/tielectra-small-pos
| null |
[
"transformers",
"pytorch",
"electra",
"token-classification",
"ti",
"dataset:TLMD",
"dataset:NTC",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# Sentiment Analysis for Tigrinya with TiELECTRA small
This model is a fine-tuned version of [TiELECTRA small](https://huggingface.co/fgaim/tielectra-small) on a YouTube comments Sentiment Analysis dataset for Tigrinya (Tela et al. 2020).
## Basic usage
```python
from transformers import pipeline
ti_sent = pipeline("sentiment-analysis", model="fgaim/tielectra-small-sentiment")
ti_sent("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር")
```
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Results
The model achieves the following results on the evaluation set:
- F1: 0.8229
- Precision: 0.8056
- Recall: 0.841
- Accuracy: 0.819
- Loss: 0.4299
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.1
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher= {WiNLP 2021/EMNLP 2021}
}
```
## References
```
Tela, A., Woubie, A. and Hautamäki, V. 2020.
Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya.
ArXiv, abs/2006.07698.
```
|
{"language": "ti", "metrics": ["f1", "precision", "recall", "accuracy"], "widget": [{"text": "\u12f5\u121d\u133b\u12ca \u12a3\u1265\u122d\u1203\u121d \u12a3\u1348\u12c8\u122d\u1242 \u1295\u12d8\u120d\u12a3\u1208\u121d \u1205\u12eb\u12cd \u12ae\u12ed\u1291 \u12a3\u1265 \u120d\u1265\u1293 \u12ed\u1290\u1265\u122d"}]}
|
fgaim/tielectra-small-sentiment
| null |
[
"transformers",
"pytorch",
"electra",
"text-classification",
"ti",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# Pre-trained ELECTRA small for Tigrinya Language
We pre-train ELECTRA small on the [TLMD](https://zenodo.org/record/5139094) dataset, with over 40 million tokens.
Both the trained Flax and PyTorch models are included in this repo.
## Hyperparameters
The hyperparameters corresponding to the model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P | Seq |
|------------|----|----|-----|------|------|------|
| SMALL | 12 | 4 | 256 | 1024 | 14M | 512 |
(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021 at EMNLP 2021}
}
```
|
{"language": "ti", "widget": [{"text": "\u12d3\u1255\u121a \u1218\u1295\u12a5\u1230\u12ed \u12a4\u122d\u1275\u122b [MASK] \u1270\u122b\u12a5\u12e9"}]}
|
fgaim/tielectra-small
| null |
[
"transformers",
"pytorch",
"jax",
"electra",
"fill-mask",
"ti",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
# TiRoBERTa: RoBERTa Pretrained for the Tigrinya Language
We pretrain a RoBERTa base model for Tigrinya on a dataset of 40 million tokens, training for 40 epochs.
This repo contains the original pretrained Flax model that was trained on a TPU v3-8, along with its corresponding PyTorch version.
## Hyperparameters
The hyperparameters corresponding to the model sizes mentioned above are as follows:
| Model Size | L | AH | HS | FFN | P | Seq |
|------------|----|----|-----|------|------|------|
| BASE | 12 | 12 | 768 | 3072 | 125M | 512 |
(L = number of layers; AH = number of attention heads; HS = hidden size; FFN = feedforward network dimension; P = number of parameters; Seq = maximum sequence length.)
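A minimal usage sketch (assumed; note that the RoBERTa tokenizer uses `<mask>` rather than BERT's `[MASK]`):
```python
from transformers import pipeline

# Minimal sketch (assumed usage): RoBERTa-style masking uses the "<mask>" token.
ti_mask = pipeline("fill-mask", model="fgaim/tiroberta-base")
print(ti_mask("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም <mask> ኮይኑ ኣብ ልብና ይነብር"))
```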
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021 at EMNLP 2021}
}
```
|
{"language": "ti", "widget": [{"text": "\u12d3\u1255\u121a \u1218\u1295\u12a5\u1230\u12ed \u12a4\u122d\u1275\u122b <mask> \u1270\u122b\u12a5\u12e9"}]}
|
fgaim/tiroberta-base
| null |
[
"transformers",
"pytorch",
"jax",
"safetensors",
"roberta",
"fill-mask",
"ti",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
# Tigrinya POS tagging with TiRoBERTa
This model is a fine-tuned version of [TiRoBERTa](https://huggingface.co/fgaim/tiroberta) on the NTC-v1 dataset (Tedla et al. 2016).
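## Basic usage
A minimal usage sketch, mirroring the `tielectra-small-pos` example above (assumed to apply to this model as well):
```python
from transformers import pipeline

ti_pos = pipeline("token-classification", model="fgaim/tiroberta-pos")
ti_pos("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር")
```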
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Results
The model achieves the following results on the test set:
- Loss: 0.3194
- Adj Precision: 0.9219
- Adj Recall: 0.9335
- Adj F1: 0.9277
- Adj Number: 1670
- Adv Precision: 0.8297
- Adv Recall: 0.8554
- Adv F1: 0.8423
- Adv Number: 484
- Con Precision: 0.9844
- Con Recall: 0.9763
- Con F1: 0.9804
- Con Number: 972
- Fw Precision: 0.7895
- Fw Recall: 0.5357
- Fw F1: 0.6383
- Fw Number: 28
- Int Precision: 0.6552
- Int Recall: 0.7308
- Int F1: 0.6909
- Int Number: 26
- N Precision: 0.9650
- N Recall: 0.9662
- N F1: 0.9656
- N Number: 3992
- Num Precision: 0.9747
- Num Recall: 0.9665
- Num F1: 0.9706
- Num Number: 239
- N Prp Precision: 0.9308
- N Prp Recall: 0.9447
- N Prp F1: 0.9377
- N Prp Number: 470
- N V Precision: 0.9854
- N V Recall: 0.9736
- N V F1: 0.9794
- N V Number: 416
- Pre Precision: 0.9722
- Pre Recall: 0.9625
- Pre F1: 0.9673
- Pre Number: 907
- Pro Precision: 0.9448
- Pro Recall: 0.9236
- Pro F1: 0.9341
- Pro Number: 445
- Pun Precision: 1.0
- Pun Recall: 0.9994
- Pun F1: 0.9997
- Pun Number: 1607
- Unc Precision: 1.0
- Unc Recall: 0.875
- Unc F1: 0.9333
- Unc Number: 16
- V Precision: 0.8780
- V Recall: 0.9231
- V F1: 0.9
- V Number: 78
- V Aux Precision: 0.9685
- V Aux Recall: 0.9878
- V Aux F1: 0.9780
- V Aux Number: 654
- V Ger Precision: 0.9388
- V Ger Recall: 0.9571
- V Ger F1: 0.9479
- V Ger Number: 513
- V Imf Precision: 0.9634
- V Imf Recall: 0.9497
- V Imf F1: 0.9565
- V Imf Number: 914
- V Imv Precision: 0.8793
- V Imv Recall: 0.7286
- V Imv F1: 0.7969
- V Imv Number: 70
- V Prf Precision: 0.8960
- V Prf Recall: 0.9082
- V Prf F1: 0.9020
- V Prf Number: 294
- V Rel Precision: 0.9678
- V Rel Recall: 0.9538
- V Rel F1: 0.9607
- V Rel Number: 757
- Overall Precision: 0.9562
- Overall Recall: 0.9562
- Overall F1: 0.9562
- Overall Accuracy: 0.9562
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021/EMNLP 2021}
}
```
## References
```
Tedla, Y., Yamamoto, K. & Marasinghe, A. 2016.
Tigrinya Part-of-Speech Tagging with Morphological Patterns and the New Nagaoka Tigrinya Corpus.
International Journal Of Computer Applications 146 pp. 33-41 (2016).
```
|
{"language": "ti", "datasets": ["TLMD", "NTC"], "metrics": ["f1", "precision", "recall", "accuracy"], "widget": [{"text": "\u12f5\u121d\u133b\u12ca \u12a3\u1265\u122d\u1203\u121d \u12a3\u1348\u12c8\u122d\u1242 \u1295\u12d8\u120d\u12a3\u1208\u121d \u1205\u12eb\u12cd \u12ae\u12ed\u1291 \u12a3\u1265 \u120d\u1265\u1293 \u12ed\u1290\u1265\u122d"}]}
|
fgaim/tiroberta-pos
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"token-classification",
"ti",
"dataset:TLMD",
"dataset:NTC",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# Sentiment Analysis for Tigrinya with TiRoBERTa
This model is a fine-tuned version of [TiRoBERTa](https://huggingface.co/fgaim/roberta-base-tigrinya) on a YouTube comments Sentiment Analysis dataset for Tigrinya (Tela et al. 2020).
## Basic usage
```python
from transformers import pipeline
ti_sent = pipeline("sentiment-analysis", model="fgaim/tiroberta-sentiment")
ti_sent("ድምጻዊ ኣብርሃም ኣፈወርቂ ንዘልኣለም ህያው ኮይኑ ኣብ ልብና ይነብር")
```
## Training
### Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Results
It achieves the following results on the evaluation set:
- F1: 0.8477
- Precision: 0.7607
- Recall: 0.957
- Accuracy: 0.828
- Loss: 0.6796
### Framework versions
- Transformers 4.10.3
- Pytorch 1.9.0+cu111
- Datasets 1.10.2
- Tokenizers 0.10.1
## Citation
If you use this model in your product or research, please cite as follows:
```
@article{Fitsum2021TiPLMs,
author={Fitsum Gaim and Wonsuk Yang and Jong C. Park},
title={Monolingual Pre-trained Language Models for Tigrinya},
year=2021,
publisher={WiNLP 2021/EMNLP 2021}
}
```
## References
```
Tela, A., Woubie, A. and Hautamäki, V. 2020.
Transferring Monolingual Model to Low-Resource Language: The Case of Tigrinya.
ArXiv, abs/2006.07698.
```
|
{"language": "ti", "datasets": ["TLMD"], "metrics": ["accuracy", "f1", "precision", "recall"], "widget": [{"text": "\u12f5\u121d\u133b\u12ca \u12a3\u1265\u122d\u1203\u121d \u12a3\u1348\u12c8\u122d\u1242 \u1295\u12d8\u120d\u12a3\u1208\u121d \u1205\u12eb\u12cd \u12ae\u12ed\u1291 \u12a3\u1265 \u120d\u1265\u1293 \u12ed\u1290\u1265\u122d"}]}
|
fgaim/tiroberta-sentiment
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"ti",
"dataset:TLMD",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
fgggggg/nnjjnnnn
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
fgua/bert-base-uncased-wikitext2
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fgua/gpt2-wikitext2
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
# NewsSentiment: easy-to-use, high-quality target-dependent sentiment classification for news articles
## Important: [use our PyPI package](https://pypi.org/project/NewsSentiment/) instead of this model on the Hub
The Huggingface Hub architecture currently [does not support](https://github.com/huggingface/transformers/issues/14785) target-dependent sentiment classification since you cannot provide the required inputs, i.e., sentence and target. Thus, we recommend that you use our easy-to-use [PyPI package NewsSentiment](https://pypi.org/project/NewsSentiment/).
## Description
This model is currently the [best-performing](https://aclanthology.org/2021.eacl-main.142.pdf)
targeted sentiment classifier for news articles. In contrast to regular sentiment
classification, targeted sentiment classification lets you provide a target within a sentence,
and the sentiment is predicted only for that target. This is more reliable in many
cases, as the following simplistic example demonstrates: "I like Bert, but I hate Robert."
This model is also available as an easy-to-use PyPI package named [`NewsSentiment`](https://pypi.org/project/NewsSentiment/) and
in its original GitHub repository named [`NewsMTSC`](https://github.com/fhamborg/NewsMTSC), where you will find the dataset the model was trained on, other models for sentiment classification, and a training and testing framework. More information on the model and the dataset (consisting of more than 10k sentences sampled from news articles, each
labeled and agreed upon by at least 5 annotators) can be found in our [EACL paper](https://aclanthology.org/2021.eacl-main.142.pdf). The
dataset, the model, and its source code can be viewed in our [GitHub repository](https://github.com/fhamborg/NewsMTSC).
We recommend using our [PyPI package](https://pypi.org/project/NewsSentiment/) for sentiment classification, since the Huggingface Hub platform seems to [not support](https://github.com/huggingface/transformers/issues/14785) target-dependent sentiment classification.
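For reference, a minimal sketch of the PyPI package interface (the exact call signature is our assumption based on the NewsSentiment documentation; the target text is passed in three parts):
```python
from NewsSentiment import TargetSentimentClassifier

# Minimal sketch (assumed interface): left context, the target phrase, and
# right context are passed as separate arguments.
tsc = TargetSentimentClassifier()
sentiment = tsc.infer_from_text("I like ", "Bert", ", but I hate Robert.")
print(sentiment[0])  # most likely sentiment class for the target "Bert"
```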
# How to cite
If you use the dataset or model, please cite our [paper](https://www.aclweb.org/anthology/2021.eacl-main.142/) ([PDF](https://www.aclweb.org/anthology/2021.eacl-main.142.pdf)):
```
@InProceedings{Hamborg2021b,
author = {Hamborg, Felix and Donnay, Karsten},
title = {NewsMTSC: (Multi-)Target-dependent Sentiment Classification in News Articles},
booktitle = {Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2021)},
year = {2021},
month = {Apr.},
location = {Virtual Event},
}
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-classification", "sentiment-analysis", "sentiment-classification", "targeted-sentiment-classification", "target-depentent-sentiment-classification"], "datasets": "fhamborg/news_sentiment_newsmtsc"}
|
fhamborg/roberta-targeted-sentiment-classification-newsarticles
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"sentiment-analysis",
"sentiment-classification",
"targeted-sentiment-classification",
"target-depentent-sentiment-classification",
"en",
"dataset:fhamborg/news_sentiment_newsmtsc",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
# BERT-DE-NER
## What is it?
This is a German BERT model fine-tuned for named entity recognition.
## Base model & training
This model is based on [bert-base-german-dbmdz-cased](https://huggingface.co/bert-base-german-dbmdz-cased) and has been fine-tuned
for NER on the training data from [GermEval2014](https://sites.google.com/site/germeval2014ner).
## Model results
The results on the test data from GermEval2014 are (entities only):
| Precision | Recall | F1-Score |
|----------:|-------:|---------:|
| 0.817 | 0.842 | 0.829 |
## How to use
```Python
>>> from transformers import pipeline
>>> classifier = pipeline('ner', model="fhswf/bert_de_ner")
>>> classifier('Von der Organisation „medico international“ hieß es, die EU entziehe sich seit vielen Jahren der Verantwortung für die Menschen an ihren Außengrenzen.')
[{'word': 'med', 'score': 0.9996621608734131, 'entity': 'B-ORG', 'index': 6},
{'word': '##ico', 'score': 0.9995362162590027, 'entity': 'I-ORG', 'index': 7},
{'word': 'international',
'score': 0.9996932744979858,
'entity': 'I-ORG',
'index': 8},
{'word': 'eu', 'score': 0.9997008442878723, 'entity': 'B-ORG', 'index': 14}]
```
|
{"language": "de", "license": "cc-by-sa-4.0", "tags": ["German", "de", "NER"], "datasets": ["germeval_14"]}
|
fhswf/bert_de_ner
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"bert",
"token-classification",
"German",
"de",
"NER",
"dataset:germeval_14",
"doi:10.57967/hf/0655",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
fhzh123/k_nc_bert
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-generation
|
transformers
|
# Fibruh Bot Model
|
{"tags": ["conversational"]}
|
fibruh/DialoGPT-small-harrypotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert_v1.1_pubmed-finetuned-ner-finetuned-ner
This model is a fine-tuned version of [fidukm34/biobert_v1.1_pubmed-finetuned-ner](https://huggingface.co/fidukm34/biobert_v1.1_pubmed-finetuned-ner) on the ncbi_disease dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0715
- Precision: 0.8464
- Recall: 0.8872
- F1: 0.8663
- Accuracy: 0.9829
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 340 | 0.0715 | 0.8464 | 0.8872 | 0.8663 | 0.9829 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["ncbi_disease"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "biobert_v1.1_pubmed-finetuned-ner-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "ncbi_disease", "type": "ncbi_disease", "args": "ncbi_disease"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9829142288061745}}]}]}
|
fidukm34/biobert_v1.1_pubmed-finetuned-ner-finetuned-ner
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:ncbi_disease",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert_v1.1_pubmed-finetuned-ner
This model is a fine-tuned version of [monologg/biobert_v1.1_pubmed](https://huggingface.co/monologg/biobert_v1.1_pubmed) on the ncbi_disease dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0657
- Precision: 0.8338
- Recall: 0.8933
- F1: 0.8625
- Accuracy: 0.9827
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 340 | 0.0612 | 0.8268 | 0.85 | 0.8382 | 0.9806 |
| 0.0987 | 2.0 | 680 | 0.0604 | 0.8397 | 0.8848 | 0.8616 | 0.9829 |
| 0.0272 | 3.0 | 1020 | 0.0657 | 0.8338 | 0.8933 | 0.8625 | 0.9827 |
### Framework versions
- Transformers 4.8.1
- Pytorch 1.9.0
- Datasets 1.6.2
- Tokenizers 0.10.3
|
{"tags": ["generated_from_trainer"], "datasets": ["ncbi_disease"], "metrics": ["precision", "recall", "f1", "accuracy"], "model_index": [{"name": "biobert_v1.1_pubmed-finetuned-ner", "results": [{"task": {"name": "Token Classification", "type": "token-classification"}, "dataset": {"name": "ncbi_disease", "type": "ncbi_disease", "args": "ncbi_disease"}, "metric": {"name": "Accuracy", "type": "accuracy", "value": 0.9827274990663513}}]}]}
|
fidukm34/biobert_v1.1_pubmed-finetuned-ner
| null |
[
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:ncbi_disease",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
figge/intpolitics
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{"license": "afl-3.0"}
|
fighterhitx/model-test
| null |
[
"license:afl-3.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
This model can measure semantic similarity between pairs of texts containing figurative language. As far as we know,
this model works slightly better than sup-simCSE-roberta-base. For example:
**sentence 1**: I have been in seventh heaven since Harry entered my life.
**sentence 2**: I have been very happy since Harry entered my life.
the cosine score of SimCSE: 0.897
the cosine score of our model: 0.897
-------------------------------------------------------------------
**sentence 1**: I have been in seventh heaven since Harry entered my life.
**sentence 2**: I have been in pain since Harry entered my life.
the cosine score of SimCSE: 0.846
the cosine score of our model: 0.753
--------------------------------------------------
It is still a big challenge to measure the semantic similarity of figurative language from a sentence-embedding perspective.
Unsupervised models may be of little use here, as the key is to infer the literal meaning of the figurative expression, and annotated data for this task is rare.
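The card does not document an official inference recipe, so the sketch below is only one possible way to obtain sentence embeddings from the checkpoint with plain `transformers`; mean pooling over the last hidden states is an assumption on our part, not a documented choice of the authors.
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("figurative-nlp/se4fig-roberta-base")
model = AutoModel.from_pretrained("figurative-nlp/se4fig-roberta-base")

def embed(text):
    # Mean-pool the token embeddings, masking out padding (pooling choice is assumed).
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    mask = inputs["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

a = embed("I have been in seventh heaven since Harry entered my life.")
b = embed("I have been in pain since Harry entered my life.")
print(torch.nn.functional.cosine_similarity(a, b).item())
```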
|
{}
|
figurative-nlp/se4fig-roberta-base
| null |
[
"transformers",
"pytorch",
"roberta",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
This model converts a literal expression into a figurative/metaphorical expression. Below is an example of how to use it:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("figurative-nlp/t5-figurative-generation")
model = AutoModelForSeq2SeqLM.from_pretrained("figurative-nlp/t5-figurative-generation")

input_ids = tokenizer(
    "research is <m> very difficult </m> for me.", return_tensors="pt"
).input_ids  # Batch size 1

outputs = model.generate(input_ids, num_beams=5)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
# result: research is a tough nut to crack for me.
```
For example (the `<m>` and `</m>` markers tell the model which literal expression should be converted into a figurative one):
**Input**: as of a cloud that softly <m> covers </m> the sun.
**Output**: as of a cloud that softly drapes over the sun.
**Input**: that car coming around the corner <m> surprised me. </m>
**Output**: that car coming around the corner knocked my socks off.
Note: the figurative language here includes metaphors, idioms and similes. We do not guarantee that the generated results will be satisfactory; we are still working to improve the model.
|
{}
|
figurative-nlp/t5-figurative-generation
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text2text-generation
|
transformers
|
This model converts a figurative/metaphorical expression into its literal paraphrase. Below is an example of how to use it:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("figurative-nlp/t5-figurative-paraphrase")
model = AutoModelForSeq2SeqLM.from_pretrained("figurative-nlp/t5-figurative-paraphrase")

input_ids = tokenizer(
    "paraphrase the sentence : i will talk this story to you from A to Z", return_tensors="pt"
).input_ids  # Batch size 1

outputs = model.generate(input_ids, num_beams=5)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
# result: i will talk this story to you from beginning to end.
```
For example:
**Input**: He is always bang on when he makes a speech.
**Output**: He is always precise when he makes a speech.
**Input**: He always buy what he said.
**Output**: He always agree with what he said.
**Input**: Your team will be done like dinner if they play against the all-star team.
**Output**: Your team will be defeated if they play against the all-star team. (this one is not particularly accurate)
Note: the figurative language here includes metaphors, idioms and similes. We do not guarantee that the generated results will be satisfactory; we are still working to improve the model.
|
{}
|
figurative-nlp/t5-figurative-paraphrase
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/huggingface/prunebert-base-uncased-6-finepruned-w-distil-squad"
headers = {"Authorization": "Bearer api_UXqrzQBiZKXaWxstVwEKcYvHQpGSGiQGbr"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": {
        "question": "What's my name?",
        "context": "My name is Clara and I live in Berkeley.",
    },
})
```
|
{}
|
fihtrotuld/123
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# GPT2 base style transfer paraphraser
This is the trained base-model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.
## Citation
If you found this model useful, please cite the original work:
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```
|
{}
|
filco306/gpt2-base-style-paraphraser
| null |
[
"transformers",
"pytorch",
"text-generation",
"arxiv:2010.05700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# GPT2 Bible style transfer paraphraser
This is the trained Bible model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.
## Citation
If you found this model useful, please cite the original work:
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```
|
{}
|
filco306/gpt2-bible-paraphraser
| null |
[
"transformers",
"pytorch",
"text-generation",
"arxiv:2010.05700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# GPT2 Romantic poetry style transfer paraphraser
This is the trained Romantic poetry-model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.
## Citation
If you found this model useful, please cite the original work:
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```
|
{}
|
filco306/gpt2-romantic-poetry-paraphraser
| null |
[
"transformers",
"pytorch",
"text-generation",
"arxiv:2010.05700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# GPT2 Shakespeare style transfer paraphraser
This is the trained Shakespeare-model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.
## Citation
If you found this model useful, please cite the original work:
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```
|
{}
|
filco306/gpt2-shakespeare-paraphraser
| null |
[
"transformers",
"pytorch",
"text-generation",
"arxiv:2010.05700",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# GPT2 Switchboard style transfer paraphraser
This is the trained Switchboard-model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.
## Citation
If you found this model useful, please cite the original work:
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```
|
{}
|
filco306/gpt2-switchboard-paraphraser
| null |
[
"transformers",
"pytorch",
"text-generation",
"arxiv:2010.05700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-generation
|
transformers
|
# GPT2 Tweet style transfer paraphraser
This is the trained Tweet-model from the paper [Reformulating Unsupervised Style Transfer as Paraphrase Generation](https://arxiv.org/abs/2010.05700) by Krishna K. et al. Note that I (the uploader) am not the author of the paper. Permission to upload to Huggingface was given by the main author.
## Citation
If you found this model useful, please cite the original work:
```
@inproceedings{style20,
author={Kalpesh Krishna and John Wieting and Mohit Iyyer},
Booktitle = {Empirical Methods in Natural Language Processing},
Year = "2020",
Title={Reformulating Unsupervised Style Transfer as Paraphrase Generation},
}
```
|
{}
|
filco306/gpt2-tweet-paraphraser
| null |
[
"transformers",
"pytorch",
"text-generation",
"arxiv:2010.05700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
image-classification
|
transformers
|
# beer_vs_wine
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
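Outside the Colab demo, the classifier should also load with the standard image-classification pipeline. A minimal sketch; the image path below is a placeholder, not a file shipped with the model.
```python
from transformers import pipeline

classifier = pipeline("image-classification", model="filipafcastro/beer_vs_wine")
print(classifier("path/to/drink_photo.jpg"))  # placeholder path; an image URL also works
```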
## Example Images
#### beer

#### wine

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
filipafcastro/beer_vs_wine
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null |
transformers
|
{}
|
finiteautomata/bert-contextualized-hate-category-es
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
finiteautomata/bert-contextualized-hate-speech-es
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null |
transformers
|
{}
|
finiteautomata/bert-non-contextualized-hate-category-es
| null |
[
"transformers",
"pytorch",
"bert",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
{}
|
finiteautomata/bert-non-contextualized-hate-speech-es
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
finiteautomata/bert-title-body-hate-speech-es
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
# Emotion Analysis in English
## bertweet-base-emotion-analysis
Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with EmoEvent corpus for Emotion detection in English. Base model is [BerTweet](https://huggingface.co/vinai/bertweet-base).
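For quick experiments, the checkpoint should load with the `transformers` text-classification pipeline (pysentimiento also provides its own wrapper). A minimal sketch, with the emotion label set coming from the EmoEvent annotation scheme:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="finiteautomata/bertweet-base-emotion-analysis")
print(classifier("I can't believe we finally won the match!"))
```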
## License
`pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses.
1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php)
2. [SEMEval 2017 Dataset license]()
## Citation
If you use `pysentimiento` in your work, please cite [this paper](https://arxiv.org/abs/2106.09462)
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
and also the dataset related paper
```
@inproceedings{del2020emoevent,
title={EmoEvent: A multilingual emotion corpus based on different events},
author={del Arco, Flor Miriam Plaza and Strapparava, Carlo and Lopez, L Alfonso Urena and Mart{\'\i}n-Valdivia, M Teresa},
booktitle={Proceedings of the 12th Language Resources and Evaluation Conference},
pages={1492--1498},
year={2020}
}
```
Enjoy! 🤗
|
{"language": ["en"], "tags": ["emotion-analysis"]}
|
finiteautomata/bertweet-base-emotion-analysis
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"text-classification",
"emotion-analysis",
"en",
"arxiv:2106.09462",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# Sentiment Analysis in English
## bertweet-sentiment-analysis
Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with SemEval 2017 corpus (around ~40k tweets). Base model is [BERTweet](https://github.com/VinAIResearch/BERTweet), a RoBERTa model trained on English tweets.
Uses `POS`, `NEG`, `NEU` labels.
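A minimal usage sketch with the `transformers` pipeline (pysentimiento also provides its own wrapper); the returned label is one of `POS`, `NEG`, `NEU`:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="finiteautomata/bertweet-base-sentiment-analysis")
print(classifier("This movie was an absolute waste of time."))
```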
## License
`pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses.
1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php)
2. [SEMEval 2017 Dataset license]()
## Citation
If you use `pysentimiento` in your work, please cite [this paper](https://arxiv.org/abs/2106.09462)
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
Enjoy! 🤗
|
{"language": ["en"], "tags": ["sentiment-analysis"]}
|
finiteautomata/bertweet-base-sentiment-analysis
| null |
[
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"sentiment-analysis",
"en",
"arxiv:2106.09462",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# Emotion Analysis in Spanish
## beto-emotion-analysis
Repository: [https://github.com/finiteautomata/pysentimiento/](https://github.com/finiteautomata/pysentimiento/)
Model trained with TASS 2020 Task 2 corpus for Emotion detection in Spanish. Base model is [BETO](https://github.com/dccuchile/beto), a BERT model trained in Spanish.
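A minimal usage sketch with the `transformers` pipeline (pysentimiento also provides its own wrapper); the emotion labels follow the TASS 2020 Task 2 annotation scheme:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="finiteautomata/beto-emotion-analysis")
print(classifier("¡Estoy muy feliz con los resultados!"))
```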
## License
`pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses.
1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php)
2. [SEMEval 2017 Dataset license]()
## Citation
If you use `pysentimiento` in your work, please cite [this paper](https://arxiv.org/abs/2106.09462)
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
and also the dataset related paper
```
@inproceedings{del2020emoevent,
title={EmoEvent: A multilingual emotion corpus based on different events},
author={del Arco, Flor Miriam Plaza and Strapparava, Carlo and Lopez, L Alfonso Urena and Mart{\'\i}n-Valdivia, M Teresa},
booktitle={Proceedings of the 12th Language Resources and Evaluation Conference},
pages={1492--1498},
year={2020}
}
```
Enjoy! 🤗
|
{"language": ["es"], "tags": ["emotion-analysis"]}
|
finiteautomata/beto-emotion-analysis
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"emotion-analysis",
"es",
"arxiv:2106.09462",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# Targeted Sentiment Analysis in News Headlines
BERT classifier fine-tuned on a news headlines dataset annotated for target polarity.
(details to be published)
## Examples
Input is as follows
`Headline [SEP] Target`
where headline is the news title and target is an entity present in the headline.
Try
`Alberto Fernández: "El gobierno de Macri fue un desastre" [SEP] Macri` (should be NEG)
and
`Alberto Fernández: "El gobierno de Macri fue un desastre" [SEP] Alberto Fernández` (POS or NEU)
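A hedged sketch of how this input format can be fed to the model with the `transformers` pipeline; it assumes the tokenizer maps the literal `[SEP]` string to its separator token (standard behaviour for BERT tokenizers):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="finiteautomata/beto-headlines-sentiment-analysis")

headline = 'Alberto Fernández: "El gobierno de Macri fue un desastre"'
print(classifier(f"{headline} [SEP] Macri"))              # expected NEG
print(classifier(f"{headline} [SEP] Alberto Fernández"))  # expected POS or NEU
```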
|
{}
|
finiteautomata/beto-headlines-sentiment-analysis
| null |
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
text-classification
|
transformers
|
# Sentiment Analysis in Spanish
## beto-sentiment-analysis
**NOTE: this model will be removed soon -- use [pysentimiento/robertuito-sentiment-analysis](https://huggingface.co/pysentimiento/robertuito-sentiment-analysis) instead**
Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/pysentimiento/pysentimiento/)
Model trained with TASS 2020 corpus (around ~5k tweets) of several dialects of Spanish. Base model is [BETO](https://github.com/dccuchile/beto), a BERT model trained in Spanish.
Uses `POS`, `NEG`, `NEU` labels.
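A minimal usage sketch with the `transformers` pipeline (pysentimiento also provides its own wrapper); the returned label is one of `POS`, `NEG`, `NEU`:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="finiteautomata/beto-sentiment-analysis")
print(classifier("Qué gran jornada de trabajo, estoy muy contento."))
```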
## License
`pysentimiento` is an open-source library for non-commercial use and scientific research purposes only. Please be aware that models are trained with third-party datasets and are subject to their respective licenses.
1. [TASS Dataset license](http://tass.sepln.org/tass_data/download.php)
2. [SEMEval 2017 Dataset license]()
## Citation
If you use this model in your work, please cite the following papers:
```
@misc{perez2021pysentimiento,
title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks},
author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque},
year={2021},
eprint={2106.09462},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{canete2020spanish,
title={Spanish pre-trained bert model and evaluation data},
author={Ca{\~n}ete, Jos{\'e} and Chaperon, Gabriel and Fuentes, Rodrigo and Ho, Jou-Hui and Kang, Hojin and P{\'e}rez, Jorge},
journal={Pml4dc at iclr},
volume={2020},
number={2020},
pages={1--10},
year={2020}
}
```
Enjoy! 🤗
|
{"language": ["es"], "tags": ["sentiment-analysis"]}
|
finiteautomata/beto-sentiment-analysis
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"sentiment-analysis",
"es",
"arxiv:2106.09462",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
{}
|
finiteautomata/betonews-bodycontext
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
finiteautomata/betonews-nonecontext
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
piuba-bigdata/betonews-tweetcontext
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
finiteautomata/robertuitonews-cased-tweetcontext
| null |
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
finiteautomata/robertuitonews-tweetcontext
| null |
[
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fionakaeser/xlm-roberta-base-finetuned-marc-en
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
image-classification
|
transformers
|
# llama_or_what
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### alpaca

#### guanaco

#### llama

#### vicuna

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
firebolt/llama_or_what
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
image-classification
|
transformers
|
# llama_or_what2
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### alpaca

#### guanaco

#### llama

#### vicuna

|
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
|
firebolt/llama_or_what2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
{"widget": [{"text": "Pneumonia adalah penyakit yang disebabkan oleh [MASK]"}]}
|
firqaaa/indo-biobert-base-uncased
| null |
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
firstmediabuyer/sber
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 310939
## Validation Metrics
- Loss: 0.027471264824271202
- Accuracy: 0.9931118314424635
- Precision: 0.946236559139785
- Recall: 0.88
- AUC: 0.9952871621621622
- F1: 0.911917098445596
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/fjarrett/autonlp-giveaway_detection_05-310939
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("fjarrett/autonlp-giveaway_detection_05-310939", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("fjarrett/autonlp-giveaway_detection_05-310939", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "en", "tags": ["autonlp"], "datasets": ["fjarrett/autonlp-data-giveaway_detection_05"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
|
popsmash-admin/autonlp-giveaway_detection_05-310939
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autonlp",
"en",
"dataset:fjarrett/autonlp-data-giveaway_detection_05",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
fill-mask
|
transformers
|
{}
|
fjluque/dummy-model
| null |
[
"transformers",
"tf",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
fill-mask
|
transformers
|
{}
|
fjluque/dummy-model2
| null |
[
"transformers",
"tf",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2157
- Accuracy: 0.9173
## Model description
More information needed
## Intended uses & limitations
More information needed
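As a rough sketch (not part of the original card), the fine-tuned checkpoint can be loaded with the text-classification pipeline. The label names returned by the model are not documented here, so interpret them against the amazon_reviews_multi label scheme.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="fjluque/roberta-base-bne-finetuned-amazon_reviews_multi")
print(classifier("El producto llegó roto y la atención al cliente fue pésima."))
```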
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1125 | 1.0 | 13 | 0.2066 | 0.9165 |
| 0.0186 | 2.0 | 26 | 0.2157 | 0.9173 |
### Framework versions
- Transformers 4.10.2
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["amazon_reviews_multi"], "metrics": ["accuracy"], "model-index": [{"name": "roberta-base-bne-finetuned-amazon_reviews_multi", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "amazon_reviews_multi", "type": "amazon_reviews_multi", "args": "es"}, "metrics": [{"type": "accuracy", "value": 0.91725, "name": "Accuracy"}]}]}]}
|
fjluque/roberta-base-bne-finetuned-amazon_reviews_multi
| null |
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
null | null |
{}
|
fk/hii
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
null | null |
{}
|
fkHug/model3FromWav2vec
| null |
[
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
|
automatic-speech-recognition
|
transformers
|
This is my model card.
|
{}
|
fkHug/modelFromWav2vec
| null |
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
flair
|
## English Chunking in Flair (fast model)
This is the fast phrase chunking model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **96,22** (CoNLL-2000)
Predicts 10 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| ADJP | adjectival |
| ADVP | adverbial |
| CONJP | conjunction |
| INTJ | interjection |
| LST | list marker |
| NP | noun phrase |
| PP | prepositional |
| PRT | particle |
| SBAR | subordinate clause |
| VP | verb phrase |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/chunk-english-fast")
# make example sentence
sentence = Sentence("The happy man has been eating at the diner")
# predict chunk tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted chunk spans
print('The following chunk tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('np'):
print(entity)
```
This yields the following output:
```
Span [1,2,3]: "The happy man" [− Labels: NP (0.9958)]
Span [4,5,6]: "has been eating" [− Labels: VP (0.8759)]
Span [7]: "at" [− Labels: PP (1.0)]
Span [8,9]: "the diner" [− Labels: NP (0.9991)]
```
So, the spans "*The happy man*" and "*the diner*" are labeled as **noun phrases** (NP) and "*has been eating*" is labeled as a **verb phrase** (VP) in the sentence "*The happy man has been eating at the diner*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import CONLL_2000
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. get the corpus
corpus: Corpus = CONLL_2000()
# 2. what tag do we want to predict?
tag_type = 'np'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# contextual string embeddings, forward
FlairEmbeddings('news-forward-fast'),
# contextual string embeddings, backward
FlairEmbeddings('news-backward-fast'),
]
# embedding stack consists of forward and backward Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/chunk-english-fast',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2000"], "widget": [{"text": "The happy man has been eating at the diner"}]}
|
flair/chunk-english-fast
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:conll2000",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
flair
|
## English Chunking in Flair (default model)
This is the standard phrase chunking model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **96,48** (CoNLL-2000)
Predicts 10 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| ADJP | adjectival |
| ADVP | adverbial |
| CONJP | conjunction |
| INTJ | interjection |
| LST | list marker |
| NP | noun phrase |
| PP | prepositional |
| PRT | particle |
| SBAR | subordinate clause |
| VP | verb phrase |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/chunk-english")
# make example sentence
sentence = Sentence("The happy man has been eating at the diner")
# predict chunk tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted chunk spans
print('The following chunk tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('np'):
print(entity)
```
This yields the following output:
```
Span [1,2,3]: "The happy man" [− Labels: NP (0.9958)]
Span [4,5,6]: "has been eating" [− Labels: VP (0.8759)]
Span [7]: "at" [− Labels: PP (1.0)]
Span [8,9]: "the diner" [− Labels: NP (0.9991)]
```
So, the spans "*The happy man*" and "*the diner*" are labeled as **noun phrases** (NP) and "*has been eating*" is labeled as a **verb phrase** (VP) in the sentence "*The happy man has been eating at the diner*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import CONLL_2000
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. get the corpus
corpus: Corpus = CONLL_2000()
# 2. what tag do we want to predict?
tag_type = 'np'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# contextual string embeddings, forward
FlairEmbeddings('news-forward'),
# contextual string embeddings, backward
FlairEmbeddings('news-backward'),
]
# embedding stack consists of forward and backward Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/chunk-english',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2000"], "widget": [{"text": "The happy man has been eating at the diner"}]}
|
flair/chunk-english
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:conll2000",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
flair
|
## English Verb Disambiguation in Flair (fast model)
This is the fast verb disambiguation model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **88,27** (Ontonotes) - predicts [Proposition Bank verb frames](http://verbs.colorado.edu/propbank/framesets-english-aliases/).
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/frame-english-fast")
# make example sentence
sentence = Sentence("George returned to Berlin to return his hat.")
# predict frame tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted frame spans
print('The following frame tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('frame'):
print(entity)
```
This yields the following output:
```
Span [2]: "returned" [− Labels: return.01 (0.9867)]
Span [6]: "return" [− Labels: return.02 (0.4741)]
```
So, the word "*returned*" is labeled as **return.01** (as in *go back somewhere*) while "*return*" is labeled as **return.02** (as in *give back something*) in the sentence "*George returned to Berlin to return his hat*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself)
corpus = ColumnCorpus(
"resources/tasks/srl", column_format={1: "text", 11: "frame"}
)
# 2. what tag do we want to predict?
tag_type = 'frame'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
BytePairEmbeddings("en"),
FlairEmbeddings("news-forward-fast"),
FlairEmbeddings("news-backward-fast"),
]
# embedding stack consists of byte-pair and Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/frame-english-fast',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2019flair,
title={FLAIR: An easy-to-use framework for state-of-the-art NLP},
author={Akbik, Alan and Bergmann, Tanja and Blythe, Duncan and Rasul, Kashif and Schweter, Stefan and Vollgraf, Roland},
booktitle={{NAACL} 2019, 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)},
pages={54--59},
year={2019}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["ontonotes"], "widget": [{"text": "George returned to Berlin to return his hat."}]}
|
flair/frame-english-fast
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:ontonotes",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
flair
|
## English Verb Disambiguation in Flair (default model)
This is the standard verb disambiguation model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **89,34** (Ontonotes) - predicts [Proposition Bank verb frames](http://verbs.colorado.edu/propbank/framesets-english-aliases/).
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/frame-english")
# make example sentence
sentence = Sentence("George returned to Berlin to return his hat.")
# predict frame tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted frame spans
print('The following frame tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('frame'):
print(entity)
```
This yields the following output:
```
Span [2]: "returned" [− Labels: return.01 (0.9951)]
Span [6]: "return" [− Labels: return.02 (0.6361)]
```
So, the word "*returned*" is labeled as **return.01** (as in *go back somewhere*) while "*return*" is labeled as **return.02** (as in *give back something*) in the sentence "*George returned to Berlin to return his hat*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. load the corpus (Ontonotes does not ship with Flair, you need to download and reformat into a column format yourself)
corpus = ColumnCorpus(
"resources/tasks/srl", column_format={1: "text", 11: "frame"}
)
# 2. what tag do we want to predict?
tag_type = 'frame'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
BytePairEmbeddings("en"),
FlairEmbeddings("news-forward"),
FlairEmbeddings("news-backward"),
]
# embedding stack consists of byte-pair and Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/frame-english',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2019flair,
title={FLAIR: An easy-to-use framework for state-of-the-art NLP},
author={Akbik, Alan and Bergmann, Tanja and Blythe, Duncan and Rasul, Kashif and Schweter, Stefan and Vollgraf, Roland},
booktitle={{NAACL} 2019, 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)},
pages={54--59},
year={2019}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["ontonotes"], "widget": [{"text": "George returned to Berlin to return his hat."}]}
|
flair/frame-english
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:ontonotes",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
flair
|
# Danish NER in Flair (default model)
This is the standard 4-class NER model for Danish that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **81.78** (DaNER)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on Transformer embeddings and LSTM-CRF.
---
# Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-danish")
# make example sentence
sentence = Sentence("Jens Peter Hansen kommer fra Danmark")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2,3]: "Jens Peter Hansen" [− Labels: PER (0.9961)]
Span [6]: "Danmark" [− Labels: LOC (0.9816)]
```
So, the entities "*Jens Peter Hansen*" (labeled as a **person**) and "*Danmark*" (labeled as a **location**) are found in the sentence "*Jens Peter Hansen kommer fra Danmark*".
---
### Training: Script to train this model
The model was trained by the [DaNLP project](https://github.com/alexandrainst/danlp) using the [DaNE corpus](https://github.com/alexandrainst/danlp/blob/master/docs/docs/datasets.md#danish-dependency-treebank-dane-dane). Check their repo for more information.
The following Flair script may be used to train such a model:
```python
from flair.data import Corpus
from flair.datasets import DANE
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. get the corpus
corpus: Corpus = DANE()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# Danish word embeddings
WordEmbeddings('da'),
# contextual string embeddings, forward
FlairEmbeddings('da-forward'),
# contextual string embeddings, backward
FlairEmbeddings('da-backward'),
]
# embedding stack consists of word and Flair embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/ner-danish',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following papers when using this model.
```
@inproceedings{akbik-etal-2019-flair,
title = "{FLAIR}: An Easy-to-Use Framework for State-of-the-Art {NLP}",
author = "Akbik, Alan and
Bergmann, Tanja and
Blythe, Duncan and
Rasul, Kashif and
Schweter, Stefan and
Vollgraf, Roland",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics (Demonstrations)",
year = "2019",
url = "https://www.aclweb.org/anthology/N19-4010",
pages = "54--59",
}
```
And check the [DaNLP project](https://github.com/alexandrainst/danlp) for more information.
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
{"language": "da", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["DaNE"], "widget": [{"text": "Jens Peter Hansen kommer fra Danmark"}]}
|
flair/ner-danish
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"da",
"dataset:DaNE",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
flair
|
## Dutch NER in Flair (large model)
This is the large 4-class NER model for Dutch that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **95,25** (CoNLL-03 Dutch)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on document-level XLM-R embeddings and [FLERT](https://arxiv.org/pdf/2011.06993v1.pdf/).
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-dutch-large")
# make example sentence
sentence = Sentence("George Washington ging naar Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (1.0)]
Span [5]: "Washington" [− Labels: LOC (1.0)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging naar Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
import torch
# 1. get the corpus
from flair.datasets import CONLL_03_DUTCH
corpus = CONLL_03_DUTCH()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize fine-tuneable transformer embeddings WITH document context
from flair.embeddings import TransformerWordEmbeddings
embeddings = TransformerWordEmbeddings(
model='xlm-roberta-large',
layers="-1",
subtoken_pooling="first",
fine_tune=True,
use_context=True,
)
# 5. initialize bare-bones sequence tagger (no CRF, no RNN, no reprojection)
from flair.models import SequenceTagger
tagger = SequenceTagger(
hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type='ner',
use_crf=False,
use_rnn=False,
reproject_embeddings=False,
)
# 6. initialize trainer with AdamW optimizer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus, optimizer=torch.optim.AdamW)
# 7. run training with XLM parameters (20 epochs, small LR)
from torch.optim.lr_scheduler import OneCycleLR
trainer.train('resources/taggers/ner-dutch-large',
learning_rate=5.0e-6,
mini_batch_size=4,
mini_batch_chunk_size=1,
max_epochs=20,
scheduler=OneCycleLR,
embeddings_storage_mode='none',
weight_decay=0.,
)
```
---
### Cite
Please cite the following paper when using this model.
```
@misc{schweter2020flert,
title={FLERT: Document-Level Features for Named Entity Recognition},
author={Stefan Schweter and Alan Akbik},
year={2020},
eprint={2011.06993},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
{"language": "nl", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "widget": [{"text": "George Washington ging naar Washington"}]}
|
flair/ner-dutch-large
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"nl",
"dataset:conll2003",
"arxiv:2011.06993",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
flair
|
# Dutch NER in Flair (default model)
This is the standard 4-class NER model for Dutch that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **92,58** (CoNLL-03)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on Transformer embeddings and LSTM-CRF.
---
# Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-dutch")
# make example sentence
sentence = Sentence("George Washington ging naar Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (0.997)]
Span [5]: "Washington" [− Labels: LOC (0.9996)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington ging naar Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import CONLL_03_DUTCH
from flair.embeddings import TransformerWordEmbeddings
from flair.models import SequenceTagger
from flair.trainers import ModelTrainer
# 1. get the corpus
corpus: Corpus = CONLL_03_DUTCH()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize embeddings
embeddings = TransformerWordEmbeddings('wietsedv/bert-base-dutch-cased')
# 5. initialize sequence tagger
tagger: SequenceTagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
trainer: ModelTrainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/ner-dutch',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik-etal-2019-flair,
title = "{FLAIR}: An Easy-to-Use Framework for State-of-the-Art {NLP}",
author = "Akbik, Alan and
Bergmann, Tanja and
Blythe, Duncan and
Rasul, Kashif and
Schweter, Stefan and
Vollgraf, Roland",
booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics (Demonstrations)",
year = "2019",
url = "https://www.aclweb.org/anthology/N19-4010",
pages = "54--59",
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
{"language": "nl", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "widget": [{"text": "George Washington ging naar Washington."}]}
|
flair/ner-dutch
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"nl",
"dataset:conll2003",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
token-classification
|
flair
|
## English NER in Flair (fast model)
This is the fast 4-class NER model for English that ships with [Flair](https://github.com/flairNLP/flair/).
F1-Score: **92,92** (corrected CoNLL-03)
Predicts 4 tags:
| **tag** | **meaning** |
|---------------------------------|-----------|
| PER | person name |
| LOC | location name |
| ORG | organization name |
| MISC | other name |
Based on [Flair embeddings](https://www.aclweb.org/anthology/C18-1139/) and LSTM-CRF.
---
### Demo: How to use in Flair
Requires: **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`)
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-english-fast")
# make example sentence
sentence = Sentence("George Washington went to Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
This yields the following output:
```
Span [1,2]: "George Washington" [− Labels: PER (0.9515)]
Span [5]: "Washington" [− Labels: LOC (0.992)]
```
So, the entities "*George Washington*" (labeled as a **person**) and "*Washington*" (labeled as a **location**) are found in the sentence "*George Washington went to Washington*".
---
### Training: Script to train this model
The following Flair script was used to train this model:
```python
from flair.data import Corpus
from flair.datasets import CONLL_03
from flair.embeddings import WordEmbeddings, StackedEmbeddings, FlairEmbeddings
# 1. get the corpus
corpus: Corpus = CONLL_03()
# 2. what tag do we want to predict?
tag_type = 'ner'
# 3. make the tag dictionary from the corpus
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
# 4. initialize each embedding we use
embedding_types = [
# GloVe embeddings
WordEmbeddings('glove'),
# contextual string embeddings, forward
FlairEmbeddings('news-forward-fast'),
# contextual string embeddings, backward
FlairEmbeddings('news-backward-fast'),
]
# embedding stack consists of Flair and GloVe embeddings
embeddings = StackedEmbeddings(embeddings=embedding_types)
# 5. initialize sequence tagger
from flair.models import SequenceTagger
tagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type)
# 6. initialize trainer
from flair.trainers import ModelTrainer
trainer = ModelTrainer(tagger, corpus)
# 7. run training
trainer.train('resources/taggers/ner-english',
train_with_dev=True,
max_epochs=150)
```
---
### Cite
Please cite the following paper when using this model.
```
@inproceedings{akbik2018coling,
title={Contextual String Embeddings for Sequence Labeling},
author={Akbik, Alan and Blythe, Duncan and Vollgraf, Roland},
booktitle = {{COLING} 2018, 27th International Conference on Computational Linguistics},
pages = {1638--1649},
year = {2018}
}
```
---
### Issues?
The Flair issue tracker is available [here](https://github.com/flairNLP/flair/issues/).
|
{"language": "en", "tags": ["flair", "token-classification", "sequence-tagger-model"], "datasets": ["conll2003"], "widget": [{"text": "George Washington went to Washington"}]}
|
flair/ner-english-fast
| null |
[
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"en",
"dataset:conll2003",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|