| Column | Type | Range / values |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | n/a |
| tags | listlengths | 1 to 1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25 to 25 |
text-classification
transformers
# xlm-r-finetuned-toxic-political-tweets-es This model is based on the pre-trained model [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) and was fine-tuned on a dataset of tweets from members of the [Spanish Congress of Deputies](https://www.congreso.es/), annotated according to the level of political toxicity they generate. ### Inputs The model has been trained on the text of Spanish tweets authored by politicians in 2021, so this is the expected input and performance can degrade when applied to texts from other domains. ### Outputs The model predicts two signals of political toxicity: * Toxic: the tweet has at least some degree of toxicity. * Very Toxic: the tweet has a strong degree of toxicity. A value between 0 and 1 is predicted for each signal. ### Intended uses & limitations The model was created to be used as a toxicity detector for Spanish tweets from Spanish Congress Deputies. If the intended use is different, for instance toxicity detection on film reviews, the results won't be reliable and you should look for another model built for that specific purpose. ### How to use The model can be used directly with a text-classification pipeline: ```python >>> from transformers import pipeline >>> text = "Es usted un auténtico impresentable, su señoría." >>> pipe = pipeline("text-classification", model="Newtral/xlm-r-finetuned-toxic-political-tweets-es") >>> pipe(text, return_all_scores=True) [[{'label': 'toxic', 'score': 0.92560875415802}, {'label': 'very toxic', 'score': 0.8310967683792114}]] ``` ### Training procedure The pre-trained model was fine-tuned for sequence classification using the following hyperparameters, which were selected on a validation set: * Batch size = 32 * Learning rate = 2e-5 * Epochs = 5 * Max length = 64 The optimizer used was AdamW and the loss optimized was binary cross-entropy with class weights proportional to the class imbalance.
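Because the two signals are scored independently, downstream code has to choose its own decision thresholds. A minimal sketch (not from the model card) that turns the pipeline output above into boolean flags, assuming an arbitrary 0.5 cut-off:

```python
from transformers import pipeline

# Sketch only: the 0.5 threshold is an illustrative assumption, not a value recommended by the model authors.
pipe = pipeline("text-classification", model="Newtral/xlm-r-finetuned-toxic-political-tweets-es")
scores = pipe("Es usted un auténtico impresentable, su señoría.", return_all_scores=True)[0]
flags = {entry["label"]: entry["score"] >= 0.5 for entry in scores}
print(flags)  # e.g. {'toxic': True, 'very toxic': True}
```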
{"language": "es", "license": "apache-2.0"}
Newtral/xlm-r-finetuned-toxic-political-tweets-es
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "text-classification", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nezz222/Nerezz
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NhatPham/my-new-shiny-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
image-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> ## Labels - 0: Object - 1: Recycle - 2: Non-Recycle # vit-base-patch16-224 This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1510 - Accuracy: 0.9443 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 60 - eval_batch_size: 60 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 240 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1438 | 1.0 | 150 | 0.1645 | 0.9353 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
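The card doesn't include usage code. A minimal inference sketch, assuming the standard `transformers` image-classification pipeline; the image path is a placeholder:

```python
from transformers import pipeline

# Usage sketch (assumption, not from the card): classify an image into the labels listed above.
classifier = pipeline("image-classification", model="NhatPham/vit-base-patch16-224-recylce-ft")
predictions = classifier("example.jpg")  # placeholder path to any RGB image
print(predictions)  # list of {'label': ..., 'score': ...} dicts, highest score first
```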
{"license": "apache-2.0", "tags": ["image-classification", "generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "vit-base-patch16-224", "results": []}]}
NhatPham/vit-base-patch16-224-recylce-ft
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
audio-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.1258 - Accuracy: 0.9793 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1561 | 1.0 | 399 | 1.1127 | 0.6643 | | 0.4803 | 2.0 | 798 | 0.3547 | 0.9687 | | 0.2855 | 3.0 | 1197 | 0.1663 | 0.9763 | | 0.1987 | 4.0 | 1596 | 0.1258 | 0.9793 | | 0.2097 | 5.0 | 1995 | 0.1171 | 0.9791 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
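The card doesn't show how to run the model. A minimal keyword-spotting sketch, assuming the generic `transformers` audio-classification pipeline; the audio path is a placeholder and should point to 16 kHz mono audio:

```python
from transformers import pipeline

# Usage sketch (assumption, not from the card): run keyword-spotting inference on a short clip.
classifier = pipeline("audio-classification", model="NhatPham/wav2vec2-base-finetuned-ks")
predictions = classifier("keyword.wav")  # placeholder path; 16 kHz mono audio
print(predictions)  # e.g. [{'label': 'yes', 'score': ...}, ...] for the SUPERB keyword-spotting labels
```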
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["superb"], "metrics": ["accuracy"], "model-index": [{"name": "wav2vec2-base-finetuned-ks", "results": []}]}
NhatPham/wav2vec2-base-finetuned-ks
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "audio-classification", "generated_from_trainer", "dataset:superb", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# wav2vec2-large-xlsr-53-french Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on French using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "fr", split="test[:20%]") processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-french") model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-french") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the French test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "fr", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-french") model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-french") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 29.31 % ## Training V1 of the Common Voice `train` and `validation` datasets was used for training. ## Testing 20% of V6.1 of the Common Voice `test` dataset was used for testing.
{"language": "fr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-French by Nhut DOAN NGUYEN", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice fr", "type": "common_voice", "args": "fr"}, "metrics": [{"type": "wer", "value": "xx.xx", "name": "Test WER"}]}]}]}
Nhut/wav2vec2-large-xlsr-french
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "fr", "dataset:common_voice", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Wav2Vec2-Large-XLSR-53-Vietnamese Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice), [FOSD](https://data.mendeley.com/datasets/k9sxg2twv4/4) and [VIVOS](https://ailab.hcmus.edu.vn/vivos). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor ENCODER = { "ia ": "iê ", "ìa ": "iề ", "ía ": "iế ", "ỉa ": "iể ", "ĩa ": "iễ ", "ịa ": "iệ ", "ya ": "yê ", "ỳa ": "yề ", "ýa ": "yế ", "ỷa ": "yể ", "ỹa ": "yễ ", "ỵa ": "yệ ", "ua ": "uô ", "ùa ": "uồ ", "úa ": "uố ", "ủa ": "uổ ", "ũa ": "uỗ ", "ụa ": "uộ ", "ưa ": "ươ ", "ừa ": "ườ ", "ứa ": "ướ ", "ửa ": "ưở ", "ữa ": "ưỡ ", "ựa ": "ượ ", "ke": "ce", "kè": "cè", "ké": "cé", "kẻ": "cẻ", "kẽ": "cẽ", "kẹ": "cẹ", "kê": "cê", "kề": "cề", "kế": "cế", "kể": "cể", "kễ": "cễ", "kệ": "cệ", "ki": "ci", "kì": "cì", "kí": "cí", "kỉ": "cỉ", "kĩ": "cĩ", "kị": "cị", "ky": "cy", "kỳ": "cỳ", "ký": "cý", "kỷ": "cỷ", "kỹ": "cỹ", "kỵ": "cỵ", "ghe": "ge", "ghè": "gè", "ghé": "gé", "ghẻ": "gẻ", "ghẽ": "gẽ", "ghẹ": "gẹ", "ghê": "gê", "ghề": "gề", "ghế": "gế", "ghể": "gể", "ghễ": "gễ", "ghệ": "gệ", "ngh": "\x80", "uyê": "\x96", "uyề": "\x97", "uyế": "\x98", "uyể": "\x99", "uyễ": "\x9a", "uyệ": "\x9b", "ng": "\x81", "ch": "\x82", "gh": "\x83", "nh": "\x84", "gi": "\x85", "ph": "\x86", "kh": "\x87", "th": "\x88", "tr": "\x89", "uy": "\x8a", "uỳ": "\x8b", "uý": "\x8c", "uỷ": "\x8d", "uỹ": "\x8e", "uỵ": "\x8f", "iê": "\x90", "iề": "\x91", "iế": "\x92", "iể": "\x93", "iễ": "\x94", "iệ": "\x95", "uô": "\x9c", "uồ": "\x9d", "uố": "\x9e", "uổ": "\x9f", "uỗ": "\xa0", "uộ": "\xa1", "ươ": "\xa2", "ườ": "\xa3", "ướ": "\xa4", "ưở": "\xa5", "ưỡ": "\xa6", "ượ": "\xa7", } def decode_string(x): for k, v in list(reversed(list(ENCODER.items()))): x = x.replace(v, k) return x test_dataset = load_dataset("common_voice", "vi", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese") model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", [decode_string(x) for x in processor.batch_decode(predicted_ids)]) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Vietnamese test data of Common Voice. 
```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re ENCODER = { "ia ": "iê ", "ìa ": "iề ", "ía ": "iế ", "ỉa ": "iể ", "ĩa ": "iễ ", "ịa ": "iệ ", "ya ": "yê ", "ỳa ": "yề ", "ýa ": "yế ", "ỷa ": "yể ", "ỹa ": "yễ ", "ỵa ": "yệ ", "ua ": "uô ", "ùa ": "uồ ", "úa ": "uố ", "ủa ": "uổ ", "ũa ": "uỗ ", "ụa ": "uộ ", "ưa ": "ươ ", "ừa ": "ườ ", "ứa ": "ướ ", "ửa ": "ưở ", "ữa ": "ưỡ ", "ựa ": "ượ ", "ke": "ce", "kè": "cè", "ké": "cé", "kẻ": "cẻ", "kẽ": "cẽ", "kẹ": "cẹ", "kê": "cê", "kề": "cề", "kế": "cế", "kể": "cể", "kễ": "cễ", "kệ": "cệ", "ki": "ci", "kì": "cì", "kí": "cí", "kỉ": "cỉ", "kĩ": "cĩ", "kị": "cị", "ky": "cy", "kỳ": "cỳ", "ký": "cý", "kỷ": "cỷ", "kỹ": "cỹ", "kỵ": "cỵ", "ghe": "ge", "ghè": "gè", "ghé": "gé", "ghẻ": "gẻ", "ghẽ": "gẽ", "ghẹ": "gẹ", "ghê": "gê", "ghề": "gề", "ghế": "gế", "ghể": "gể", "ghễ": "gễ", "ghệ": "gệ", "ngh": "\x80", "uyê": "\x96", "uyề": "\x97", "uyế": "\x98", "uyể": "\x99", "uyễ": "\x9a", "uyệ": "\x9b", "ng": "\x81", "ch": "\x82", "gh": "\x83", "nh": "\x84", "gi": "\x85", "ph": "\x86", "kh": "\x87", "th": "\x88", "tr": "\x89", "uy": "\x8a", "uỳ": "\x8b", "uý": "\x8c", "uỷ": "\x8d", "uỹ": "\x8e", "uỵ": "\x8f", "iê": "\x90", "iề": "\x91", "iế": "\x92", "iể": "\x93", "iễ": "\x94", "iệ": "\x95", "uô": "\x9c", "uồ": "\x9d", "uố": "\x9e", "uổ": "\x9f", "uỗ": "\xa0", "uộ": "\xa1", "ươ": "\xa2", "ườ": "\xa3", "ướ": "\xa4", "ưở": "\xa5", "ưỡ": "\xa6", "ượ": "\xa7", } def decode_string(x): for k, v in list(reversed(list(ENCODER.items()))): x = x.replace(v, k) return x test_dataset = load_dataset("common_voice", "vi", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese") model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese") model.to("cuda") chars_to_ignore_regex = '[\\\+\@\ǀ\,\?\.\!\-\;\:\"\“\%\‘\”\�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. # We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) # decode_string: We replace the encoded letter with the initial letters batch["pred_strings"] = [decode_string(x) for x in batch["pred_strings"]] return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 49.59 % ## Training The Common Voice `train`, `validation` and FOSD datasets and VIVOS datasets were used for training as well. The script used for training can be found [here](https://colab.research.google.com/drive/11pP4uVJj4SYZTzGjlCUtOHywlhYqs0cPx)
{"language": "vi", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice", {"FOSD": "https://data.mendeley.com/datasets/k9sxg2twv4/4"}, {"VIVOS": "https://ailab.hcmus.edu.vn/vivos"}], "metrics": ["wer"], "model-index": [{"name": "XLSR Wav2Vec2 Vietnamese by Nhut", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice vi", "type": "common_voice", "args": "vi"}, "metrics": [{"type": "wer", "value": 49.59, "name": "Test WER"}]}]}]}
Nhut/wav2vec2-large-xlsr-vietnamese
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "xlsr-fine-tuning-week", "vi", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
NibrasShami/DialopGPT-small-HarryPotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
keras
{}
Niciu/keras-dummy-functional-demo
null
[ "keras", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
keras
{}
Niciu/keras-dummy-model-mixin-demo
null
[ "keras", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
keras
{}
Niciu/keras-dummy-sequential-demo
null
[ "keras", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
This project was created for use with wav2vec.
{}
Niciu/testtest1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Niciu/wav2vec2-base-timit-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Niciu/wav2vec2-base-timit-demo-colab1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Niciu/wav2vec2-base-timit-demo-colab2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Niciu/wav2vec2-base-vivo-demo-colab1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Niciu/wav2vec2-large-xlsr-thai-demo
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Niciu/wav2vectest1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nick96/B-A
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# My Awesome Laffy
{"tags": ["conversational"]}
NickCavarretta/DialoGPT-small-laffy
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nickis/distilgpt2-finetuned-wikitext2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nickis/distilroberta-base-finetuned-data
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nickis/distilroberta-base-finetuned-wikitext2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nickolay/sevbot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4519 - Wer: 0.3375 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4351 | 4.0 | 500 | 1.2740 | 0.8259 | | 0.5828 | 8.0 | 1000 | 0.4276 | 0.4403 | | 0.2274 | 12.0 | 1500 | 0.4646 | 0.3739 | | 0.135 | 16.0 | 2000 | 0.4320 | 0.3662 | | 0.0962 | 20.0 | 2500 | 0.4831 | 0.3607 | | 0.0719 | 24.0 | 3000 | 0.4506 | 0.3463 | | 0.0556 | 28.0 | 3500 | 0.4519 | 0.3375 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
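The card is an auto-generated stub with no inference example. A minimal transcription sketch, assuming the standard `transformers` automatic-speech-recognition pipeline; the audio path is a placeholder for a 16 kHz mono WAV file:

```python
from transformers import pipeline

# Usage sketch (assumption, not from the card): transcribe an English speech clip.
asr = pipeline("automatic-speech-recognition", model="NicoGrageda/wav2vec2-base-timit-demo-colab")
result = asr("speech.wav")  # placeholder path; wav2vec2-base expects 16 kHz mono audio
print(result["text"])
```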
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
NicoGrageda/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
NicolasPeruchot/Biography
null
[ "transformers", "tf", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Squi
{"tags": ["conversational"]}
Nihwy/DialoSqui
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nikhil8800868912/wav2vec2-base-timit-demo-colab-new-ASR-wer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
NikhilKrishna/DialoGPT-medium-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NikhilRamesh/Fetch_Loc
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nikhilshandilya9/Unet
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# **-- EMODa --** ## BERT model for Danish multi-class emotion classification Classifies a Danish sentence into one of 6 different emotions: | Danish emotion | Ekman's emotion | | ----- | ----- | | 😞 **Afsky** | Disgust | | 😨 **Frygt** | Fear | | 😄 **Glæde** | Joy | | 😱 **Overraskelse** | Surprise | | 😢 **Tristhed** | Sadness | | 😠 **Vrede** | Anger | # How to use ```python from transformers import pipeline model_path = "NikolajMunch/danish-emotion-classification" classifier = pipeline("sentiment-analysis", model=model_path, tokenizer=model_path) prediction = classifier("Jeg er godt nok ked af at mine SMS'er er slettet") print(prediction) # [{'label': 'Tristhed', 'score': 0.9725030660629272}] ``` or ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("NikolajMunch/danish-emotion-classification") model = AutoModelForSequenceClassification.from_pretrained("NikolajMunch/danish-emotion-classification") ```
{"language": ["da"], "tags": ["sentiment", "emotion", "danish"], "widget": [{"text": "Hold da op! Kan det virkelig passe?"}]}
NikolajMunch/danish-emotion-classification
null
[ "transformers", "pytorch", "bert", "text-classification", "sentiment", "emotion", "danish", "da", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
NikolajW/BaselineThesis
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nilav/layoutlmv2-finetuned-funsd-test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
# AOT-GAN CelebA-HQ AOT-GAN is a model that can be used for image in-painting. The CelebA-HQ checkpoint is trained on synthetic human faces, which should make it suitable for touching up and restoring portraits. This model was generated using [AOT-GAN-for-Inpainting](https://github.com/researchmm/AOT-GAN-for-Inpainting), cited as ``` @inproceedings{yan2021agg, author = {Zeng, Yanhong and Fu, Jianlong and Chao, Hongyang and Guo, Baining}, title = {Aggregated Contextual Transformations for High-Resolution Image Inpainting}, booktitle = {Arxiv}, pages={-}, year = {2020} } ``` ## Dataset The CelebA-HQ dataset was created with this codebase: https://github.com/tkarras/progressive_growing_of_gans, owned by NVidia and licensed under Creative Commons Attribution-NonCommercial 4.0 International.
{"tags": ["face-recognition", "face-generation", "face-segmentation", "generative-adversarial-network"], "datasets": ["celeba-hq"], "metrics": ["L1", "PSNR", "SSIM", "FID"]}
NimaBoscarino/aot-gan-celebahq
null
[ "transformers", "pytorch", "face-recognition", "face-generation", "face-segmentation", "generative-adversarial-network", "dataset:celeba-hq", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
# AOT-GAN Places2 AOT-GAN is a model that can be used for image in-painting. The Places2 checkpoint is trained on a dataset which should make it suitable for touching up and restoring images of landscapes, buildings, and other natural and developed places. This model was generated using [AOT-GAN-for-Inpainting](https://github.com/researchmm/AOT-GAN-for-Inpainting), cited as ``` @inproceedings{yan2021agg, author = {Zeng, Yanhong and Fu, Jianlong and Chao, Hongyang and Guo, Baining}, title = {Aggregated Contextual Transformations for High-Resolution Image Inpainting}, booktitle = {Arxiv}, pages={-}, year = {2020} } ``` ## Dataset The Places2 dataset can be found here: http://places2.csail.mit.edu/download.html
{"tags": ["scene-recognition", "scene-generation", "generative-adversarial-network"], "datasets": ["places2"], "metrics": ["L1", "PSNR", "SSIM", "FID"]}
NimaBoscarino/aot-gan-places2
null
[ "transformers", "pytorch", "scene-recognition", "scene-generation", "generative-adversarial-network", "dataset:places2", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NimaFar/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
NinaR21/Albert_funny
null
[ "transformers", "tf", "albert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Ninja5000/DialoGPT-medium-HarryPotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# DialoGPT-medium-TWEWYJoshua Another not-so-good AI chatbot: Joshua from the game TWEWY (The World Ends With You). * Credits to Lynn's Devlab, who made the amazing tutorial.
{"tags": ["conversational"]}
Ninja5000/DialoGPT-medium-TWEWYJoshua
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# LOTR DialoGPT Model
{"tags": ["conversational"]}
Niphredil/DialoGPT-small-lotr
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nirmal/nlp_v1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
--- license: apache-2.0 --- ### Rick DialoGPT Model
{"tags": ["conversational"]}
Nisarg2701/DialoGPT-medium-Rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nive/xls-r-en-t1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nivedhan/WebOrders
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nix/model-1
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
# ELECTRA ## Introduction **ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset. Electra-base-vn is trained on more than 148 GB of text with a max length of 512. You can download the TensorFlow version at [Electra base TF version](https://drive.google.com/drive/folders/1hN0LiOlMfNDDQVo2bgEYHd03I-xXDLVr?usp=sharing). ### Contact information For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
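The card doesn't show how to load the checkpoint. A minimal feature-extraction sketch, assuming the repository ships a tokenizer compatible with the standard `transformers` auto classes; the example sentence is arbitrary:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Usage sketch (assumption, not from the card): encode a Vietnamese sentence with the ELECTRA encoder.
tokenizer = AutoTokenizer.from_pretrained("NlpHUST/electra-base-vn")
model = AutoModel.from_pretrained("NlpHUST/electra-base-vn")

inputs = tokenizer("Tôi là sinh viên trường Bách Khoa Hà Nội .", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```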
{}
NlpHUST/electra-base-vn
null
[ "transformers", "pytorch", "electra", "pretraining", "arxiv:1406.2661", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
NlpHUST/electra-legal-vi
null
[ "transformers", "pytorch", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# GPT-Neo-small for Vietnamese The first GPT for Vietnamese ## Model Description GPT-Neo-vi-small is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. ## Training data GPT-Neo-vi-small was trained on a large-scale news dataset created from news websites for the purpose of training this model. ### How to use This example generates a different sequence each time it's run: ```py from transformers import GPTNeoForCausalLM, GPT2Tokenizer model = GPTNeoForCausalLM.from_pretrained("NlpHUST/gpt-neo-vi-small") tokenizer = GPT2Tokenizer.from_pretrained("NlpHUST/gpt-neo-vi-small") prompt = "Ngay sau Tết Nguyên đán Tân Sửu, hiện tượng giá đất tăng tại nhiều địa phương. Thị trường nhộn nhịp, tạo ra những cơn sóng sốt đất khó tin khiến bộ ngành, địa phương đưa cảnh báo." input_ids = tokenizer(prompt, return_tensors="pt").input_ids gen_tokens = model.generate(input_ids, do_sample=True, temperature=1.0, max_length=1024) gen_text = tokenizer.batch_decode(gen_tokens)[0] print(gen_text) ``` ### Contact information For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
{"language": "vi", "tags": ["vi", "vietnamese", "text-generation", "gpt3", "lm", "nlp"], "datasets": ["vietnamese"], "widget": [{"text": "Vi\u1ec7t Nam l\u00e0 qu\u1ed1c gia c\u00f3"}], "pipeline_tag": "text-generation"}
NlpHUST/gpt-neo-vi-small
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "vi", "vietnamese", "gpt3", "lm", "nlp", "dataset:vietnamese", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# T5-EN-VI-BASE: Pretraining Text-To-Text Transfer Transformer for English-Vietnamese Translation # Dataset The *IWSLT'15 English-Vietnamese* data is used from the [Stanford NLP group](https://nlp.stanford.edu/projects/nmt/). For all experiments the corpus was split into training, development and test set: | Data set | Sentences | Download | | ----------- | --------- | -------- | | Training | 133,317 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/train-en-vi.tgz) or located in `data/train-en-vi.tgz` | | Development | 1,553 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/dev-2012-en-vi.tgz) or located in `data/dev-2012-en-vi.tgz` | | Test | 1,268 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/test-2013-en-vi.tgz) or located in `data/test-2013-en-vi.tgz` | ## Results Results on the test set: | Model | BLEU (Beam Search) | | ----- | ------------------ | | [Luong & Manning (2015)](https://nlp.stanford.edu/pubs/luong-manning-iwslt15.pdf) | 23.30 | | Sequence-to-sequence model with attention | 26.10 | | Neural Phrase-based Machine Translation [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 27.69 | | Neural Phrase-based Machine Translation + LM [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 28.07 | | t5-en-vi-small (pretraining, without training data) | **28.46** (cased) / **29.23** (uncased) | | t5-en-vi-small (fine-tuning with training data) | **32.38** (cased) / **33.19** (uncased) | | t5-en-vi-base (pretraining, without training data) | **29.66** (cased) / **30.37** (uncased) | #### Example Usage ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer if torch.cuda.is_available(): device = torch.device("cuda") print('There are %d GPU(s) available.' % torch.cuda.device_count()) print('We will use the GPU:', torch.cuda.get_device_name(0)) else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-en-vi-base") tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-en-vi-base") model.to(device) src = "In school , we spent a lot of time studying the history of Kim Il-Sung , but we never learned much about the outside world , except that America , South Korea , Japan are the enemies ." tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device) model.eval() summary_ids = model.generate( tokenized_text, max_length=128, num_beams=5, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True ) output = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(output) ``` #### Output ```text Ở trường, chúng tôi dành nhiều thời gian để nghiên cứu về lịch sử Kim Il-Sung, nhưng chúng tôi chưa bao giờ học được nhiều về thế giới bên ngoài, ngoại trừ Mỹ, Hàn Quốc, Nhật Bản là kẻ thù. ``` ### Contact information For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
{}
NlpHUST/t5-en-vi-base
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "arxiv:1706.05565", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# T5-EN-VI-SMALL: Pretraining Text-To-Text Transfer Transformer for English-Vietnamese Translation # Dataset The *IWSLT'15 English-Vietnamese* data is used from the [Stanford NLP group](https://nlp.stanford.edu/projects/nmt/). For all experiments the corpus was split into training, development and test set: | Data set | Sentences | Download | | ----------- | --------- | -------- | | Training | 133,317 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/train-en-vi.tgz) or located in `data/train-en-vi.tgz` | | Development | 1,553 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/dev-2012-en-vi.tgz) or located in `data/dev-2012-en-vi.tgz` | | Test | 1,268 | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/test-2013-en-vi.tgz) or located in `data/test-2013-en-vi.tgz` | ## Results Results on the test set: | Model | BLEU (Beam Search) | | ----- | ------------------ | | [Luong & Manning (2015)](https://nlp.stanford.edu/pubs/luong-manning-iwslt15.pdf) | 23.30 | | Sequence-to-sequence model with attention | 26.10 | | Neural Phrase-based Machine Translation [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 27.69 | | Neural Phrase-based Machine Translation + LM [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 28.07 | | t5-en-vi-small (pretraining, without training data) | **28.46** (cased) / **29.23** (uncased) | | t5-en-vi-small (fine-tuning with training data) | **32.38** (cased) / **33.19** (uncased) | #### Example Usage ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer if torch.cuda.is_available(): device = torch.device("cuda") print('There are %d GPU(s) available.' % torch.cuda.device_count()) print('We will use the GPU:', torch.cuda.get_device_name(0)) else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-en-vi-small") tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-en-vi-small") model.to(device) src = "In school , we spent a lot of time studying the history of Kim Il-Sung , but we never learned much about the outside world , except that America , South Korea , Japan are the enemies ." tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device) model.eval() summary_ids = model.generate( tokenized_text, max_length=128, num_beams=5, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True ) output = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(output) ``` #### Output ```text Ở trường, chúng tôi dành nhiều thời gian để nghiên cứu về lịch sử Kim Il-Sung, nhưng chúng tôi chưa bao giờ học được nhiều về thế giới bên ngoài, ngoại trừ Mỹ, Hàn Quốc, Nhật Bản là kẻ thù. ``` ### Contact information For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
{}
NlpHUST/t5-en-vi-small
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "arxiv:1706.05565", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# T5-SMALL-SUMMARIZATION :Pretraining Text-To-Text Transfer Transformer for Vietnamese Text Summarization #### Example Using ``` bash import torch from transformers import T5ForConditionalGeneration, T5Tokenizer import torch if torch.cuda.is_available(): device = torch.device("cuda") print('There are %d GPU(s) available.' % torch.cuda.device_count()) print('We will use the GPU:', torch.cuda.get_device_name(0)) else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-small-vi-summarization") tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-small-vi-summarization") model.to(device) src = "Theo BHXH Việt Nam, nhiều doanh nghiệp vẫn chỉ đóng BHXH cho người lao động theo mức lương. \\\\ Dù quy định từ 1/1/2018, tiền lương tháng đóng BHXH gồm mức lương và thêm khoản bổ sung khác. \\\\ BHXH Việt Nam vừa có báo cáo về tình hình thực hiện chính sách BHXH thời gian qua. \\\\ Theo đó, tình trạng nợ, trốn đóng BHXH, BHTN vẫn xảy ra ở hầu hết các tỉnh, thành. \\\\ Thống kê tới ngày 31/12/2020, tổng số nợ BHXH, BHYT, BHTN là hơn 13.500 tỷ đồng, \\\\ chiếm 3,35 % số phải thu, trong đó: Số nợ BHXH bắt buộc là hơn 8.600 tỷ đồng, \\\\ nợ BHTN là 335 tỷ đồng. Liên quan tới tiền lương đóng BHXH, báo cáo của \\\\ BHXH Việt Nam cho thấy: Nhiều doanh nghiệp vẫn chủ yếu xây dựng thang, \\\\ bảng lương để đóng BHXH bằng mức thấp nhất. Tức là bằng mức lương tối \\\\ thiểu vùng, cộng thêm 7 % đối với lao động đã qua đào tạo nghề và cộng \\\\ thêm 5 % hoặc 7 % đối với lao động làm nghề hoặc công việc nặng nhọc, \\\\ độc hại, nguy hiểm, đặc biệt nặng nhọc độc hại và nguy hiểm. Đối với \\\\ lao động giữ chức vụ, khoảng 80 % doanh nghiệp đã xây dựng thang, \\\\ bảng lương cụ thể theo chức danh. Đơn cử như với chức vụ giám đốc \\\\ sản xuất, giám đốc điều hành, trưởng phòng. Còn lại các doanh nghiệp \\\\ xây dựng đối với lao động giữ chức vụ theo thang lương, bảng lương \\\\ chuyên môn nghiệp vụ và bảng phụ cấp chức vụ, phụ cấp trách nhiệm. \\\\ Thống kê của BHXH Việt Nam cũng cho thấy, đa số doanh nghiệp đã đăng \\\\ ký đóng BHXH cho người lao động theo mức lương mà không có khoản bổ \\\\ sung khác. Mặc dù quy định từ ngày 1/1/2018, tiền lương tháng đóng BHXH \\\\ gồm mức lương và thêm khoản bổ sung khác." tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device) model.eval() summary_ids = model.generate( tokenized_text, max_length=256, num_beams=5, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True ) output = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(output) ``` #### Output ``` bash Nhiều doanh nghiệp vẫn chủ yếu xây dựng thang, bảng lương để đóng BHXH bằng mức thấp nhất. \\ Dù quy định từ 1/1/2018, tiền lương tháng đóng BHXH gồm mức lương và thêm khoản bổ sung khác. \\ Thống kê của BHXH Việt Nam cho thấy, nhiều doanh nghiệp vẫn chỉ đóng BHXH \\ cho người lao động theo mức lương mà không có khoản bổ sung khác. ``` ### Contact information For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
{}
NlpHUST/t5-small-vi-summarization
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
--- language: - vi tags: - t5 - seq2seq --- # Machine translation for Vietnamese ## Model Description T5-vi-en-base is a transformer model for Vietnamese machine translation designed using the T5 architecture. ## Training data T5-vi-en-base was trained on 4M sentence pairs (English, Vietnamese). ### How to use ```py from transformers import T5ForConditionalGeneration, T5Tokenizer import torch if torch.cuda.is_available(): device = torch.device("cuda") print('There are %d GPU(s) available.' % torch.cuda.device_count()) print('We will use the GPU:', torch.cuda.get_device_name(0)) else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-vi-en-base") tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-vi-en-base") model.to(device) src = "Theo lãnh đạo Sở Y tế, 3 người này không có triệu chứng sốt, ho, khó thở, đã được lấy mẫu xét nghiệm và cách ly tập trung." tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device) model.eval() summary_ids = model.generate( tokenized_text, max_length=256, num_beams=5, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True ) output = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(output) # Expected output: According to the head of the Department of Health, the three people had no symptoms of fever, cough, shortness of breath, were taken samples for testing and concentrated quarantine. ```
{}
NlpHUST/t5-vi-en-base
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
--- language: - vi tags: - t5 - seq2seq --- # Machine translation for Vietnamese ## Model Description T5-vi-en-small is a transformer model for Vietnamese machine translation designed using the T5 architecture. ## Training data T5-vi-en-small was trained on 4M sentence pairs (English, Vietnamese). ### How to use ```py from transformers import T5ForConditionalGeneration, T5Tokenizer import torch if torch.cuda.is_available(): device = torch.device("cuda") print('There are %d GPU(s) available.' % torch.cuda.device_count()) print('We will use the GPU:', torch.cuda.get_device_name(0)) else: print('No GPU available, using the CPU instead.') device = torch.device("cpu") model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-vi-en-small") tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-vi-en-small") model.to(device) src = "Indonesia phỏng đoán nguyên nhân tàu ngầm chở 53 người mất tích bí ẩn" tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device) model.eval() summary_ids = model.generate( tokenized_text, max_length=256, num_beams=5, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True ) output = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(output) # Expected output: Indonesia anticipates the cause of the submarine transporting 53 mysterious missing persons ```
{}
NlpHUST/t5-vi-en-small
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
NlpHUST/vi-electra-small
null
[ "transformers", "pytorch", "electra", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# BERT for Vietnamese trained on a news dataset of more than 20 GB Applied to the sentiment analysis task using [AIViVN's comments dataset](https://www.aivivn.com/contests/6). The model achieved 0.90268 on the public leaderboard (the winner's score is 0.90087). Bert4news is used in ViNLP (https://github.com/bino282/ViNLP), a Vietnamese toolkit for word segmentation and Named Entity Recognition. We use SentencePiece for word segmentation, basic BERT tokenization and the same config as BERT base with lowercase = False. You can download the trained model: - [tensorflow](https://drive.google.com/file/d/1X-sRDYf7moS_h61J3L79NkMVGHP-P-k5/view?usp=sharing). - [pytorch](https://drive.google.com/file/d/11aFSTpYIurn-oI2XpAmcCTccB_AonMOu/view?usp=sharing). Use with huggingface/transformers ```python import torch from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained("NlpHUST/vibert4news-base-cased") bert_model = BertModel.from_pretrained("NlpHUST/vibert4news-base-cased") line = "Tôi là sinh viên trường Bách Khoa Hà Nội ." input_id = tokenizer.encode(line, add_special_tokens=True) att_mask = [int(token_id > 0) for token_id in input_id] input_ids = torch.tensor([input_id]) att_masks = torch.tensor([att_mask]) with torch.no_grad(): features = bert_model(input_ids, att_masks) print(features) ``` # Vietnamese toolkit with BERT ViNLP is an annotation system for Vietnamese; it uses the pretrained [Bert4news](https://github.com/bino282/bert4news/) fine-tuned for Vietnamese NLP tasks, namely word segmentation and Named Entity Recognition (NER), and achieves high accuracy. ### Installation ```bash git clone https://github.com/bino282/ViNLP.git cd ViNLP python setup.py develop build ``` ### Test Segmentation The model achieved an F1 score of 0.984 on the VLSP 2013 dataset. |Model | F1 | |--------|-----------| | **BertVnTokenizer** | 98.40 | | **DongDu** | 96.90 | | **JvnSegmenter-Maxent** | 97.00 | | **JvnSegmenter-CRFs** | 97.06 | | **VnTokenizer** | 97.33 | | **UETSegmenter** | 97.87 | | **VnCoreNLP (i.e. RDRsegmenter)** | 97.90 | ```python from ViNLP import BertVnTokenizer tokenizer = BertVnTokenizer() sentences = tokenizer.split(["Tổng thống Donald Trump ký sắc lệnh cấm mọi giao dịch của Mỹ với ByteDance và Tecent - chủ sở hữu của 2 ứng dụng phổ biến TikTok và WeChat sau 45 ngày nữa."]) print(sentences[0]) ``` ``` Tổng_thống Donald_Trump ký sắc_lệnh cấm mọi giao_dịch của Mỹ với ByteDance và Tecent - chủ_sở_hữu của 2 ứng_dụng phổ_biến TikTok và WeChat sau 45 ngày nữa . ``` ### Test Named Entity Recognition The model achieved an F1 score of 0.786 on VLSP 2018 for all named entities, including nested entities. |Model | F1 | |--------|-----------| | **BertVnNer** | 78.60 | | **VNER Attentive Neural Network** | 77.52 | | **vietner CRF (ngrams + word shapes + cluster + w2v)** | 76.63 | | **ZA-NER BiLSTM** | 74.70 | ```python from ViNLP import BertVnNer bert_ner_model = BertVnNer() sentence = "Theo SCMP, báo cáo của CSIS với tên gọi Định hình Tương lai Chính sách của Mỹ với Trung Quốc cũng cho thấy sự ủng hộ tương đối rộng rãi của các chuyên gia về việc cấm Huawei, tập đoàn viễn thông khổng lồ của Trung Quốc" entities = bert_ner_model.annotate([sentence]) print(entities) ``` ``` [{'ORGANIZATION': ['SCMP', 'CSIS', 'Huawei'], 'LOCATION': ['Mỹ', 'Trung Quốc']}] ``` Run training with the base config: ```bash python train_pytorch.py \ --model_path=bert4news.pytorch \ --max_len=200 \ --batch_size=16 \ --epochs=6 \ --lr=2e-5 ``` ### Contact information For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
{"language": "vn"}
NlpHUST/vibert4news-base-cased
null
[ "transformers", "pytorch", "safetensors", "fill-mask", "vn", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Nlpxyz/firstnlp
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Hagrid DialoGPT medium model
{"tags": ["conversational"]}
NoLawz/DialoGPT-medium-hagrid
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT medium model
{"tags": ["conversational"]}
NoLawz/DialoGPT-medium-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# SpongeBob DialoGPT medium model
{"tags": ["conversational"]}
NoLawz/DialoGPT-medium-spongebob
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Noah23/jhgyfevhudkmls
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# NLGP docstring model The NLGP docstring model was introduced in the paper [Natural Language-Guided Programming](https://arxiv.org/abs/2108.05198). The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language **intent** in a certain code **context** (see the example below). Also see the [NLGP natural](https://huggingface.co/Nokia/nlgp-natural) model. This work was carried out by a research team in Nokia Bell Labs. **Context** ```py import matplotlib.pyplot as plt values = [1, 2, 3, 4] labels = ["a", "b", "c", "d"] ``` **Intent** ```py # plot a bar chart ``` **Prediction** ```py plt.bar(labels, values) plt.show() ``` ## Usage ```py import re from transformers import GPT2LMHeadModel, GPT2TokenizerFast # load the model tok = GPT2TokenizerFast.from_pretrained("Nokia/nlgp-docstring") model = GPT2LMHeadModel.from_pretrained("Nokia/nlgp-docstring") # preprocessing functions num_spaces = [2, 4, 6, 8, 10, 12, 14, 16, 18] def preprocess(context, query): """ Encodes context + query as a single string and replaces whitespace with special tokens <|2space|>, <|4space|>, ... """ input_str = f"{context}\n{query} <|endofcomment|>\n" indentation_symbols = {n: f"<|{n}space|>" for n in num_spaces} m = re.match("^[ ]+", input_str) if not m: return input_str leading_whitespace = m.group(0) N = len(leading_whitespace) for n in num_spaces: leading_whitespace = leading_whitespace.replace(n * " ", indentation_symbols[n]) return leading_whitespace + input_str[N:] detokenize_pattern = re.compile(fr"<\|(\d+)space\|>") def postprocess(output): output = output.split("<|cell|>")[0] def insert_space(m): num_spaces = int(m.group(1)) return num_spaces * " " return detokenize_pattern.sub(insert_space, output) # inference code_context = """ import matplotlib.pyplot as plt values = [1, 2, 3, 4] labels = ["a", "b", "c", "d"] """ query = "# plot a bar chart" input_str = preprocess(code_context, query) input_ids = tok(input_str, return_tensors="pt").input_ids max_length = 150 # don't generate output longer than this length total_max_length = min(1024, input_ids.shape[-1] + max_length) # total = input + output, capped at the model's 1024-token context input_and_output = model.generate( input_ids=input_ids, max_length=total_max_length, min_length=10, do_sample=False, num_beams=4, early_stopping=True, eos_token_id=tok.encode("<|cell|>")[0] ) output = input_and_output[:, input_ids.shape[-1]:] # remove the tokens that correspond to the input_str output_str = tok.decode(output[0]) postprocess(output_str) ``` ## License and copyright Copyright 2021 Nokia Licensed under the Apache License 2.0 SPDX-License-Identifier: Apache-2.0
{"language": ["en", "code"], "license": "apache-2.0", "tags": ["code completion", "code generation"]}
Nokia/nlgp-docstring
null
[ "transformers", "pytorch", "gpt2", "text-generation", "code completion", "code generation", "en", "code", "arxiv:2108.05198", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# NLGP natural model The NLGP natural model was introduced in the paper [Natural Language-Guided Programming](https://arxiv.org/abs/2108.05198). The model was trained on a collection of Jupyter notebooks and can be used to synthesize Python code that addresses a natural language **intent** in a certain code **context** (see the example below). This work was carried out by a research team in Nokia Bell Labs. **Context** ```py import matplotlib.pyplot as plt values = [1, 2, 3, 4] labels = ["a", "b", "c", "d"] ``` **Intent** ```py # plot a bar chart ``` **Prediction** ```py plt.bar(labels, values) plt.show() ``` ## Usage ```py import re from transformers import GPT2LMHeadModel, GPT2TokenizerFast # load the model tok = GPT2TokenizerFast.from_pretrained("Nokia/nlgp-natural") model = GPT2LMHeadModel.from_pretrained("Nokia/nlgp-natural") # preprocessing functions num_spaces = [2, 4, 6, 8, 10, 12, 14, 16, 18] def preprocess(context, query): """ Encodes context + query as a single string and replaces whitespace with special tokens <|2space|>, <|4space|>, ... """ input_str = f"{context}\n{query} <|endofcomment|>\n" indentation_symbols = {n: f"<|{n}space|>" for n in num_spaces} m = re.match("^[ ]+", input_str) if not m: return input_str leading_whitespace = m.group(0) N = len(leading_whitespace) for n in num_spaces: leading_whitespace = leading_whitespace.replace(n * " ", indentation_symbols[n]) return leading_whitespace + input_str[N:] detokenize_pattern = re.compile(fr"<\|(\d+)space\|>") def postprocess(output): output = output.split("<|cell|>")[0] def insert_space(m): num_spaces = int(m.group(1)) return num_spaces * " " return detokenize_pattern.sub(insert_space, output) # inference code_context = """ import matplotlib.pyplot as plt values = [1, 2, 3, 4] labels = ["a", "b", "c", "d"] """ query = "# plot a bar chart" input_str = preprocess(code_context, query) input_ids = tok(input_str, return_tensors="pt").input_ids max_length = 150 # don't generate output longer than this length total_max_length = min(1024, input_ids.shape[-1] + max_length) # total = input + output, capped at the model's 1024-token context input_and_output = model.generate( input_ids=input_ids, max_length=total_max_length, min_length=10, do_sample=False, num_beams=4, early_stopping=True, eos_token_id=tok.encode("<|cell|>")[0] ) output = input_and_output[:, input_ids.shape[-1]:] # remove the tokens that correspond to the input_str output_str = tok.decode(output[0]) postprocess(output_str) ``` ## License and copyright Copyright 2021 Nokia Licensed under the Apache License 2.0 SPDX-License-Identifier: Apache-2.0
{"language": ["en", "code"], "license": "apache-2.0", "tags": ["code completion", "code generation"]}
Nokia/nlgp-natural
null
[ "transformers", "pytorch", "gpt2", "text-generation", "code completion", "code generation", "en", "code", "arxiv:2108.05198", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Noman/layoutlmv2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Noman/model_name
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
Nomi97/Chatbot_QA
null
[ "transformers", "pytorch", "longformer", "question-answering", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Noobanand69420/Octane
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NoodleOnHuggingFace/AITestModel-small-joshua
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Noodlezs/Monica
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Noremac/b
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NoriZ/semval-finetune
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# Wav2vec2 German Model This model is wav2vec2-large-xlsr-53 fine-tuned on the German CommonVoice dataset. It achieves a WER of 11.26% on the full test dataset. It was basically trained with the code provided by [Max Idahl](https://huggingface.co/maxidl/wav2vec2-large-xlsr-german) with small adjustments in data preprocessing and training parameters. You can use it to transcribe your own files with the following code. Please note that your input file must be a *.wav file, sampled at 16 kHz and single channel. To convert an audio file using ffmpeg use: "ffmpeg -i input.wav -ar 16000 -ac 1 output.wav". The transcription process is very memory-intensive (around 10 GB per 10 seconds of audio). If the script ends with "Killed", it means the Python interpreter ran out of memory. In this case, try with a shorter audio file. ```python # !pip3 install transformers torch soundfile import soundfile as sf import torch from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer # load pretrained model tokenizer = Wav2Vec2Tokenizer.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german") model = Wav2Vec2ForCTC.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german") # load audio audio_input, _ = sf.read("/path/to/your/audio.wav") # transcribe input_values = tokenizer(audio_input, return_tensors="pt").input_values logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) transcription = tokenizer.batch_decode(predicted_ids)[0] print(str(transcription)) ``` To evaluate the model on the full CommonVoice test dataset, run this script: ```python import re import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "de", split="test") # use "test[:1%]" for 1% sample wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german") model = Wav2Vec2ForCTC.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german") model.to("cuda") chars_to_ignore_regex = '[\\\\,\\\\?\\\\.\\\\!\\\\-\\\\;\\\\:\\\\"\\\\“]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=4) # batch_size=8 -> requires ~14.5GB GPU memory # Chunked version, see https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/5: import jiwer def chunked_wer(targets, predictions, chunk_size=None): if chunk_size is None: return jiwer.wer(targets, predictions) start = 0 end = chunk_size H, S, D, I = 0, 0, 0, 0 while start < len(targets): chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end]) H = H + chunk_metrics["hits"] S = S + chunk_metrics["substitutions"] D = D + chunk_metrics["deletions"] I = I + chunk_metrics["insertions"] start += chunk_size end += chunk_size return float(S + D + I) / float(H + S + D) print("Total (chunk_size=1000), WER: {:2f}".format(100 * chunked_wer(result["pred_strings"], result["sentence"], chunk_size=1000))) ``` Output: Total (chunk_size=1000), WER: 11.256522
{}
Noricum/wav2vec2-large-xlsr-53-german
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Norimoji/DialoGPT-medium-FF7
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# distilgpt2-base-pretrained-he A tiny GPT2-based Hebrew text generation model, initially trained on a TPUv3-8 which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program, and then further fine-tuned on a GPU. ## Dataset ### oscar (unshuffled deduplicated he) - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he) The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ### CC-100 (he) - [HomePage](https://data.statmt.org/cc-100/) This corpus comprises monolingual data for 100+ languages and also includes data for romanized languages. It was constructed using the URLs and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository. ### Misc * Hebrew Twitter * Wikipedia * Various other sources ## Training * Done on a TPUv3-8 VM using [Huggingface's clm-flax example script](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py) <BR> * I have made a list of items which might make it easier for others to use this script. The list was posted to [This discussion forum](https://discuss.huggingface.co/t/ideas-for-beginner-friendlier-tpu-vm-clm-training/8351) * Further training was performed on a GPU ## Usage #### Simple usage sample code ```python from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline def main(): model_name="Norod78/distilgpt2-base-pretrained-he" prompt_text = "שלום, קוראים לי" generated_max_length = 192 print("Loading model...") model = AutoModelForCausalLM.from_pretrained(model_name) print('Loading Tokenizer...') tokenizer = AutoTokenizer.from_pretrained(model_name) text_generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer) print("Generating text...") result = text_generator(prompt_text, num_return_sequences=1, batch_size=1, do_sample=True, top_k=40, top_p=0.92, temperature = 1, repetition_penalty=5.0, max_length = generated_max_length) print("result = " + str(result)) if __name__ == '__main__': main() ```
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05d4\u05d0\u05d9\u05e9 \u05d4\u05d0\u05d7\u05e8\u05d5\u05df \u05e2\u05dc\u05d9 \u05d0\u05d3\u05de\u05d5\u05ea \u05d9\u05e9\u05d1 \u05dc\u05d1\u05d3 \u05d1\u05d7\u05d3\u05e8\u05d5 \u05db\u05e9\u05dc\u05e4\u05ea\u05e2 \u05e0\u05e9\u05de\u05e2\u05d4 \u05e0\u05e7\u05d9\u05e9\u05d4"}, {"text": "\u05e9\u05dc\u05d5\u05dd, \u05e7\u05e8\u05d5\u05d0\u05d9\u05dd \u05dc\u05d9"}, {"text": "\u05d4\u05d0\u05e8\u05d9 \u05e4\u05d5\u05d8\u05e8 \u05d7\u05d9\u05d9\u05da \u05d7\u05d9\u05d5\u05da \u05e0\u05d1\u05d5\u05da"}, {"text": "\u05d4\u05d7\u05ea\u05d5\u05dc \u05e9\u05dc\u05da \u05de\u05d0\u05d5\u05d3 \u05d7\u05de\u05d5\u05d3 \u05d5"}]}
Norod78/distilgpt2-base-pretrained-he
null
[ "transformers", "pytorch", "tf", "jax", "coreml", "onnx", "safetensors", "gpt2", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
Norod78/english-sienfeld-distilgpt2
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# hebrew-bad_wiki-gpt_neo-tiny ## Table of Contents - [Model Details](#model-details) - [Uses](#uses) - [Risks, Limitations and Biases](#risks-limitations-and-biases) - [Training](#training) - [Evaluation](#evaluation) - [Environmental Impact](#environmental-impact) - [How to Get Started With the Model](#how-to-get-started-with-the-model) ## Model Details **Model Description:** The model developer notes that the model is > Hebrew nonsense generation model which produces really bad wiki-abstract text. - **Developed by:** [Doron Adler](https://github.com/Norod) - **Model Type:** Text Generation - **Language(s):** Hebrew - **License:** MIT - **Resources for more information:** - [GitHub Repo](https://github.com/Norod/hebrew-gpt_neo) - [HuggingFace Space](https://huggingface.co/spaces/Norod78/Hebrew-GPT-Neo-Small) ## Uses #### Direct Use This model can be used for text generation. #### Misuse and Out-of-scope Use ## Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). ## Training #### Training Data [Hebrew Wikipedia Dump](https://dumps.wikimedia.org/hewiki/latest/) (hewiki abstract) from May 2020 #### Training Procedure This model was fine-tuned on [hebrew-gpt_neo-tiny](https://huggingface.co/Norod78/hebrew-gpt_neo-tiny), which was previously trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Fine-tuning on the wiki-abstract text was done using [@minimaxir](https://twitter.com/minimaxir)'s [aitextgen](https://github.com/minimaxir/aitextgen). ## Evaluation #### Configs Model configs for hebrew-gpt_neo-tiny are available on the [hebrew-gpt_neo model github](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-tiny/configs) * **Activation Function:** gelu * **Number_Head:** 12 * **Number_Vocab:** 50257 * **Train batch size:** 250 * **Eval batch size:** 64 * **Predict batch size:** 1 ## Environmental Impact Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/pdf/2105.09680.pdf). - **Hardware Type:** [More information needed] - **Hours used:** Unknown - **Cloud Provider:** GCP tpu-v8s - **Compute Region:** europe-west4 - **Carbon Emitted:** [More information needed] ## How to Get Started With the Model A Google Colab Notebook is also available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-tiny/Norod78_hebrew_gpt_neo_tiny_Colab.ipynb) ``` from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny") ```
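The snippet above only loads the tokenizer and model; below is a minimal generation sketch. The prompt is taken from this card's widget examples, while the sampling settings (max_length, top_k, top_p) are illustrative assumptions rather than values recommended by the model author.

```python
# Minimal generation sketch; sampling settings are assumptions, not author-recommended values
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny")

prompt = "מתמטיקה:"  # one of the widget examples for this model
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(
    input_ids,
    do_sample=True,                       # sample instead of greedy decoding
    max_length=128,                       # assumed output cap
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,  # GPT-Neo has no pad token; reuse EOS
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```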
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05de\u05ea\u05de\u05d8\u05d9\u05e7\u05d4:"}, {"text": "\u05e2\u05dc\u05d9\u05d9\u05ea \u05d4\u05de\u05db\u05d5\u05e0\u05d5\u05ea"}, {"text": "\u05d5\u05d9\u05e7\u05d9\u05e4\u05d3\u05d9\u05d4 \u05d4\u05e2\u05d1\u05e8\u05d9\u05ea"}, {"text": "\u05d4\u05d0\u05d9\u05e8\u05d5\u05d5\u05d9\u05d6\u05d9\u05d5\u05df \u05d4\u05d5\u05d0"}, {"text": "\u05d3\u05d5\u05d3 \u05d1\u05df-\u05d2\u05d5\u05e8\u05d9\u05d5\u05df \u05d4\u05d9\u05d4"}]}
Norod78/hebrew-bad_wiki-gpt_neo-tiny
null
[ "transformers", "pytorch", "coreml", "safetensors", "gpt_neo", "text-generation", "he", "arxiv:1910.09700", "arxiv:2105.09680", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# hebrew-gpt_neo-small Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). It was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program. ## Datasets 1. An assortment of various Hebrew corpuses - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ) 2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he) The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. 3. CC100-Hebrew Dataset [Homepage](https://metatext.io/datasets/cc100-hebrew) Created by Conneau & Wenzek et al. in 2020, CC100-Hebrew is one of the 100 monolingual corpora processed from the January-December 2018 Common Crawl snapshots from the CC-Net repository. The size of this Hebrew corpus is 6.1 GB. ## Training Config Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-small/configs) <BR> ## Usage ### Google Colab Notebook Available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-small/Norod78_hebrew_gpt_neo_small_Colab.ipynb) <BR> #### Simple usage sample code ```python !pip install tokenizers==0.10.2 transformers==4.6.0 from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-small") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-small", pad_token_id=tokenizer.eos_token_id) prompt_text = "אני אוהב שוקולד ועוגות" max_len = 512 sample_output_num = 3 seed = 1000 import numpy as np import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count() print(f"device: {device}, n_gpu: {n_gpu}") np.random.seed(seed) torch.manual_seed(seed) if n_gpu > 0: torch.cuda.manual_seed_all(seed) model.to(device) encoded_prompt = tokenizer.encode( prompt_text, add_special_tokens=False, return_tensors="pt") encoded_prompt = encoded_prompt.to(device) if encoded_prompt.size()[-1] == 0: input_ids = None else: input_ids = encoded_prompt print("input_ids = " + str(input_ids)) if input_ids is not None: max_len += len(encoded_prompt[0]) if max_len > 2048: max_len = 2048 print("Updated max_len = " + str(max_len)) stop_token = "<|endoftext|>" new_lines = "\n\n\n" sample_outputs = model.generate( input_ids, do_sample=True, max_length=max_len, top_k=50, top_p=0.95, num_return_sequences=sample_output_num ) print(100 * '-' + "\n\t\tOutput\n" + 100 * '-') for i, sample_output in enumerate(sample_outputs): text = tokenizer.decode(sample_output, skip_special_tokens=True) # Remove all text after the stop token text = text[: text.find(stop_token) if stop_token else None] # Remove all text after 3 newlines text = text[: text.find(new_lines) if new_lines else None] print("\n{}: {}".format(i, text)) print("\n" + 100 * '-') ```
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05e2\u05d5\u05d3 \u05d1\u05d9\u05de\u05d9 \u05e7\u05d3\u05dd"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d3\u05d5\u05e8\u05d5\u05df \u05d5\u05d0\u05e0\u05d9 \u05de\u05e2\u05d5\u05e0\u05d9\u05d9\u05df \u05dc"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d0\u05d9\u05e6\u05d9\u05e7 \u05d5\u05d0\u05e0\u05d9 \u05d7\u05d5\u05e9\u05d1 \u05e9"}, {"text": "\u05d4\u05d7\u05ea\u05d5\u05dc \u05e9\u05dc\u05da \u05de\u05d0\u05d5\u05d3 \u05d7\u05de\u05d5\u05d3 \u05d5"}]}
Norod78/hebrew-gpt_neo-small
null
[ "transformers", "pytorch", "jax", "onnx", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# hebrew-gpt_neo-tiny Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). It was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program. ## Datasets 1. An assortment of various Hebrew corpuses - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ) 2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he) The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. ## Training Config Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-tiny/configs) <BR> ## Usage ### Google Colab Notebook Available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-tiny/Norod78_hebrew_gpt_neo_tiny_Colab.ipynb) <BR> #### Simple usage sample code ```python !pip install tokenizers==0.10.2 transformers==4.6.0 from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-tiny") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-tiny", pad_token_id=tokenizer.eos_token_id) prompt_text = "אני אוהב שוקולד ועוגות" max_len = 512 sample_output_num = 3 seed = 1000 import numpy as np import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count() print(f"device: {device}, n_gpu: {n_gpu}") np.random.seed(seed) torch.manual_seed(seed) if n_gpu > 0: torch.cuda.manual_seed_all(seed) model.to(device) encoded_prompt = tokenizer.encode( prompt_text, add_special_tokens=False, return_tensors="pt") encoded_prompt = encoded_prompt.to(device) if encoded_prompt.size()[-1] == 0: input_ids = None else: input_ids = encoded_prompt print("input_ids = " + str(input_ids)) if input_ids is not None: max_len += len(encoded_prompt[0]) if max_len > 1024: max_len = 1024 print("Updated max_len = " + str(max_len)) stop_token = "<|endoftext|>" new_lines = "\n\n\n" sample_outputs = model.generate( input_ids, do_sample=True, max_length=max_len, top_k=50, top_p=0.95, num_return_sequences=sample_output_num ) print(100 * '-' + "\n\t\tOutput\n" + 100 * '-') for i, sample_output in enumerate(sample_outputs): text = tokenizer.decode(sample_output, skip_special_tokens=True) # Remove all text after the stop token text = text[: text.find(stop_token) if stop_token else None] # Remove all text after 3 newlines text = text[: text.find(new_lines) if new_lines else None] print("\n{}: {}".format(i, text)) print("\n" + 100 * '-') ```
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05e2\u05d5\u05d3 \u05d1\u05d9\u05de\u05d9 \u05e7\u05d3\u05dd"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d3\u05d5\u05e8\u05d5\u05df \u05d5\u05d0\u05e0\u05d9 \u05de\u05e2\u05d5\u05e0\u05d9\u05d9\u05df \u05dc"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d0\u05d9\u05e6\u05d9\u05e7 \u05d5\u05d0\u05e0\u05d9 \u05d7\u05d5\u05e9\u05d1 \u05e9"}, {"text": "\u05d4\u05d7\u05ea\u05d5\u05dc \u05e9\u05dc\u05da \u05de\u05d0\u05d5\u05d3 \u05d7\u05de\u05d5\u05d3 \u05d5"}]}
Norod78/hebrew-gpt_neo-tiny
null
[ "transformers", "pytorch", "jax", "onnx", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# hebrew-gpt_neo-xl-poetry Hebrew poetry text generation model, fine-tuned on [hebrew-gpt_neo-xl](https://huggingface.co/Norod78/hebrew-gpt_neo-xl). ## Datasets An assortment of various Hebrew books, magazines and poetry corpora ## Training Config Similar to [this one](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-xl/configs) <BR> ## Usage ### Google Colab Notebook Available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-xl/Norod78_hebrew_gpt_neo_xl_Colab.ipynb) <BR> #### Simple usage sample code ```python !pip install tokenizers==0.10.3 transformers==4.8.0 from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-xl-poetry") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-xl-poetry", pad_token_id=tokenizer.eos_token_id) prompt_text = "אני אוהב שוקולד ועוגות" max_len = 512 sample_output_num = 3 seed = 1000 import numpy as np import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count() print(f"device: {device}, n_gpu: {n_gpu}") np.random.seed(seed) torch.manual_seed(seed) if n_gpu > 0: torch.cuda.manual_seed_all(seed) model.to(device) encoded_prompt = tokenizer.encode( prompt_text, add_special_tokens=False, return_tensors="pt") encoded_prompt = encoded_prompt.to(device) if encoded_prompt.size()[-1] == 0: input_ids = None else: input_ids = encoded_prompt print("input_ids = " + str(input_ids)) if input_ids is not None: max_len += len(encoded_prompt[0]) if max_len > 2048: max_len = 2048 print("Updated max_len = " + str(max_len)) stop_token = "<|endoftext|>" new_lines = "\n\n\n" sample_outputs = model.generate( input_ids, do_sample=True, max_length=max_len, top_k=50, top_p=0.95, num_return_sequences=sample_output_num ) print(100 * '-' + "\n\t\tOutput\n" + 100 * '-') for i, sample_output in enumerate(sample_outputs): text = tokenizer.decode(sample_output, skip_special_tokens=True) # Remove all text after the stop token text = text[: text.find(stop_token) if stop_token else None] # Remove all text after 3 newlines text = text[: text.find(new_lines) if new_lines else None] print("\n{}: {}".format(i, text)) print("\n" + 100 * '-') ```
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05e2\u05d5\u05d3 \u05d1\u05d9\u05de\u05d9 \u05e7\u05d3\u05dd"}, {"text": "\u05ea\u05e8\u05d9\u05e1\u05e8 \u05de\u05db\u05e9\u05e4\u05d5\u05ea \u05e1\u05d2"}, {"text": "\n\n\u05d4\u05d0\u05d9\u05e9 \u05d4\u05d0\u05d7\u05e8\u05d5\u05df \u05d1\u05e2\u05d5\u05dc\u05dd /"}, {"text": "\u05e4\u05e2\u05dd \u05d0\u05d7\u05ea, \u05dc\u05e4\u05e0\u05d9 \u05e9\u05e0\u05d9\u05dd \u05e8\u05d1\u05d5\u05ea"}, {"text": "\u05d4\u05e8\u05de\u05d9\u05d5\u05e0\u05d9 \u05d4\u05e1\u05ea\u05d9\u05e8\u05d4 \u05d0\u05ea"}, {"text": "\u05dc\u05e4\u05ea\u05e2, \u05d0\u05d5\u05e8 \u05d9\u05e8\u05d5\u05e7"}]}
Norod78/hebrew-gpt_neo-xl-poetry
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# hebrew-gpt_neo-xl Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). It was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program. ## Datasets 1. An assortment of various Hebrew corpuses - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ) 2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he) The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. 3. CC100-Hebrew Dataset [Homepage](https://metatext.io/datasets/cc100-hebrew) Created by Conneau & Wenzek et al. in 2020, CC100-Hebrew is one of the 100 monolingual corpora processed from the January-December 2018 Common Crawl snapshots from the CC-Net repository. The size of this Hebrew corpus is 6.1 GB. ## Training Config Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-xl/configs) <BR> ## Usage ### Google Colab Notebook Available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-xl/Norod78_hebrew_gpt_neo_xl_Colab.ipynb) <BR> #### Simple usage sample code ```python !pip install tokenizers==0.10.3 transformers==4.8.0 from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-xl") model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-xl", pad_token_id=tokenizer.eos_token_id) prompt_text = "אני אוהב שוקולד ועוגות" max_len = 512 sample_output_num = 3 seed = 1000 import numpy as np import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") n_gpu = 0 if torch.cuda.is_available()==False else torch.cuda.device_count() print(f"device: {device}, n_gpu: {n_gpu}") np.random.seed(seed) torch.manual_seed(seed) if n_gpu > 0: torch.cuda.manual_seed_all(seed) model.to(device) encoded_prompt = tokenizer.encode( prompt_text, add_special_tokens=False, return_tensors="pt") encoded_prompt = encoded_prompt.to(device) if encoded_prompt.size()[-1] == 0: input_ids = None else: input_ids = encoded_prompt print("input_ids = " + str(input_ids)) if input_ids is not None: max_len += len(encoded_prompt[0]) if max_len > 2048: max_len = 2048 print("Updated max_len = " + str(max_len)) stop_token = "<|endoftext|>" new_lines = "\n\n\n" sample_outputs = model.generate( input_ids, do_sample=True, max_length=max_len, top_k=50, top_p=0.95, num_return_sequences=sample_output_num ) print(100 * '-' + "\n\t\tOutput\n" + 100 * '-') for i, sample_output in enumerate(sample_outputs): text = tokenizer.decode(sample_output, skip_special_tokens=True) # Remove all text after the stop token text = text[: text.find(stop_token) if stop_token else None] # Remove all text after 3 newlines text = text[: text.find(new_lines) if new_lines else None] print("\n{}: {}".format(i, text)) print("\n" + 100 * '-') ```
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05e2\u05d5\u05d3 \u05d1\u05d9\u05de\u05d9 \u05e7\u05d3\u05dd"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d3\u05d5\u05e8\u05d5\u05df \u05d5\u05d0\u05e0\u05d9 \u05de\u05e2\u05d5\u05e0\u05d9\u05d9\u05df \u05dc"}, {"text": "\u05e7\u05d5\u05e8\u05d0\u05d9\u05dd \u05dc\u05d9 \u05d0\u05d9\u05e6\u05d9\u05e7 \u05d5\u05d0\u05e0\u05d9 \u05d7\u05d5\u05e9\u05d1 \u05e9"}, {"text": "\u05d4\u05d7\u05ea\u05d5\u05dc \u05e9\u05dc\u05da \u05de\u05d0\u05d5\u05d3 \u05d7\u05de\u05d5\u05d3 \u05d5"}, {"text": "\u05d5\u05d1\u05d3\u05e8\u05da \u05e8\u05d0\u05d9\u05e0\u05d5 \u05e9\u05d4\u05d2\u05df"}]}
Norod78/hebrew-gpt_neo-xl
null
[ "transformers", "pytorch", "jax", "onnx", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# hebrew_poetry-gpt_neo-small Hebrew poetry text generation model, fine-tuned on [hebrew-gpt_neo-small](https://huggingface.co/Norod78/hebrew-gpt_neo-small), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). Fine-tuning was done using [@minimaxir](https://twitter.com/minimaxir)'s [aitextgen](https://github.com/minimaxir/aitextgen). ## Datasets 1. Text from [New stage](http://stage.co.il/) 2. A dataset containing Hebrew lyrics
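Since this card does not include sample code, here is a minimal usage sketch that assumes the same interface as the base hebrew-gpt_neo-small model documented above; the prompt is one of this card's widget examples and the sampling settings are assumptions.

```python
# Minimal usage sketch, assuming the same generate() pattern as the base hebrew-gpt_neo-small card
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew_poetry-gpt_neo-small")
model = AutoModelForCausalLM.from_pretrained(
    "Norod78/hebrew_poetry-gpt_neo-small", pad_token_id=tokenizer.eos_token_id)

prompt = "פעם אחת לפני שנ"  # widget example from this card
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
sample_outputs = model.generate(
    input_ids, do_sample=True, max_length=256, top_k=50, top_p=0.95, num_return_sequences=1)
print(tokenizer.decode(sample_outputs[0], skip_special_tokens=True))
```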
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05e4\u05e2\u05dd \u05d0\u05d7\u05ea \u05dc\u05e4\u05e0\u05d9 \u05e9\u05e0"}, {"text": "\u05d4\u05d9\u05dd \u05db\u05d7\u05d5\u05dc \u05d5\u05d0\u05e0\u05d9 \u05d7"}, {"text": "\u05e9\u05dd \u05d4\u05d9\u05e6\u05d9\u05e8\u05d4:"}, {"text": "\u05db\u05e9\u05d4\u05de\u05db\u05d5\u05e0\u05d5\u05ea"}]}
Norod78/hebrew_poetry-gpt_neo-small
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# hebrew_stories-gpt_neo-small Hebrew story-text generation model, fine-tuned on [hebrew-gpt_neo-small](https://huggingface.co/Norod78/hebrew-gpt_neo-small), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). ## Dataset Text from various Hebrew books
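This card also ships without sample code; a minimal sketch using the text-generation pipeline is shown below, with the prompt taken from the card's widget examples and the generation length chosen as an assumption.

```python
# Minimal pipeline sketch; max_length and sampling values are assumed, not author-recommended
from transformers import pipeline

story_generator = pipeline("text-generation", model="Norod78/hebrew_stories-gpt_neo-small")
prompt = "תריסר מכשפות סג"  # widget example from this card
print(story_generator(prompt, do_sample=True, max_length=192, top_p=0.95)[0]["generated_text"])
```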
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "\u05ea\u05e8\u05d9\u05e1\u05e8 \u05de\u05db\u05e9\u05e4\u05d5\u05ea \u05e1\u05d2"}, {"text": "\n\n\u05d4\u05d0\u05d9\u05e9 \u05d4\u05d0\u05d7\u05e8\u05d5\u05df \u05d1\u05e2\u05d5\u05dc\u05dd /"}, {"text": "\u05e4\u05e2\u05dd \u05d0\u05d7\u05ea, \u05dc\u05e4\u05e0\u05d9 \u05e9\u05e0\u05d9\u05dd \u05e8\u05d1\u05d5\u05ea"}, {"text": "\u05d4\u05e8\u05de\u05d9\u05d5\u05e0\u05d9 \u05d4\u05e1\u05ea\u05d9\u05e8\u05d4 \u05d0\u05ea"}, {"text": "\u05dc\u05e4\u05ea\u05e2, \u05d0\u05d5\u05e8 \u05d9\u05e8\u05d5\u05e7"}]}
Norod78/hebrew_stories-gpt_neo-small
null
[ "transformers", "pytorch", "jax", "safetensors", "gpt_neo", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# hewiki-articles-distilGPT2py-il ## A tiny GPT2 model for generating Hebrew text A distilGPT2-sized model. <br> Training data was hewiki-20200701-pages-articles-multistream.xml.bz2 from https://dumps.wikimedia.org/hewiki/20200701/ <br> XML has been converted to plain text using Wikipedia Extractor http://medialab.di.unipi.it/wiki/Wikipedia_Extractor <br> I then added <|startoftext|> and <|endoftext|> markers and deleted empty lines. <br> #### How to use ```python import torch import torch.nn as nn from transformers import GPT2Tokenizer, GPT2LMHeadModel tokenizer = GPT2Tokenizer.from_pretrained("Norod78/hewiki-articles-distilGPT2py-il") model = GPT2LMHeadModel.from_pretrained("Norod78/hewiki-articles-distilGPT2py-il").eval() bos_token = tokenizer.bos_token # Beginning of sentence eos_token = tokenizer.eos_token # End of sentence def generate_word(model, tokens_tensor, temperature=1.0): """ Sample a word given a tensor of tokens of previous words from a model. Given the words we have, sample a plausible word. Temperature is used for controlling randomness. If temperature==0, we simply take a greedy argmax. Otherwise, we sample from a multinomial distribution after scaling the logits by the inverse temperature, which allows enough randomness to escape repetitions. """ with torch.no_grad(): outputs = model(tokens_tensor) predictions = outputs[0] if temperature>0: # Make the distribution more or less skewed based on the temperature predictions = outputs[0]/temperature # Sample from the distribution softmax = nn.Softmax(dim=0) predicted_index = torch.multinomial(softmax(predictions[0,-1,:]),1).item() # Simply take the arg-max of the distribution else: predicted_index = torch.argmax(predictions[0, -1, :]).item() # Decode the encoding to the corresponding word predicted_text = tokenizer.decode([predicted_index]) return predicted_text def generate_sentence(model, tokenizer, initial_text, temperature=1.0): """ Generate a sentence given some initial text using a model and a tokenizer. Returns the new sentence. """ # Encode the text input text = "" sentence = text # We avoid an infinite loop by setting a maximum range for i in range(0,84): indexed_tokens = tokenizer.encode(initial_text + text) # Convert indexed tokens into a PyTorch tensor tokens_tensor = torch.tensor([indexed_tokens]) new_word = generate_word(model, tokens_tensor, temperature=temperature) # Here the temperature is gradually adjusted toward ~0.996 with each generated word, # which helps the sentence ending stay coherent. # The temperature never reaches 0.0, so some randomness is left in. if temperature<(1-0.008): temperature += 0.008 else: temperature = 0.996 text = text+new_word # Stop generating new words when we have reached the end of the line or the text if eos_token in new_word: # returns the new sentence and whether generation is done return (text.replace(eos_token,"").strip(), True) elif '/' in new_word: return (text.strip(), False) elif bos_token in new_word: return (text.replace(bos_token,"").strip(), False) return (text, True) for output_num in range(1,5): init_text = "בוקר טוב" text = bos_token + init_text for i in range(0,84): sentence = generate_sentence(model, tokenizer, text, temperature=0.9) text = init_text + sentence[0] print(text) if (sentence[1] == True): break ```
{"language": "he", "license": "mit", "thumbnail": "https://avatars1.githubusercontent.com/u/3617152?norod.jpg", "widget": [{"text": "<|startoftext|>\u05d4\u05d7\u05d5\u05e7 \u05d4\u05e9\u05e0\u05d9 \u05e9\u05dc \u05de\u05d5\u05e2\u05d3\u05d5\u05df \u05e7\u05e8\u05d1 \u05d4\u05d5\u05d0"}, {"text": "<|startoftext|>\u05e8\u05d0\u05e9 \u05d4\u05de\u05de\u05e9\u05dc\u05d4 \u05d1\u05df \u05d2\u05d5\u05e8\u05d9\u05d5\u05df"}, {"text": "<|startoftext|>\u05dc\u05de\u05d9\u05d3\u05ea \u05de\u05db\u05d5\u05e0\u05d4 (\u05e1\u05e8\u05d8)"}, {"text": "<|startoftext|>\u05de\u05e0\u05e9\u05d4 \u05e4\u05d5\u05de\u05e4\u05e8\u05e0\u05d9\u05e7\u05dc"}, {"text": "<|startoftext|>\u05d0\u05d9 \u05e9\u05d5\u05d5\u05d9\u05d5\u05df "}]}
Norod78/hewiki-articles-distilGPT2py-il
null
[ "transformers", "pytorch", "tf", "jax", "safetensors", "gpt2", "text-generation", "he", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
Norrawee/monsoon-ner
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Norrawee/mosoon-ner
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
Norrawee/wangchanberta-ner-2
null
[ "transformers", "pytorch", "camembert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
Norrawee/wangchanberta-w10
null
[ "transformers", "pytorch", "camembert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
Norrawee/wangchanberta-w20
null
[ "transformers", "pytorch", "camembert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
{}
Norrawee/wangchanberta-w50
null
[ "transformers", "pytorch", "camembert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Not/test-model
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NotSage/sage
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
NotSage/sagecodes
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Noureddine/model_name
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Noureddine/xlnet-base
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Lelouch DialoGPT model
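Below is a minimal chat sketch assuming the standard DialoGPT-style conversational usage pattern; the loop length and generation settings are illustrative assumptions, not settings published for this model.

```python
# Minimal DialoGPT-style chat loop sketch (turn count and max_length are assumptions)
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Nova/DialoGPT-medium-Lelouch")
model = AutoModelForCausalLM.from_pretrained("Nova/DialoGPT-medium-Lelouch")

chat_history_ids = None
for step in range(3):  # chat for three turns
    new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = (torch.cat([chat_history_ids, new_input_ids], dim=-1)
                     if chat_history_ids is not None else new_input_ids)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000,
                                      pad_token_id=tokenizer.eos_token_id)
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0],
                                   skip_special_tokens=True))
```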
{"tags": ["conversational"]}
Nova/DialoGPT-medium-Lelouch
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
NovaChrono/twervy
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Genji-JP 6B Please check our blog post for more details, samples, evaluations and more: [Blogpost](https://blog.novelai.net/data-efficient-language-transfer-with-gpt-j-45daedaaf35a) ## Model Description Genji-JP 6B is a model finetuned on our Japanese storytelling dataset based on EleutherAI's GPT-J 6B model. This particular model is trained on Japanese web novels. | Hyperparameter | Value | |-------------------|--------| | n_parameters | 6,053,381,344 | | n_layers | 28* | | d_model | 4,096 | | d_ff | 16,384 | | n_heads | 16 | | d_head | 256 | | n_ctx | 2,048 | | n_vocab | 50,400 (same tokenizer as GPT-2/3) | | position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | `*` each layer consists of one feedforward block and one self attention block The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) was applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. ## Training data GPT-J 6B was pretrained on the [Pile](pile.eleuther.ai), a large scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it's finetuned on our Japanese storytelling dataset. Check our blog post for more details. ### How to use ``` from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") model = AutoModelForCausalLM.from_pretrained("NovelAI/genji-jp", torch_dtype=torch.float16, low_cpu_mem_usage=True).eval().cuda() text = '''あらすじ:あなたは異世界に転生してしまいました。勇者となって、仲間を作り、異世界を冒険しよう! *** 転生すると、ある能力を手に入れていた。それは、''' tokens = tokenizer(text, return_tensors="pt").input_ids generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, temperature=1, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id) last_tokens = generated_tokens[0] generated_text = tokenizer.decode(last_tokens).replace("�", "") print("Generation:\n" + generated_text) ``` When run, produces output like this: ``` Generation: あらすじ:あなたは異世界に転生してしまいました。勇者となって、仲間を作り、異世界を冒険しよう! *** 転生すると、ある能力を手に入れていた。それは、『予知』だ。過去から未来のことを、誰も知らない出来事も含めて見通すことが出来る。 悪魔の欠片と呼ばれる小さな結晶を取り込んで、使役することが出来る。人を惹きつけ、堕落させる。何より、俺は男なんて居なかったし、女に興味もない。……そんなクズの片棒を担ぎ上げる奴が多くなると思うと、ちょっと苦しい。 だが、一部の人間には協力者を得ることが出来る。目立たない街にある寺の中で、常に家に引きこもっている老人。そんなヤツの魂をコントロールすることが出来るのだ。便利な能力だ。しかし、裏切り者は大勢いる。気を抜けば、狂う。だから注意が必要だ。 ――「やってやるよ」  アーロンは不敵に笑った。この ``` ## Acknowledgements This project was possible because of the compute provided by the [TPU Research Cloud](https://sites.research.google/trc/) Thanks [EleutherAI](https://eleuther.ai/) for pretraining the GPT-J 6B model. Thanks to everyone who contributed to this project! - [Finetune](https://github.com/finetuneanon) - [Aero](https://github.com/AeroScripts) - [Kurumuz](https://github.com/kurumuz)
{"language": ["ja", "en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"]}
NovelAI/genji-jp
null
[ "transformers", "pytorch", "gptj", "text-generation", "causal-lm", "ja", "en", "arxiv:2104.09864", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
# Genji-python 6B For example usage, or to easily use the model, you can check our colab notebook: [Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing) ## Model Description Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4 GB in size. The split model has its checkpoint split into shards, which uses less system RAM while loading and makes loading faster. This model needs more effort to set up as you need to install git-lfs and pull the repo. | Hyperparameter | Value | |-------------------|--------| | n_parameters | 6,053,381,344 | | n_layers | 28* | | d_model | 4,096 | | d_ff | 16,384 | | n_heads | 16 | | d_head | 256 | | n_ctx | 2,048 | | n_vocab | 50,400 (same tokenizer as GPT-2/3) | | position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | `*` each layer consists of one feedforward block and one self attention block The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) were applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. ## Training data GPT-J 6B was pretrained on the [Pile](https://pile.eleuther.ai), a large-scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it's finetuned on the python code that was taken from the Pile. ## Training procedure Genji-python-6B is trained for 20k steps on around 655 million tokens with a learning rate of 2e-06 ## Intended Use This model is trained for assistance with writing Python code and for having fun trying weird stuff with it. ### How to use This model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable. For now, you need to use this fork: [Fork](https://github.com/finetuneanon/transformers) to install with pip: ```bash pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b ``` **git-lfs** also needs to be installed, on Ubuntu: ```bash apt install git-lfs ``` after it's installed, initialize git-lfs: ```bash git lfs install ``` then clone this repo: ```bash git clone https://huggingface.co/NovelAI/genji-python-6B-split ``` Now we can load the model. We recommend using the model in FP16; that way, it fits on 16 GB VRAM cards. 
How to use: ```python from transformers import ( AutoTokenizer, AutoModelForCausalLM, GPTNeoForCausalLM, ) model = AutoModelForCausalLM.from_pretrained("genji-python-6B-split/model").half().eval().cuda() tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B") text = '''def print_customer_name''' tokens = tokenizer(text, return_tensors="pt").input_ids generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, top_k=50, temperature=0.3, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id) last_tokens = generated_tokens[0][len(tokens[0]):] generated_text = tokenizer.decode(last_tokens) print("Generation:\n" + generated_text) ``` When run, this code generates: ```python Prompt: def print_customer_name Generation: (self, customer): """Print the name of a customer.""" if not self.is_valid(): return print("Customer: {}".format(customer)) ``` For example usage, you can see our colab notebook as well: [Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing) ## Eval results TBD ## Acknowledgements This project was possible because of the compute provided by the [TPU Research Cloud](https://sites.research.google/trc/) and [EleutherAI](https://eleuther.ai/) for pretraining of the GPT-J 6B. Thanks to everyone who contributed to this project: - [Aero](https://github.com/AeroScripts) - [Finetune](https://github.com/finetuneanon) - [Kurumuz](https://github.com/kurumuz)
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["the Pile"]}
NovelAI/genji-python-6B-split
null
[ "pytorch", "causal-lm", "en", "arxiv:2104.09864", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Genji-python 6B For example usage, or to easily use the model, you can check our colab notebook: [Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing) ## Model Description Genji is a transformer model finetuned on EleutherAI's GPT-J 6B model. This particular model is trained on Python-only code approaching 4 GB in size. | Hyperparameter | Value | |-------------------|--------| | n_parameters | 6,053,381,344 | | n_layers | 28* | | d_model | 4,096 | | d_ff | 16,384 | | n_heads | 16 | | d_head | 256 | | n_ctx | 2,048 | | n_vocab | 50,400 (same tokenizer as GPT-2/3) | | position encoding | [Rotary position encodings (RoPE)](https://arxiv.org/abs/2104.09864) | | RoPE dimensions | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) | `*` each layer consists of one feedforward block and one self attention block The model consists of 28 layers with a model dimension of 4096, and a feedforward dimension of 16384. The model dimension is split into 16 heads, each with a dimension of 256. Rotary position encodings (RoPE) were applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 50257, using the same set of BPEs as GPT-2/GPT-3. ## Training data GPT-J 6B was pretrained on the [Pile](https://pile.eleuther.ai), a large-scale curated dataset created by EleutherAI for the purpose of training this model. After the pre-training, it's finetuned on the python code that was taken from the Pile. ## Training procedure Genji-python-6B is trained for 20k steps on around 655 million tokens with a learning rate of 2e-06 ## Intended Use This model is trained for assistance with writing Python code and for having fun trying weird stuff with it. ### How to use This model is only usable with our fork because GPT-J is not merged to the main transformers repo yet. When it's merged, we will make this model easily loadable. For now, you need to use this fork: [Fork](https://github.com/finetuneanon/transformers) to install with pip: ```bash pip install git+https://github.com/finetuneanon/transformers@gpt-neo-localattention3-rp-b ``` This model takes more than 16 GB of RAM to load. If you want more efficient and faster loading, please check our split model. We recommend using the model in FP16; that way, it fits on 16 GB VRAM cards. 
How to use: ```python from transformers import ( AutoTokenizer, AutoModelForCausalLM, GPTNeoForCausalLM, ) model = AutoModelForCausalLM.from_pretrained("NovelAI/genji-python-6B", use_auth_token=True).half().eval().cuda() tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B") text = '''def print_customer_name''' tokens = tokenizer(text, return_tensors="pt").input_ids generated_tokens = model.generate(tokens.long().cuda(), use_cache=True, do_sample=True, top_k=50, temperature=0.3, top_p=0.9, repetition_penalty=1.125, min_length=1, max_length=len(tokens[0]) + 400, pad_token_id=tokenizer.eos_token_id) last_tokens = generated_tokens[0][len(tokens[0]):] generated_text = tokenizer.decode(last_tokens) print("Generation:\n" + generated_text) ``` When run, this code generates: ```python Prompt: def print_customer_name Generation: (self, customer): """Print the name of a customer.""" if not self.is_valid(): return print("Customer: {}".format(customer)) ``` For example usage, you can see our colab notebook as well: [Notebook](https://colab.research.google.com/drive/1PnWpx02IEUkY8jhLKd_NewUGEXahAska?usp=sharing) ## Eval results TBD ## Acknowledgements This project was possible because of the compute provided by the [TPU Research Cloud](https://sites.research.google/trc/) and [EleutherAI](https://eleuther.ai/) for pretraining of the GPT-J 6B. Thanks to everyone who contributed to this project! - [Aero](https://github.com/AeroScripts) - [Finetune](https://github.com/finetuneanon) - [Kurumuz](https://github.com/kurumuz)
{"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm"], "datasets": ["the Pile"]}
NovelAI/genji-python-6B
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "causal-lm", "en", "arxiv:2104.09864", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# bert-base-multilingual-uncased-sentiment This is a bert-base-multilingual-uncased model finetuned for sentiment analysis on product reviews in six languages: English, Dutch, German, French, Spanish and Italian. It predicts the sentiment of the review as a number of stars (between 1 and 5). This model is intended for direct use as a sentiment analysis model for product reviews in any of the six languages above, or for further finetuning on related sentiment analysis tasks. ## Training data Here is the number of product reviews we used for finetuning the model: | Language | Number of reviews | | -------- | ----------------- | | English | 150k | | Dutch | 80k | | German | 137k | | French | 140k | | Italian | 72k | | Spanish | 50k | ## Accuracy The finetuned model obtained the following accuracy on 5,000 held-out product reviews in each of the languages: - Accuracy (exact) is the exact match on the number of stars. - Accuracy (off-by-1) is the percentage of reviews where the number of stars the model predicts differs by a maximum of 1 from the number given by the human reviewer. | Language | Accuracy (exact) | Accuracy (off-by-1) | | -------- | ---------------------- | ------------------- | | English | 67% | 95% | | Dutch | 57% | 93% | | German | 61% | 94% | | French | 59% | 94% | | Italian | 59% | 95% | | Spanish | 58% | 95% | ## Contact In addition to this model, [NLP Town](https://www.nlp.town) offers custom, monolingual sentiment models for many languages and an improved multilingual model through [RapidAPI](https://rapidapi.com/nlp-town-nlp-town-default/api/multilingual-sentiment-analysis2/). Feel free to contact us for questions, feedback and/or requests for similar models.
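For quick experimentation, the model can be used through the text-classification pipeline. The sketch below uses the repository id this entry is filed under and an example review written for illustration; the exact label strings returned depend on the checkpoint's configuration.

```python
# Minimal sentiment sketch; the example review is an assumption, and the printed
# label format depends on the checkpoint's label configuration.
from transformers import pipeline

sentiment = pipeline("text-classification", model="Noxel/sentiments_multilenguaje")
print(sentiment("El producto llegó rápido y la calidad es excelente."))
# e.g. [{'label': '5 stars', 'score': ...}]
```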
{"language": ["en", "nl", "de", "fr", "it", "es"], "license": "mit"}
Noxel/sentiments_multilenguaje
null
[ "transformers", "bert", "text-classification", "en", "nl", "de", "fr", "it", "es", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00