modelId: string (length 4 to 81)
tags: sequence
pipeline_tag: string (17 classes)
config: dict
downloads: int64 (0 to 59.7M)
first_commit: timestamp[ns, tz=UTC]
card: string (length 51 to 438k)
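A minimal sketch (not part of the dump itself) of loading and inspecting a dataset with the schema listed above; the repository id below is a placeholder, not the actual source of these rows.

```python
# Minimal sketch: load a dataset with the schema above and peek at a few rows.
# "your-namespace/model-card-dump" is a hypothetical repo id.
from datasets import load_dataset

ds = load_dataset("your-namespace/model-card-dump", split="train")

for row in ds.select(range(3)):
    print(row["modelId"], row["pipeline_tag"], row["downloads"])
    print(row["card"][:200])  # first 200 characters of the model card text
```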
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2021-06-03T13:08:46Z
--- tags: - tabular-classification - sklearn dataset: - wine-quality widget: structuredData: fixed_acidity: - 7.3 - 7.8 - 10.3 volatile_acidity: - 0.7 - 0.88 - 0.32 citric_acid: - 0 - 0 - 0.45 residual_sugar: - 1.9 - 2.6 - 6.4 chlorides: - 0.076 - 0.098 - 0.073 free_sulfur_dioxide: - 11 - 25 - 5 total_sulfur_dioxide: - 34 - 67 - 13 density: - 0.9978 - 0.9968 - 0.9976 pH: - 3.51 - 3.2 - 3.23 sulphates: - 0.56 - 0.68 - 0.82 alcohol: - 9.4 - 9.8 - 12.6 --- ## Wine Quality classification ### A Simple Example of Scikit-learn Pipeline > Inspired by https://towardsdatascience.com/a-simple-example-of-pipeline-in-machine-learning-with-scikit-learn-e726ffbb6976 by Saptashwa Bhattacharyya ### How to use ```python from huggingface_hub import hf_hub_url, cached_download import joblib import pandas as pd REPO_ID = "julien-c/wine-quality" FILENAME = "sklearn_model.joblib" model = joblib.load(cached_download( hf_hub_url(REPO_ID, FILENAME) )) # model is a `sklearn.pipeline.Pipeline` ``` #### Get sample data from this repo ```python data_file = cached_download( hf_hub_url(REPO_ID, "winequality-red.csv") ) winedf = pd.read_csv(data_file, sep=";") X = winedf.drop(["quality"], axis=1) Y = winedf["quality"] print(X[:3]) ``` | | fixed acidity | volatile acidity | citric acid | residual sugar | chlorides | free sulfur dioxide | total sulfur dioxide | density | pH | sulphates | alcohol | |---:|----------------:|-------------------:|--------------:|-----------------:|------------:|----------------------:|-----------------------:|----------:|-----:|------------:|----------:| | 0 | 7.4 | 0.7 | 0 | 1.9 | 0.076 | 11 | 34 | 0.9978 | 3.51 | 0.56 | 9.4 | | 1 | 7.8 | 0.88 | 0 | 2.6 | 0.098 | 25 | 67 | 0.9968 | 3.2 | 0.68 | 9.8 | | 2 | 7.8 | 0.76 | 0.04 | 2.3 | 0.092 | 15 | 54 | 0.997 | 3.26 | 0.65 | 9.8 | #### Get your prediction ```python labels = model.predict(X[:3]) # [5, 5, 5] ``` #### Eval ```python model.score(X, Y) # 0.6616635397123202 ``` ### 🍷 Disclaimer No red wine was drunk (unfortunately) while training this model 🍷
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8647022085959235 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1344 - F1: 0.8647 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2568 | 1.0 | 525 | 0.1596 | 0.8210 | | 0.1279 | 2.0 | 1050 | 0.1368 | 0.8522 | | 0.0814 | 3.0 | 1575 | 0.1344 | 0.8647 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.1 - Datasets 1.18.0 - Tokenizers 0.10.3
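The card above reports metrics and hyperparameters but no inference snippet. A minimal usage sketch for the token-classification checkpoint it describes; the repository id is a placeholder, since the card does not state a namespace.

```python
# Minimal sketch: run the fine-tuned PAN-X German NER model through the
# token-classification pipeline. The repo id is a hypothetical placeholder.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-namespace/xlm-roberta-base-finetuned-panx-de",  # hypothetical id
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Berlin."))
```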
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: ar datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Arabic by Othmane Rifki results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice ar type: common_voice args: ar metrics: - name: Test WER type: wer value: 46.77 --- # Wav2Vec2-Large-XLSR-53-Arabic Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ar", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-arabic") model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-arabic") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Arabic test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ar", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-arabic") model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-arabic") model.to("cuda") chars_to_ignore_regex = '[\؛\—\_get\«\»\ـ\ـ\,\?\.\!\-\;\:\"\“\%\‘\”\�\#\،\☭,\؟]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 46.77 ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found [here](https://huggingface.co/othrif/wav2vec2-large-xlsr-arabic/tree/main)
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: arz datasets: - https://arabicspeech.org/ metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Egyptian Arabic by Othmane Rifki results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: arabicspeech.org MGB-3 type: arabicspeech.org MGB-3 args: ar metrics: - name: Test WER type: wer value: 55.2 --- # Wav2Vec2-Large-XLSR-53-Egyptian-Arabic Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) in Egyptian using the [arabicspeech.org MGB-3](https://arabicspeech.org/mgb3-asr/) When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ar", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian") model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Arabic test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "ar", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian") model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-egyptian") model.to("cuda") chars_to_ignore_regex = '[\؛\—\_get\«\»\ـ\ـ\,\?\.\!\-\;\:\"\“\%\‘\”\�\#\،\☭,\؟]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 55.2 ## Training The Common Voice `train`, `validation` datasets were used for training. The script used for training can be found [here](https://github.com/othrif/xlsr-wav2vec2)
AnonymousSub/rule_based_roberta_twostagetriplet_hier_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
23
null
--- language: ary datasets: - mgb5 metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Moroccan Arabic dialect by Othmane Rifki results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: MGB5 from ELDA and https://arabicspeech.org/ type: ELDA and https://arabicspeech.org/ args: ary metrics: - name: Test WER type: wer value: 66.45 --- # Wav2Vec2-Large-XLSR-53-Moroccan Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on [MGB5 Moroccan Arabic](http://www.islrn.org/resources/938-639-614-524-5/) kindly provided by [ELDA](http://www.elra.info/en/about/elda/) and [ArabicSpeech](https://arabicspeech.org/mgb5/). In order to have access to MGB5, please request it from ELDA. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import re import torch import librosa import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import soundfile as sf dataset = load_dataset("ma_speech_corpus", split="test") processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-moroccan") model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-moroccan") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\'\\�]' def remove_special_characters(batch): batch["text"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).lower() + " " return batch dataset = dataset.map(remove_special_characters) dataset = dataset.select(range(10)) def speech_file_to_array_fn(batch): start, stop = batch['segment'].split('_') speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array, sampling_rate = sf.read(batch["path"], start=int(float(start) * sampling_rate), stop=int(float(stop) * sampling_rate)) batch["speech"] = librosa.resample(speech_array, sampling_rate, 16_000) batch["sampling_rate"] = 16_000 batch["target_text"] = batch["text"] return batch dataset = dataset.map( speech_file_to_array_fn ) def predict(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["predicted"] = processor.batch_decode(pred_ids) return batch dataset = dataset.map(predict, batched=True, batch_size=32) for reference, predicted in zip(dataset["sentence"], dataset["predicted"]): print("reference:", reference) print("predicted:", predicted) print("--") ``` Here's the output: ``` reference: عشرين ألفريال الوحده وشي خمسميه دريال predicted: عشرين علف ريا لوحده وشي خمسميات ريال -- reference: واحد جوج تلاتة ربعه خمسة ستة predicted: غيحك تويش تتبة نتاست -- reference: هي هاديك غتجينا تقريبا ميه وسته وعشرين ألف ريال predicted: ياض كتجينا تقريبه ميه أو ستي و عشيناأفرين -- reference: ###والصرف ليبقا نجيب بيه الصالون فلهوندا... أهاه نديروها علاش لا؟... predicted: أواصرف ليبقا نجيب يه اصالون فالهندا أه نديروها علاش لا -- reference: ###صافي مشات... أنا أختي معندي مندير بهاد صداع الراس... 
predicted: صافي مشات أنا خصي معندي مندير بهاد داع راسك ف -- reference: خلصو ليا غير لكريدي ديالي وديرو ليعجبكوم predicted: خلصو ليا غير لكريدي ديالي أوديرو لي عجبكوم -- reference: أنا نتكلف يلاه لقى شي حاجه نشغل بيها راسي predicted: أنا نتكلف يالله لقا شي حاجه نشغل بيها راسي ``` ## Evaluation The model can be evaluated as follows on the Arabic test data of Common Voice. ```python import re import torch import librosa import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import soundfile as sf eval_dataset = load_dataset("ma_speech_corpus", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec2-large-xlsr-moroccan") model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec2-large-xlsr-moroccan") model.to("cuda") chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“\\'\\�]' def remove_special_characters(batch): batch["text"] = re.sub(chars_to_ignore_regex, "", batch["sentence"]).lower() + " " return batch eval_dataset = eval_dataset.map(remove_special_characters, remove_columns=["sentence"]) #eval_dataset = eval_dataset.select(range(100)) def speech_file_to_array_fn(batch): start, stop = batch['segment'].split('_') speech_array, sampling_rate = torchaudio.load(batch["path"]) speech_array, sampling_rate = sf.read(batch["path"], start=int(float(start) * sampling_rate), stop=int(float(stop) * sampling_rate)) batch["speech"] = librosa.resample(speech_array, sampling_rate, 16_000) batch["sampling_rate"] = 16_000 batch["target_text"] = batch["text"] return batch eval_dataset = eval_dataset.map( speech_file_to_array_fn, remove_columns=eval_dataset.column_names ) def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = eval_dataset.map(evaluate, batched=True, batch_size=32) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["target_text"]))) ``` **Test Result**: 66.45 ## Training The [MGB5](http://www.islrn.org/resources/938-639-614-524-5/) `train`, `validation` datasets were used for training. The script used for training can be found [here](https://github.com/othrif/xlsr-wav2vec2)
AnonymousSub/rule_based_twostage_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: ar datasets: - https://arabicspeech.org/ tags: - audio - automatic-speech-recognition - speech license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Egyptian by Zaid Alyafeai and Othmane Rifki results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: arabicspeech.org MGB-3 type: arabicspeech.org MGB-3 args: ar metrics: - name: Test WER type: wer value: 55.2 --- # Test Wav2Vec2 with Egyptian Arabic Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Egyptian Arabic using the [arabicspeech.org MGB-3](https://arabicspeech.org/mgb3-asr/) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("arabic_speech_corpus", split="test") processor = Wav2Vec2Processor.from_pretrained("othrif/wav2vec_test") model = Wav2Vec2ForCTC.from_pretrained("othrif/wav2vec_test") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ```
AnonymousSub/unsup-consert-base_copy
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
2021-08-21T11:25:17Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy - f1 model_index: - name: finetuned-bert-mrpc results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue args: mrpc metric: name: F1 type: f1 value: 0.9003322259136212 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-bert-mrpc This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.5280 - Accuracy: 0.8529 - F1: 0.9003 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5704 | 1.0 | 230 | 0.4204 | 0.7917 | 0.8542 | | 0.3391 | 2.0 | 460 | 0.4157 | 0.8456 | 0.8955 | | 0.1923 | 3.0 | 690 | 0.5280 | 0.8529 | 0.9003 | ### Framework versions - Transformers 4.9.2 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.3
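The card above likewise omits usage code. A minimal sketch of scoring a sentence pair with the MRPC-fine-tuned checkpoint it describes; the repository id is a placeholder, since the card gives no namespace.

```python
# Minimal sketch: paraphrase scoring with the fine-tuned MRPC checkpoint.
# The repo id is a hypothetical placeholder.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "your-namespace/finetuned-bert-mrpc"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

inputs = tokenizer(
    "The company said its profits rose sharply last quarter.",
    "Profits increased significantly last quarter, the company said.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class probabilities; label order depends on the checkpoint config
```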
AnonymousSubmission/pretrained-model-1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
## ParsTwiNER: Transformer-based Model for Named Entity Recognition at Informal Persian An open, broad-coverage corpus and model for informal Persian named entity recognition collected from Twitter. Paper presenting ParsTwiNER: [2021.wnut-1.16](https://aclanthology.org/2021.wnut-1.16/) --- ## Results The following table summarizes the F1 score on our corpus obtained by ParsTwiNER as compared to ParsBERT as a SoTA for Persian NER. ### Named Entity Recognition on Our Corpus | Entity Type | ParsTwiNER F1 | ParsBERT F1 | |:-----------:|:-------------:|:--------------:| | PER | 91 | 80 | | LOC | 82 | 68 | | ORG | 69 | 55 | | EVE | 41 | 12 | | POG | 85 | - | | NAT | 82.3 | - | | Total | 81.5 | 69.5 | ## How to use ### TensorFlow 2.0 ```python from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline tokenizer = AutoTokenizer.from_pretrained("overfit/twiner-bert-base-mtl") model = TFAutoModelForTokenClassification.from_pretrained("overfit/twiner-bert-base-mtl") twiner_mtl = pipeline('ner', model=model, tokenizer=tokenizer, ignore_labels=[]) ``` ## Cite Please cite the following paper in your publication if you are using [ParsTwiNER](https://aclanthology.org/2021.wnut-1.16/) in your research: ```bibtex @inproceedings{aghajani-etal-2021-parstwiner, title = "{P}ars{T}wi{NER}: A Corpus for Named Entity Recognition at Informal {P}ersian", author = "Aghajani, MohammadMahdi and Badri, AliAkbar and Beigy, Hamid", booktitle = "Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)", month = nov, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.wnut-1.16", pages = "131--136", abstract = "As a result of unstructured sentences and some misspellings and errors, finding named entities in a noisy environment such as social media takes much more effort. ParsTwiNER contains about 250k tokens, based on standard instructions like MUC-6 or CoNLL 2003, gathered from Persian Twitter. Using Cohen{'}s Kappa coefficient, the consistency of annotators is 0.95, a high score. In this study, we demonstrate that some state-of-the-art models degrade on these corpora, and trained a new model using parallel transfer learning based on the BERT architecture. Experimental results show that the model works well in informal Persian as well as in formal Persian.", } ``` ## Acknowledgments The authors would like to thank Dr. Momtazi for her support. Furthermore, we would like to acknowledge the assistance provided by Mohammad Mahdi Samiei and Abbas Maazallahi. ## Contributors - Mohammad Mahdi Aghajani: [Linkedin](https://www.linkedin.com/in/mohammadmahdi-aghajani-821843147/), [Github](https://github.com/mmaghajani) - Ali Akbar Badri: [Linkedin](https://www.linkedin.com/in/aliakbarbadri/), [Github](https://github.com/AliAkbarBadri) - Dr. Hamid Beigy: [Linkedin](https://www.linkedin.com/in/hamid-beigy-8982604b/) - Overfit Team: [Github](https://github.com/overfit-ir), [Telegram](https://t.me/nlp_stuff) ## Releases ### Release v1.0.0 (Aug 01, 2021) This is the first version of ParsTwiNER.
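For illustration only, a short call against the `twiner_mtl` pipeline built in the "How to use" block above; the sentence and printed fields are illustrative, not taken from the card.

```python
# Hypothetical usage of the `twiner_mtl` pipeline defined above.
example = "امروز با دوستام رفتیم برج میلاد تهران"  # informal Persian sample (illustrative)
for token in twiner_mtl(example):
    print(token["word"], token["entity"], round(token["score"], 3))
```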
AntonClaesson/finetuning_test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
{0: 'Anorexia', 1: 'Anxiety', 2: 'Bullying', 3: 'Care', 4: 'Creativity', 5: 'Culture', 6: 'Depression', 7: 'Friends', 8: 'Getting help', 9: 'Happiness', 10: 'Helping others', 11: 'Helping yourself', 12: 'Hope', 13: 'Learning', 14: 'Life Issues', 15: 'Mental Health', 16: 'Mental Health Matters', 17: 'Mental health awareness', 18: 'PTSD', 19: 'Positivity', 20: 'Resilience', 21: 'Self-care', 22: 'Sharing', 23: 'Support', 24: 'University'}
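The row above stores only an `id → label` mapping. A minimal, self-contained sketch of how such a mapping is typically applied to classifier logits; the logits below are fabricated and no checkpoint is referenced.

```python
# Minimal sketch: map an argmax class index to its topic label.
import torch

id2label = {0: "Anorexia", 1: "Anxiety", 2: "Bullying"}  # truncated; full mapping above
logits = torch.tensor([[0.1, 2.3, -0.5]])                # fabricated logits for illustration
predicted_id = int(logits.argmax(dim=-1).item())
print(id2label[predicted_id])                            # -> Anxiety
```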
Aplinxy9plin/toxic-detection-rus
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: english datasets: - bioASQ pipeline_tag: question-answering license: mit --- # T5-base model fine-tuned on BioASQ for Biological Question Answering 👩‍⚕️👨‍⚕️ [Google's T5-base](https://huggingface.co/t5-base) fine-tuned on [BioASQ](https://github.com/dmis-lab/biobert) (secondary task) for **Q&A** downstream task. ## Details of T5 [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* ## Dependencies transformers == 4.3.3 sentencepiece >= 0.1.94 ## Usage 🚀 ```python import torch from transformers import T5ForConditionalGeneration, T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("ozcangundes/T5-base-for-BioQA") model = T5ForConditionalGeneration.from_pretrained("ozcangundes/T5-base-for-BioQA") def get_answer(question,context): source_encoding=tokenizer( question, context, max_length=512, padding="max_length", truncation="only_second", return_attention_mask=True, add_special_tokens=True, return_tensors="pt") generated_ids=model.generate( input_ids=source_encoding["input_ids"], attention_mask=source_encoding["attention_mask"]) preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids] return "".join(preds) ``` ### Example 1 ```python question={ "context":"Effect of food on the pharmacokinetics of empagliflozin, a sodium glucose cotransporter 2 (SGLT2) inhibitor, and assessment of dose proportionality in healthy volunteers. OBJECTIVES: Empagliflozin is an orally available, potent and highly selective inhibitor of the sodium glucose cotransporter 2 (SGLT2). This study was undertaken to investigate the effect of food on the pharmacokinetics of 25 mg empagliflozin and to assess dose proportionality between 10 mg and 25 mg empagliflozin under fasted conditions. MATERIALS AND METHODS: In this open-label, 3-way, cross-over study, 18 healthy volunteers received 3 single doses of empagliflozin in a randomized sequence (25 mg empagliflozin under fasted conditions, 25 mg empagliflozin after a high-fat, high-calorie breakfast and 10 mg empagliflozin under fasted conditions), each separated by a washout period of at least 7 days. Serial plasma samples were collected at selected time points over a period of 72 hours. RESULTS: Administration with food had no clinically relevant effect on the area under the plasma concentration-time curve (AUC0-∞) of empagliflozin (geometric mean ratio (GMR): 84.04, 90% confidence interval (CI): 80.86 - 87.34). The decrease observed in the maximum plasma concentrations (Cmax) of empagliflozin (GMR: 63.22, 90% CI: 56.74 - 70.44) when administered with food was not considered clinically meaningful. The increases in AUC0-∞ and Cmax for 10 mg vs. 25 mg empagliflozin administered under fasting conditions were roughly dose-proportional, as demonstrated by the slope β of the regression lines being slightly less than 1 (slope β for AUC0-∞: 0.94, 95% CI: 0.90 - 0.97; slope β for Cmax: 0.91, 95% CI: 0.80 - 1.01). Empagliflozin was well tolerated under fed and fasting conditions. CONCLUSIONS: The results support administration of empagliflozin tablets independently of food. 
Increases in empagliflozin exposure under fasting conditions were roughly dose-proportional between 10 mg and 25 mg empagliflozin.", "question":"Which protein does empagliflozin inhibit?" } get_answer(question["question"],question["context"]) ``` > SGLT2 ### Example 2 ```python question2={ "context":"Dermatitis herpetiformis: jejunal findings and skin response to gluten free diet. Fifty seven children with dermatitis herpetiformis, 18 from Finland and 39 from Hungary, were studied. Diagnostic criteria included the finding of granular IgA deposits in the skin of all patients. The mean age at onset of the rash was 7 X 2 years and favoured sites were the elbows, knees, and buttocks. Symptoms suggesting small intestinal disease were rare but in 35 (61%) of the children subtotal villous atrophy and in 16 (28%) partial villous atrophy were found on jejunal biopsy. Eighteen children underwent a second biopsy after a mean of 21 months on a gluten free diet; villous height was found to be increased and the intraepithelial lymphocyte count decreased in all these patients. Gluten challenge caused a reversal in the two children who underwent a third biopsy. The effect of the gluten free diet on the rash was examined in Finnish children by observing the daily requirements of dapsone, a drug used to control the rash at the beginning of the diet. Eight (67%) of the 12 children were able to stop taking dapsone after a mean of 11 months on the diet and all three patients treated with diet alone became asymptomatic after three to 6 months on the diet. These results confirm that most children with dermatitis herpetiformis have jejunal villous atrophy, though they rarely have gastrointestinal symptoms. The central role of gluten in childhood dermatitis herpetiformis is evidenced by the fact that a gluten free diet helps the damaged jejunal mucosa to recover and controls the rash even in those children who do not have an abnormal jejunal biopsy.", "question":"What is the typical rash associated with gluten?" } get_answer(question2["question"],question2["context"]) ``` > dermatitis herpetiformis Created by Özcan Gündeş ✌️ --- Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a> Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a> Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a> Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
Apoorva/k2t-test
[ "pytorch", "t5", "text2text-generation", "en", "transformers", "keytotext", "k2t", "Keywords to Sentences", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
7
null
--- language: tr datasets: - TQUAD tags: - question-answering - question-generation - multitask-model license: apache-2.0 --- # mT5-small based Turkish Multitask (Answer Extraction, Question Generation and Question Answering) System [Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on [Turkish Question Answering dataset](https://github.com/okanvk/Turkish-Reading-Comprehension-Question-Answering-Dataset) for three downstream task **Answer extraction, Question Generation and Question Answering** served in this single model. mT5 model was also trained for multiple text2text NLP tasks. All data processing, training and pipeline codes can be found on my [Github](https://github.com/ozcangundes/multitask-question-generation). I will share the training details in the repo as soon as possible. mT5 small model has 300 million parameters and model size is about 1.2GB. Therefore, it takes significant amount of time to fine tune it. 8 epoch and 1e-4 learning rate with 0 warmup steps was applied during training. These hparams and the others can be fine-tuned for much more better results. ## Requirements ❗❗❗ ``` !pip install transformers==4.4.2 !pip install sentencepiece==0.1.95 !git clone https://github.com/ozcangundes/multitask-question-generation.git %cd multitask-question-generation/ ``` ## Usage 🚀🚀 ``` from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("ozcangundes/mt5-multitask-qa-qg-turkish") model = AutoModelForSeq2SeqLM.from_pretrained("ozcangundes/mt5-multitask-qa-qg-turkish") from pipelines import pipeline #pipelines.py script in the cloned repo multimodel = pipeline("multitask-qa-qg",tokenizer=tokenizer,model=model) #sample text text="Özcan Gündeş, 1993 yılı Tarsus doğumludur. Orta Doğu Teknik Üniversitesi \\\\ Endüstri Mühendisliği bölümünde 2011 2016 yılları arasında lisans eğitimi görmüştür. \\\\ Yüksek lisansını ise 2020 Aralık ayında, 4.00 genel not ortalaması ile \\\\ Boğaziçi Üniversitesi, Yönetim Bilişim Sistemleri bölümünde tamamlamıştır.\\\\ Futbolla yakından ilgilenmekle birlikte, Galatasaray kulübü taraftarıdır." ``` ## Example - Both Question Generation and Question Answering 💬💬 ``` multimodel(text) #output => [{'answer': 'Tarsus', 'question': 'Özcan Gündeş nerede doğmuştur?'}, {'answer': '1993', 'question': 'Özcan Gündeş kaç yılında doğmuştur?'}, {'answer': '2011 2016', 'question': 'Özcan Gündeş lisans eğitimini hangi yıllar arasında tamamlamıştır?'}, {'answer': 'Boğaziçi Üniversitesi, Yönetim Bilişim Sistemleri', 'question': 'Özcan Gündeş yüksek lisansını hangi bölümde tamamlamıştır?'}, {'answer': 'Galatasaray kulübü', 'question': 'Özcan Gündeş futbolla yakından ilgilenmekle birlikte hangi kulübü taraftarıdır?'}] ``` From this text, 5 questions are generated and they are answered by the model. ## Example - Question Answering 💭💭 Both text and also, related question should be passed into pipeline. ``` multimodel({"context":text,"question":"Özcan hangi takımı tutmaktadır?"}) #output => Galatasaray multimodel({"context":text,"question":"Özcan, yüksek lisanstan ne zaman mezun oldu?"}) #output => 2020 Aralık ayında multimodel({"context":text,"question":"Özcan'ın yüksek lisans bitirme notu kaçtır?"}) #output => 4.00 #Sorry for being cocky 😝😝 ``` ## ACKNOWLEDGEMENT This work is inspired from [Suraj Patil's great repo](https://github.com/patil-suraj/question_generation). 
I would like to thank him for the clean code and also [Okan Çiftçi](https://github.com/okanvk) for the Turkish dataset 🙏
Appolo/TestModel
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: tr datasets: - TQUAD pipeline_tag: question-answering license: mit --- # mT5-small based Turkish Question Answering System [Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on [Turkish Question Answering dataset](https://github.com/TQuad/turkish-nlp-qa-dataset) for **Q&A** downstream task by using Pytorch Lightning.⚡ The notebook that includes all fine tuning process will be shared on my Github page later. mT5 small model has 300 million parameters and model size is about 1.2GB. Therefore, it takes significant amount of time to fine tune it. **Important Note**: mT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training. Therefore, the mT5 model has to be fine-tuned before it is useable on a downstream task. ## Usage 🚀 ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("ozcangundes/mt5-small-turkish-squad") model = AutoModelForSeq2SeqLM.from_pretrained("ozcangundes/mt5-small-turkish-squad") def get_answer(question,context): source_encoding=tokenizer( question, context, max_length=512, padding="max_length", truncation="only_second", return_attention_mask=True, add_special_tokens=True, return_tensors="pt") generated_ids=model.generate( input_ids=source_encoding["input_ids"], attention_mask=source_encoding["attention_mask"], max_length=120) preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids] return "".join(preds) ``` ### Example 1 ```python question={ "context":"Pardus, Google'ın öğrencilerle staj ve kendini geliştirme imkânı ile \ tasarılara geliştirici ve katkı sağlamayı amaçladığı açık kaynak tasarısı \ Google Summer of Code'a 2008 ve 2009 olmak üzere iki kere katılmıştır. Bu organizasyona \ ilk katılan Türk tasarısı Pardus olmuştur. Bazı dönemlerde Pardus hakkındaki gelişmeleri \ halka duyurmak ve tasarıya olan ilgiyi arttırmak amacıyla CeBIT Eurasia Bilişim Fuarı'na \ katılım sağlanmaktadır. 2006, 2008, 2009, 2010, 2011,2013 ve 2014 bu fuarlarda Pardus \ standı kurulmuştur.2014 yılında ICT SummitT Now Bilişim Zirvesi'nde yer alınmıştır. \ BİLİŞİM’2014 TBD 31. Ulusal Bilişim Kurultayı ve CITEX’2014 Ankara Bilişim Fuarı’na \ Gümüş sponsorluk ile katkıda bulunulmuş ve Pardus standı kurulmuştur.", "question":"Pardus’un Google Summer of Code'a katıldığı yıllar nelerdir?" } get_answer(question["question"],question["context"]) ``` > 2008 ve 2009 ### Example 2 ```python question2={ "context":"II. Bayezid ve I. Selim devrinde yaşadı ve iki defa hekimbaşılık yaptı. \ Böbrek ve idrar kesesindeki taş oluşumunun nedenlerini ve tedavisini incelediği \ eseriyle tanınır. Adı kaynaklarda Ahmed ve Mahmud olarak da geçer. Ahi Çelebi \ olarak ün yapmıştır. Babası Tabib Mevlana Kemal ile birlikte 1463’te İstanbul’a yerleşti. \ Mevlana Kemal, devrin ünlü hekimlerindendir. Tebriz ya da Şirvan asıllı olduğu çeşitli \ kaynaklarda belirtilir. Ahi Mehmet Çelebi, hekimliği daha çok babasından öğrendi. Onun \ ölümünden sonra devrin önemli hekimleri Kutbüddin ile Altunîzâde’den ders alıp kısa zamanda \ mesleğini ilerletti. Hekimlik becerisinin yanı sıra kuramsal bilgisiyle de kendisini \ kabul ettirerek önce Fâtih Darüşşifasına hekim, sonra da başhekim oldu. II. Bayezid’in \ güvenini kazanarak mutfak eminliğine, ardından da Hekimbaşılığa getirildi. Dört buçuk \ yıl bu görevde kalan Ahî Çelebi, II. Bayezid’in ölümü üzerine geleneğe uyularak azledildi. 
\ Bir müddet sonra Yavuz onu tekrar Hekimbaşılığa getirdi ve Mısır seferine beraberinde \ götürdü. I. Selim'in ölümünden sonra Hekimbaşılık tan tekrar azledildi. Kaynakların \ belirttiğine göre, yaşı doksanı geçmiş olduğu halde, hacdan dönerken Kahire’de \ ölmüş ve İmam Şafi'nin kabri civarına defnedilmiştir.", "question":"Ahi Mehmet Çelebi hangi eseri ile tanınır?" } get_answer(question2["question"],question2["context"]) ``` > Böbrek ve idrar kesesindeki taş oluşumunun nedenlerini ve tedavisini incelediği eseriyle Created by Özcan Gündeş ✌️ --- Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a> Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a> Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a> Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
ArBert/albert-base-v2-finetuned-ner-agglo-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- language: tr datasets: - MLSUM pipeline_tag: summarization license: mit --- # mT5-small based Turkish Summarization System [Google's Multilingual T5-small](https://github.com/google-research/multilingual-t5) is fine-tuned on [MLSUM Turkish news dataset](https://github.com/recitalAI/MLSUM) for **Summarization** downstream task by using Pytorch Lightning.⚡ mT5 small model has 300 million parameters and model size is about 1.2GB. Therefore, it takes significant amount of time to fine tune it. The model is trained with 10 epochs, 8 batch size and 10e-4 learning rate. It took almost 4 hours. The max news length is kept as 784 and max summary length is determined as 64. **Important Note**: mT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) excluding any supervised training. Therefore, the mT5 model has to be fine-tuned before it is useable on a downstream task. ## Dataset MLSUM dataset has more than 250K Turkish news with their related summaries. Since the mT5 model size and vocabulary is so large, 20K data is used for training and 4K data is used for validation. For more information about the dataset, please read this [great paper](https://arxiv.org/abs/2004.14900). ## Usage 🚀 ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("ozcangundes/mt5-small-turkish-summarization") model = AutoModelForSeq2SeqLM.from_pretrained("ozcangundes/mt5-small-turkish-summarization") def generate_summary(main_news): source_encoding=tokenizer( main_news, max_length=784, padding="max_length", truncation=True, return_attention_mask=True, add_special_tokens=True, return_tensors="pt") generated_ids=model.generate( input_ids=source_encoding["input_ids"], attention_mask=source_encoding["attention_mask"], num_beams=2, max_length=120, repetition_penalty=2.5, length_penalty=2.0, early_stopping=True, use_cache=True ) preds=[tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for gen_id in generated_ids] return "".join(preds) ``` ### Example 1 ```python main_news= "Final etabının üçüncü karşılaşması 29 Nisan Pazartesi günü saat 18.00 ’ de Burhan Felek Voleybol Salonu ’ nda oynanacak . Sezonu FIVB Kulüpler Dünya Şampiyonluğu ile açan ve CEV Avrupa Şampiyonlar Ligi'ni üçüncü olarak tamamlayan VakıfBank Kadın Voleybol Takımı , Vestel Venus Sultanlar Ligi final serisi ikinci maçında Eczacıbaşı VitrA'yı VakıfBank Spor Sarayı'nda 16-25 , 25-10 , 25-18 ve 25-17'lik setlerle 3-1 mağlup ederek seride durumu 1-1 ' e getirdi . İlk setini 25-16 kaybettiği karşılaşmanın ikinci setinde etkili servisler kullanan sarı-siyahlılar , teknik molasına 12-5 önde girdiği seti 25-10 almayı başardı . Etkili servis performansını üçüncü sette de sürdüren VakıfBank , teknik molasına 12-5 önde girdiği seti 25-18 alarak , karşılaşmada 2-1 öne geçti . Dördüncü sette rakibinin geri dönüşüne izin vermeyen VakıfBank , seti 25-17 , maçı da 3-1 kazanarak seride durumu eşitledi." generate_summary(main_news) #original summary -> "Vestel Venus Sultanlar Ligi final etabı ikinci karşılaşmasında VakıfBank kendi sahasında Eczacıbaşı VitrA'yı 3-1 mağlup etti ve seride durumu 1-1 ' e getirdi ." #output -> "CEV Avrupa Şampiyonlar Ligi'ni üçüncü olarak tamamlayan VakıfBank Kadın Voleybol Takımı, Vestel Venus Sultanlar Ligi final serisi ikinci maçında Eczacıbaşı VitrA'yı 3-1 mağlup ederek seride durumu 1-1'e getirdi." 
``` ### Example 2 ```python main_news="2023'te yerli tank motoru : Bir taraftan da tankın motorunu yerlileştirmeye çalıştıklarını ifade eden Öztürk , şu değerlendirmelerde bulundu : `` Bin 500 beygirlik , şanzımanıyla beraber motoru yerlileştirmeye çalışıyoruz . Bu da bir aksilik çıkmazsa ilk tankımızın üzerine 2023'te koyacağız . Bundan sonra hiçbir ülkeye bağımlılığımız kalmadan bu araçları üretmeye devam edeceğiz . Sorumluluğumuzun ağır olduğunu biliyoruz . Ülkemize hizmet etmeye çalışıyoruz . Bunu daha da ileriye götürmek için elimizden gelen çabayı sarf ediyoruz . Ama bu tek başınıza yapılan bir operasyon değil . Türkiye'deki yerli firmalarla beraber ortaklaşa bu işi yürütmeye çalışıyoruz." generate_summary(main_news) #output -> "TÜRKİYE'de bir taraftan da tankın motorunu yerlileştirmeye çalıştıklarını belirten Öztürk, `` Bin 500 beygirlik, şanzımanıyla beraber motoru yerlileştirmeye çalışıyoruz. Bu da bir aksilik çıkmazsa ilk tankımızın üzerine 2023'te koyacağız.'' dedi." ``` Created by Özcan Gündeş ✌️ --- Twitter: <a href="https://twitter.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/twitter.svg" alt="ozcangundes" height="30" width="30" /></a> Linkedin: <a href="https://www.linkedin.com/in/%C3%B6zcan-g%C3%BCnde%C5%9F-7693055b/" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/linkedin.svg" alt="13198517" height="30" width="30" /></a> Medium: <a href="https://medium.com/@ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/medium.svg" alt="@ozcangundes" height="30" width="30" /></a> Github: <a href="https://github.com/ozcangundes" target="blank"><img align="center" src="https://cdn.jsdelivr.net/npm/[email protected]/icons/github.svg" alt="@ozcangundes" height="30" width="30" /></a>
ArBert/albert-base-v2-finetuned-ner-agglo
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - tr datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Ozcan Gundes XLSR Wav2Vec2 Large Turkish results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice tr type: common_voice args: tr metrics: - name: Test WER type: wer value: 29.62 --- # Wav2Vec2-Large-XLSR-53-Turkish Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Turkish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "tr", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") model = Wav2Vec2ForCTC.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Turkish test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "tr", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") model = Wav2Vec2ForCTC.from_pretrained("ozcangundes/wav2vec2-large-xlsr-53-turkish") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\’\']' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the aduio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 29.62 % ## Training The Common Voice `train` and `validation` datasets were used for training. The script used for training can be found [here](https://colab.research.google.com/drive/1hesw9z_kFFINT93jBvGuFspOLrHx10AE?usp=sharing)
ArBert/albert-base-v2-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2021-08-06T14:26:53Z
--- tags: - text2text-generation library_name: generic --- random test repo
ArBert/bert-base-uncased-finetuned-ner-agglo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2021-05-03T02:46:38Z
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
---

# Transformer QG on SQuAD
HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)

**This is a reproduced version.**

More details: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)

## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
### Input Example
```
Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL].
```
> # Who wrote Harry Potter?

## Data settings
We report results under two dataset settings, as follows.

### SQuAD
- train: 87599
- validation: 10570

> [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)

### SQuAD NQG
- train: 75722
- dev: 10570
- test: 11877

> [Learning to Ask: Neural Question Generation for Reading Comprehension](https://arxiv.org/abs/1705.00106)

## Available models
- BART
- GPT2
- T5

## Experiments
We report scores with the `NQG Scorer` used in SQuAD NQG. Unless otherwise stated, model sizes default to "base".

### SQuAD
Model      |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
-----------|------|------|------|------|------|-------|
BART-HLSQG |54.67 |39.26 |30.34 |24.15 |25.43 |52.64  |
GPT2-HLSQG |49.31 |33.95 |25.41 |19.69 |22.29 |48.82  |
T5-HLSQG   |54.29 |39.22 |30.43 |24.26 |25.56 |53.11  |

### SQuAD NQG
Model                    |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
-------------------------|------|------|------|------|------|-------|
BERT-HLSQG (Chan et al.) |49.73 |34.60 |26.13 |20.33 |23.88 |48.23  |
BART-HLSQG               |54.12 |38.19 |28.84 |22.35 |24.55 |51.03  |
GPT2-HLSQG               |49.82 |33.69 |24.71 |18.63 |21.90 |47.60  |
T5-HLSQG                 |53.13 |37.60 |28.62 |22.38 |24.48 |51.20  |
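For the seq2seq variants listed above (BART/T5), the highlighted context can be fed through the standard `transformers` generation API. The sketch below is illustrative only: the checkpoint name is an assumption and should be replaced with the HLSQG checkpoint you actually use.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical checkpoint name -- substitute the BART/T5 HLSQG checkpoint you use.
model_name = "p208p2002/bart-squad-qg-hl"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Context with the answer span wrapped in [HL] tokens, following the input format above.
context = ("Harry Potter is a series of seven fantasy novels written by "
           "British author, [HL]J. K. Rowling[HL].")

inputs = tokenizer(context, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_length=32, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```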
ArBert/bert-base-uncased-finetuned-ner-gmm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
---

# Transformer QG on SQuAD
HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)

**This is a reproduced version.**

More details: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)

## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
### Input Example
```
Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL].
```
> # Who wrote Harry Potter?

## Data settings
We report results under two dataset settings, as follows.

### SQuAD
- train: 87599
- validation: 10570

> [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)

### SQuAD NQG
- train: 75722
- dev: 10570
- test: 11877

> [Learning to Ask: Neural Question Generation for Reading Comprehension](https://arxiv.org/abs/1705.00106)

## Available models
- BART
- GPT2
- T5

## Experiments
We report scores with the `NQG Scorer` used in SQuAD NQG. Unless otherwise stated, model sizes default to "base".

### SQuAD
Model      |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
-----------|------|------|------|------|------|-------|
BART-HLSQG |54.67 |39.26 |30.34 |24.15 |25.43 |52.64  |
GPT2-HLSQG |49.31 |33.95 |25.41 |19.69 |22.29 |48.82  |
T5-HLSQG   |54.29 |39.22 |30.43 |24.26 |25.56 |53.11  |

### SQuAD NQG
Model                    |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
-------------------------|------|------|------|------|------|-------|
BERT-HLSQG (Chan et al.) |49.73 |34.60 |26.13 |20.33 |23.88 |48.23  |
BART-HLSQG               |54.12 |38.19 |28.84 |22.35 |24.55 |51.03  |
GPT2-HLSQG               |49.82 |33.69 |24.71 |18.63 |21.90 |47.60  |
T5-HLSQG                 |53.13 |37.60 |28.62 |22.38 |24.48 |51.20  |
ArBert/bert-base-uncased-finetuned-ner-kmeans-twitter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
## Usage Please use BertTokenizerFast as tokenizer instead of AutoTokenizer. 請使用 BertTokenizerFast 而非 AutoTokenizer。 ``` from transformers import ( BertTokenizerFast, AutoModelForCausalLM, ) tokenizer = BertTokenizerFast.from_pretrained('p208p2002/gpt2-drcd-qg-hl') model = AutoModelForCausalLM.from_pretrained('p208p2002/gpt2-drcd-qg-hl') ``` ### Input Format ``` C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|] ``` ### Input Example ``` 哈利·波特是英國作家[HL]羅琳[HL]撰寫的七部幻想小說系列。 ``` > 誰撰寫哈利·波特?
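A minimal generation sketch is shown below. The decoding settings and the way the generated question is split from the context are assumptions for illustration; for the exact preprocessing conventions (special tokens, separators), follow the project's own inference code.

```python
import torch
from transformers import BertTokenizerFast, AutoModelForCausalLM

tokenizer = BertTokenizerFast.from_pretrained('p208p2002/gpt2-drcd-qg-hl')
model = AutoModelForCausalLM.from_pretrained('p208p2002/gpt2-drcd-qg-hl')
model.eval()

# Context with the answer span wrapped in [HL], as in the example above.
context = "哈利·波特是英國作家[HL]羅琳[HL]撰寫的七部幻想小說系列。"
input_ids = tokenizer(context, return_tensors="pt").input_ids

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=input_ids.shape[-1] + 32,  # leave room for the generated question
        num_beams=3,
        early_stopping=True,
        pad_token_id=tokenizer.pad_token_id,
    )

# Keep only the tokens generated after the context.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```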
ArBert/bert-base-uncased-finetuned-ner
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
---

# Transformer QG on SQuAD
HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)

**This is a reproduced version.**

More details: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)

## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
### Input Example
```
Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL].
```
> # Who wrote Harry Potter?

## Data settings
We report results under two dataset settings, as follows.

### SQuAD
- train: 87599
- validation: 10570

> [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)

### SQuAD NQG
- train: 75722
- dev: 10570
- test: 11877

> [Learning to Ask: Neural Question Generation for Reading Comprehension](https://arxiv.org/abs/1705.00106)

## Available models
- BART
- GPT2
- T5

## Experiments
We report scores with the `NQG Scorer` used in SQuAD NQG. Unless otherwise stated, model sizes default to "base".

### SQuAD
Model      |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
-----------|------|------|------|------|------|-------|
BART-HLSQG |54.67 |39.26 |30.34 |24.15 |25.43 |52.64  |
GPT2-HLSQG |49.31 |33.95 |25.41 |19.69 |22.29 |48.82  |
T5-HLSQG   |54.29 |39.22 |30.43 |24.26 |25.56 |53.11  |

### SQuAD NQG
Model                    |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
-------------------------|------|------|------|------|------|-------|
BERT-HLSQG (Chan et al.) |49.73 |34.60 |26.13 |20.33 |23.88 |48.23  |
BART-HLSQG               |54.12 |38.19 |28.84 |22.35 |24.55 |51.03  |
GPT2-HLSQG               |49.82 |33.69 |24.71 |18.63 |21.90 |47.60  |
T5-HLSQG                 |53.13 |37.60 |28.62 |22.38 |24.48 |51.20  |
ArBert/roberta-base-finetuned-ner-agglo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
# EQGG: Educational Question Group Generation <span> <a target="_blank" href="https://github.com/p208p2002/Neural-Question-Group-Generation"> <img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white"> </a> <a target="_blank" href="https://huggingface.co/p208p2002/qmst-qgg"> <img src="https://img.shields.io/badge/🤗 HF Model Hub-ffea00?style=for-the-badge&logoColor=white"> </a> <a target="_blank" href="https://huggingface.co/spaces/p208p2002/Question-Group-Generator"> <img src="https://img.shields.io/badge/💻 Live Demo-78ab78?style=for-the-badge&logoColor=white"> </a> </span>
ArBert/roberta-base-finetuned-ner-gmm-twitter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
---

# Transformer QG on SQuAD
HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)

**This is a reproduced version.**

More details: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)

## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
### Input Example
```
Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL].
```
> # Who wrote Harry Potter?

## Data settings
We report results under two dataset settings, as follows.

### SQuAD
- train: 87599
- validation: 10570

> [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)

### SQuAD NQG
- train: 75722
- dev: 10570
- test: 11877

> [Learning to Ask: Neural Question Generation for Reading Comprehension](https://arxiv.org/abs/1705.00106)

## Available models
- BART
- GPT2
- T5

## Experiments
We report scores with the `NQG Scorer` used in SQuAD NQG. Unless otherwise stated, model sizes default to "base".

### SQuAD
Model      |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
-----------|------|------|------|------|------|-------|
BART-HLSQG |54.67 |39.26 |30.34 |24.15 |25.43 |52.64  |
GPT2-HLSQG |49.31 |33.95 |25.41 |19.69 |22.29 |48.82  |
T5-HLSQG   |54.29 |39.22 |30.43 |24.26 |25.56 |53.11  |

### SQuAD NQG
Model                    |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
-------------------------|------|------|------|------|------|-------|
BERT-HLSQG (Chan et al.) |49.73 |34.60 |26.13 |20.33 |23.88 |48.23  |
BART-HLSQG               |54.12 |38.19 |28.84 |22.35 |24.55 |51.03  |
GPT2-HLSQG               |49.82 |33.69 |24.71 |18.63 |21.90 |47.60  |
T5-HLSQG                 |53.13 |37.60 |28.62 |22.38 |24.48 |51.20  |
ArBert/roberta-base-finetuned-ner-gmm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
datasets:
- squad
tags:
- question-generation
widget:
- text: "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
---

# Transformer QG on SQuAD
HLQG was proposed by [Ying-Hong Chan & Yao-Chung Fan. (2019). A Recurrent BERT-based Model for Question Generation.](https://www.aclweb.org/anthology/D19-5821/)

**This is a reproduced version.**

More details: [p208p2002/Transformer-QG-on-SQuAD](https://github.com/p208p2002/Transformer-QG-on-SQuAD)

## Usage
### Input Format
```
C' = [c1, c2, ..., [HL], a1, ..., a|A|, [HL], ..., c|C|]
```
### Input Example
```
Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL].
```
> # Who wrote Harry Potter?

## Data settings
We report results under two dataset settings, as follows.

### SQuAD
- train: 87599
- validation: 10570

> [SQuAD: 100,000+ Questions for Machine Comprehension of Text](https://arxiv.org/abs/1606.05250)

### SQuAD NQG
- train: 75722
- dev: 10570
- test: 11877

> [Learning to Ask: Neural Question Generation for Reading Comprehension](https://arxiv.org/abs/1705.00106)

## Available models
- BART
- GPT2
- T5

## Experiments
We report scores with the `NQG Scorer` used in SQuAD NQG. Unless otherwise stated, model sizes default to "base".

### SQuAD
Model      |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
-----------|------|------|------|------|------|-------|
BART-HLSQG |54.67 |39.26 |30.34 |24.15 |25.43 |52.64  |
GPT2-HLSQG |49.31 |33.95 |25.41 |19.69 |22.29 |48.82  |
T5-HLSQG   |54.29 |39.22 |30.43 |24.26 |25.56 |53.11  |

### SQuAD NQG
Model                    |Bleu 1|Bleu 2|Bleu 3|Bleu 4|METEOR|ROUGE-L|
-------------------------|------|------|------|------|------|-------|
BERT-HLSQG (Chan et al.) |49.73 |34.60 |26.13 |20.33 |23.88 |48.23  |
BART-HLSQG               |54.12 |38.19 |28.84 |22.35 |24.55 |51.03  |
GPT2-HLSQG               |49.82 |33.69 |24.71 |18.63 |21.90 |47.60  |
T5-HLSQG                 |53.13 |37.60 |28.62 |22.38 |24.48 |51.20  |
ArJakusz/DialoGPT-small-stark
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- title: Video Vision Transformer on medmnist emoji: 🧑‍⚕️ colorFrom: red colorTo: green sdk: gradio app_file: app.py pinned: false license: apache-2.0 library_name: keras --- # Configuration `title`: _string_ Display title for the Space `emoji`: _string_ Space emoji (emoji-only character allowed) `colorFrom`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `colorTo`: _string_ Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) `sdk`: _string_ Can be either `gradio`, `streamlit`, or `static` `sdk_version` : _string_ Only applicable for `streamlit` SDK. See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. `app_file`: _string_ Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). Path is relative to the root of the repository. `models`: _List[string]_ HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. Will be parsed automatically from your code if not specified here. `datasets`: _List[string]_ HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. Will be parsed automatically from your code if not specified here. `pinned`: _boolean_ Whether the Space stays on top of your list.
ArashEsk95/bert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2021-10-27T00:05:14Z
# BERT-STEM

BERT model fine-tuned on Science, Technology, Engineering and Mathematics (STEM) lessons.

## Install
To install from pip:
```
pip install bertstem
```

## Quickstart
To encode sentences and get the embedding matrix for embedding layers:

```python
import pandas as pd
from BERT_STEM.BertSTEM import *

bert = BertSTEM()

# Example dataframe with text in Spanish
data = {'col_1': [3, 2, 1],
        'col_2': ['hola como estan', 'alumnos queridos', 'vamos a hablar de matematicas']}
df = pd.DataFrame.from_dict(data)

# Encode sentences using BertSTEM:
bert._encode_df(df, column='col_2', encoding='sum')

# Get embedding matrix:
embedding_matrix = bert.get_embedding_matrix()
```

To use it from HuggingFace:

```python
import pandas as pd
import transformers
from BERT_STEM.Encode import *

# Download Spanish BERT-STEM:
model = transformers.BertModel.from_pretrained("pablouribe/bertstem")

# Download Spanish tokenizer:
tokenizer = transformers.BertTokenizerFast.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased",
                                                           do_lower_case=True,
                                                           add_special_tokens=False)

# Example dataframe with text in Spanish
data = {'col_1': [3, 2, 1],
        'col_2': ['hola como estan', 'alumnos queridos', 'vamos a hablar de matematicas']}
df = pd.DataFrame.from_dict(data)

# Encode sentences using BertSTEM:
sentence_encoder(df, model, tokenizer, column='col_2', encoding='sum')
```
ArenaGrenade/char-cnn
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - ab tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the COMMON_VOICE - AB dataset. It achieves the following results on the evaluation set: - Loss: 133.2596 - Wer: 19.1571 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2.dev0 - Tokenizers 0.11.0
Arghyad/Loki_small
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - es license: apache-2.0 tags: - automatic-speech-recognition - generated_from_trainer - hf-asr-leaderboard - mozilla-foundation/common_voice_7_0 - robust-speech-event datasets: - mozilla-foundation/common_voice_7_0 model-index: - name: xls-r-spanish-test results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7 type: mozilla-foundation/common_voice_7_0 args: es metrics: - name: Test WER type: wer value: 13.89 - name: Test CER type: cer value: 3.85 - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: es metrics: - name: Test WER type: wer value: 37.66 - name: Test CER type: cer value: 15.32 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: es metrics: - name: Test WER type: wer value: 41.17 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - ES dataset. It achieves the following results on the evaluation set: - Loss: 0.1461 - Wer: 1.0063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 2.953 | 0.15 | 1000 | 2.9528 | 1.0 | | 1.1519 | 0.3 | 2000 | 0.3735 | 1.0357 | | 1.0278 | 0.45 | 3000 | 0.2529 | 1.0390 | | 0.9922 | 0.61 | 4000 | 0.2208 | 1.0270 | | 0.9618 | 0.76 | 5000 | 0.2088 | 1.0294 | | 0.9364 | 0.91 | 6000 | 0.2019 | 1.0214 | | 0.9179 | 1.06 | 7000 | 0.1940 | 1.0294 | | 0.9154 | 1.21 | 8000 | 0.1915 | 1.0290 | | 0.8985 | 1.36 | 9000 | 0.1837 | 1.0211 | | 0.9055 | 1.51 | 10000 | 0.1838 | 1.0273 | | 0.8861 | 1.67 | 11000 | 0.1765 | 1.0139 | | 0.892 | 1.82 | 12000 | 0.1723 | 1.0188 | | 0.8778 | 1.97 | 13000 | 0.1735 | 1.0092 | | 0.8645 | 2.12 | 14000 | 0.1707 | 1.0106 | | 0.8595 | 2.27 | 15000 | 0.1713 | 1.0186 | | 0.8392 | 2.42 | 16000 | 0.1686 | 1.0053 | | 0.8436 | 2.57 | 17000 | 0.1653 | 1.0096 | | 0.8405 | 2.73 | 18000 | 0.1689 | 1.0077 | | 0.8382 | 2.88 | 19000 | 0.1645 | 1.0114 | | 0.8247 | 3.03 | 20000 | 0.1647 | 1.0078 | | 0.8219 | 3.18 | 21000 | 0.1611 | 1.0026 | | 0.8024 | 3.33 | 22000 | 0.1580 | 1.0062 | | 0.8087 | 3.48 | 23000 | 0.1578 | 1.0038 | | 0.8097 | 3.63 | 24000 | 0.1556 | 1.0057 | | 0.8094 | 3.79 | 25000 | 0.1552 | 1.0035 | | 0.7836 | 3.94 | 26000 | 0.1516 | 1.0052 | | 0.8042 | 4.09 | 27000 | 0.1515 | 1.0054 | | 0.7925 | 4.24 | 28000 | 0.1499 | 1.0031 | | 0.7855 | 4.39 | 29000 | 0.1490 | 1.0041 | | 0.7814 | 4.54 | 30000 | 0.1482 | 1.0068 | | 0.7859 | 4.69 | 31000 | 0.1460 | 1.0066 | | 0.7819 | 
4.85 | 32000 | 0.1464 | 1.0062 | | 0.7784 | 5.0 | 33000 | 0.1460 | 1.0063 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.2+cu102 - Datasets 1.18.3.dev0 - Tokenizers 0.11.0
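A minimal inference sketch with the ASR pipeline is shown below. The repo id is a placeholder, since the card does not state where this checkpoint is published, and the audio path is illustrative.

```python
from transformers import pipeline

# Placeholder repo id -- point this at wherever the fine-tuned checkpoint is hosted.
asr = pipeline("automatic-speech-recognition", model="<username>/xls-r-spanish-test")

# Path to a local Spanish audio clip (16 kHz mono works best).
print(asr("sample_es.wav")["text"])
```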
AriakimTaiyo/DialoGPT-revised-Kumiko
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-distilled-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9467741935483871 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-distilled-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2795 - Accuracy: 0.9468 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.4223 | 1.0 | 318 | 2.5556 | 0.7561 | | 1.9655 | 2.0 | 636 | 1.3075 | 0.8577 | | 1.0041 | 3.0 | 954 | 0.6970 | 0.9165 | | 0.5449 | 4.0 | 1272 | 0.4637 | 0.9339 | | 0.3424 | 5.0 | 1590 | 0.3630 | 0.9397 | | 0.247 | 6.0 | 1908 | 0.3225 | 0.9442 | | 0.1968 | 7.0 | 2226 | 0.2983 | 0.9458 | | 0.1693 | 8.0 | 2544 | 0.2866 | 0.9465 | | 0.1547 | 9.0 | 2862 | 0.2820 | 0.9468 | | 0.1477 | 10.0 | 3180 | 0.2795 | 0.9468 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
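A minimal intent-classification sketch is shown below; the repo id is a placeholder (the card does not state the published checkpoint path) and the example query is illustrative.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the published path of this checkpoint.
classifier = pipeline("text-classification",
                      model="<username>/distilbert-base-uncased-distilled-clinc")

# Each query is mapped to one of the 150 clinc_oos intents or to the out-of-scope class.
print(classifier("Please set a timer for ten minutes"))
```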
AriakimTaiyo/DialoGPT-small-Kumiko
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos args: plus metrics: - name: Accuracy type: accuracy value: 0.9174193548387096 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.7713 - Accuracy: 0.9174 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2892 | 1.0 | 318 | 3.2831 | 0.7426 | | 2.6244 | 2.0 | 636 | 1.8739 | 0.8335 | | 1.5442 | 3.0 | 954 | 1.1525 | 0.8926 | | 1.0096 | 4.0 | 1272 | 0.8569 | 0.91 | | 0.793 | 5.0 | 1590 | 0.7713 | 0.9174 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.16.1 - Tokenizers 0.10.3
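The reported accuracy can be checked against the `clinc_oos` validation split with a short evaluation sketch like the one below; the repo id is a placeholder for wherever this checkpoint is published.

```python
import numpy as np
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer

# Placeholder repo id -- substitute the published path of this checkpoint.
ckpt = "<username>/distilbert-base-uncased-finetuned-clinc"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

clinc = load_dataset("clinc_oos", "plus")
encoded = clinc["validation"].map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)
encoded = encoded.rename_column("intent", "labels")

# A Trainer with default arguments is used here only for convenient batched prediction.
preds = Trainer(model=model, tokenizer=tokenizer).predict(encoded)
accuracy = (np.argmax(preds.predictions, axis=-1) == preds.label_ids).mean()
print(f"validation accuracy: {accuracy:.4f}")
```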
Arkadiusz/Test-model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2020-12-10T09:57:14Z
--- language: id tags: - pipeline:summarization - summarization - t5 datasets: - indosum --- # Indonesian T5 Summarization Small Model Finetuned T5 small summarization model for Indonesian. ## Finetuning Corpus `t5-small-indonesian-summarization-cased` model is based on `t5-small-bahasa-summarization-cased` by [huseinzol05](https://huggingface.co/huseinzol05), finetuned using [indosum](https://github.com/kata-ai/indosum) dataset. ## Load Finetuned Model ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("panggi/t5-small-indonesian-summarization-cased") model = T5ForConditionalGeneration.from_pretrained("panggi/t5-small-indonesian-summarization-cased") ``` ## Code Sample ```python from transformers import T5Tokenizer, T5ForConditionalGeneration tokenizer = T5Tokenizer.from_pretrained("panggi/t5-small-indonesian-summarization-cased") model = T5ForConditionalGeneration.from_pretrained("panggi/t5-small-indonesian-summarization-cased") # https://www.sehatq.com/artikel/apa-itu-dispepsia-fungsional-ketahui-gejala-dan-faktor-risikonya ARTICLE_TO_SUMMARIZE = "Secara umum, dispepsia adalah kumpulan gejala pada saluran pencernaan seperti nyeri, sensasi terbakar, dan rasa tidak nyaman pada perut bagian atas. Pada beberapa kasus, dispepsia yang dialami seseorang tidak dapat diketahui penyebabnya. Jenis dispepsia ini disebut dengan dispepsia fungsional. Apa saja gejala dispepsia fungsional? Apa itu dispepsia fungsional? Dispepsia fungsional adalah kumpulan gejala tanpa sebab pada saluran pencernaan bagian atas. Gejala tersebut dapat berupa rasa sakit, nyeri, dan tak nyaman pada perut bagian atas atau ulu hati. Penderita dispepsia fungsional juga akan merasakan kenyang lebih cepat dan sensasi perut penuh berkepanjangan. Gejala-gejala tersebut bisa berlangsung selama sebulan atau lebih. Dispepsia ini memiliki nama “fungsional” karena kumpulan gejalanya tidak memiliki penyebab yang jelas. Dilihat dari fungsi dan struktur saluran pencernaan, dokter tidak menemukan hal yang salah. Namun, gejalanya bisa sangat mengganggu dan menyiksa. Dispepsia fungsional disebut juga dengan dispepsia nonulkus. Diperkirakan bahwa 20% masyarakat dunia menderita dispepsia fungsional. Kondisi ini berisiko tinggi dialami oleh wanita, perokok, dan orang yang mengonsumsi obat anti-peradangan nonsteroid (NSAID). Dispepsia fungsional bisa bersifat kronis dan mengganggu kehidupan penderitanya. Namun beruntung, ada beberapa strategi yang bisa diterapkan untuk mengendalikan gejala dispepsia ini. Strategi tersebut termasuk perubahan gaya hidup, obat-obatan, dan terapi.Ragam gejala dispepsia fungsional Gejala dispepsia fungsional dapat bervariasi antara satu pasien dengan pasien lain. Beberapa tanda yang bisa dirasakan seseorang, yaitu: Sensasi terbakar atau nyeri di saluran pencernaan bagian atas Perut kembung Cepat merasa kenyang walau baru makan sedikit Mual Muntah Bersendawa Rasa asam di mulut Penurunan berat badan Tekanan psikologis terkait dengan kondisi yang dialami Apa sebenarnya penyebab dispepsia fungsional? Sebagai penyakit fungsional, dokter mengkategorikan dispepsia ini sebagai penyakit yang tidak diketahui penyebabnya. Hanya saja, beberapa faktor bisa meningkatkan risiko seseorang terkena dispepsia fungsional. 
Faktor risiko tersebut, termasuk: Alergi terhadap zat tertentu Perubahan mikrobioma usus Infeksi, seperti yang dipicu oleh bakteriHelicobacter pylori Sekresi asam lambung yang tidak normal Peradangan pada saluran pencernaan bagian atas Gangguan pada fungsi lambung untuk mencerna makanan Pola makan tertentu Gaya hidup tidak sehat Stres Kecemasan atau depresi Efek samping pemakaian obat seperti obat antiinflamasi nonsteroid Penanganan untuk dispepsia fungsional Ada banyak pilihan pengobatan untuk dispepsia fungsional. Seperti yang disampaikan di atas, tidak ada penyebab tunggal dispepsia ini yang bisa diketahui. Gejala yang dialami antara satu pasien juga mungkin amat berbeda dari orang lain. Dengan demikian, jenis pengobatan dispepsia fungsional juga akan bervariasi. Beberapa pilihan strategi penanganan untuk dispepsia fungsional, meliputi: 1. Obat-obatan Ada beberapa jenis obat yang mungkin akan diberikan dokter, seperti Obat penetral asam lambung yang disebut penghambat reseptor H2 Obat penghambat produksi asam lambung yang disebut proton pump inhibitors Obat untuk mengendalikan gas di perut yang mengandung simetikon Antidepresan seperti amitriptyline Obat penguat kerongkongan yang disebut agen prokinetik Obat untuk pengosongan isi lambung seperti metoclopramide Antibiotik jika dokter mendeteksi adanya infeksi bakteri H. pylori 2. Anjuran terkait perubahan gaya hidup Selain obat-obatan, dokter akan memberikan rekomendasi perubahan gaya hidup yang harus diterapkan pasien. Tips terkait perubahan gaya hidup termasuk: Makan lebih sering namun dengan porsi yang lebih sedikit Menjauhi makanan berlemak karena memperlambat pengosongan makanan di lambung Menjauhi jenis makanan lain yang memicu gejala dispepsia, seperti makanan pedas, makanan tinggi asam, produk susu, dan produk kafein Menjauhi rokok Dokter juga akan meminta pasien untuk mencari cara untuk mengendalikan stres, tidur dengan kepala lebih tinggi, dan menjalankan usaha untuk mengendalikan berat badan. Apakah penyakit dispepsia itu berbahaya? Dispepsia, termasuk dispepsia fungsional, dapat menjadi kronis dengan gejala yang menyiksa. Jika tidak ditangani, dispepsia tentu dapat berbahaya dan mengganggu kehidupan pasien. Segera hubungi dokter apabila Anda merasakan gejala dispepsia, terlebih jika tidak merespons obat-obatan yang dijual bebas. Catatan dari SehatQ Dispepsia fungsional adalah kumpulan gejala pada saluran pencernaan bagian atas yang tidak diketahui penyebabnya. Dispepsia fungsional dapat ditangani dengan kombinasi obat-obatan dan perubahan gaya hidup. Jika masih memiliki pertanyaan terkait dispepsia fungsional, Anda bisa menanyakan ke dokter di aplikasi kesehatan keluarga SehatQ. Aplikasi SehatQ bisa diunduh gratis di Appstore dan Playstore yang berikan informasi penyakit terpercaya." # generate summary input_ids = tokenizer.encode(ARTICLE_TO_SUMMARIZE, return_tensors='pt') summary_ids = model.generate(input_ids, max_length=100, num_beams=2, repetition_penalty=2.5, length_penalty=1.0, early_stopping=True, no_repeat_ngram_size=2, use_cache=True) summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True) print(summary_text) ``` Output: ``` 'Dispepsia fungsional adalah kumpulan gejala tanpa sebab pada saluran pencernaan bagian atas. Gejala tersebut dapat berupa rasa sakit, nyeri, dan tak nyaman pada perut bagian atas. Penderita dispepsia fungsional juga akan merasakan kenyang lebih cepat dan sensasi perut penuh berkepanjangan. Gejala-gejala tersebut bisa berlangsung selama sebulan atau lebih. 
``` ## Acknowledgement Thanks to Immanuel Drexel for his article [Text Summarization, Extractive, T5, Bahasa Indonesia, Huggingface’s Transformers](https://medium.com/analytics-vidhya/text-summarization-t5-bahasa-indonesia-huggingfaces-transformers-ee9bfe368e2f)
Arnold/wav2vec2-large-xlsr-hausa2-demo-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "dataset:common_voice", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2021-11-23T09:28:18Z
--- language: - multilingual - ar - bg - de - el - en - es - fr - hi - it - ja - nl - pl - pt - ru - sw - th - tr - ur - vi - zh license: mit tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: xlm-roberta-base-language-detection results: [] --- # xlm-roberta-base-language-detection This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset. ## Model description This model is an XLM-RoBERTa transformer model with a classification head on top (i.e. a linear layer on top of the pooled output). For additional information please refer to the [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) model card or to the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. ## Intended uses & limitations You can directly use this model as a language detector, i.e. for sequence classification tasks. Currently, it supports the following 20 languages: `arabic (ar), bulgarian (bg), german (de), modern greek (el), english (en), spanish (es), french (fr), hindi (hi), italian (it), japanese (ja), dutch (nl), polish (pl), portuguese (pt), russian (ru), swahili (sw), thai (th), turkish (tr), urdu (ur), vietnamese (vi), and chinese (zh)` ## Training and evaluation data The model was fine-tuned on the [Language Identification](https://huggingface.co/datasets/papluca/language-identification#additional-information) dataset, which consists of text sequences in 20 languages. The training set contains 70k samples, while the validation and test sets 10k each. The average accuracy on the test set is **99.6%** (this matches the average macro/weighted F1-score being the test set perfectly balanced). A more detailed evaluation is provided by the following table. | Language | Precision | Recall | F1-score | support | |:--------:|:---------:|:------:|:--------:|:-------:| |ar |0.998 |0.996 |0.997 |500 | |bg |0.998 |0.964 |0.981 |500 | |de |0.998 |0.996 |0.997 |500 | |el |0.996 |1.000 |0.998 |500 | |en |1.000 |1.000 |1.000 |500 | |es |0.967 |1.000 |0.983 |500 | |fr |1.000 |1.000 |1.000 |500 | |hi |0.994 |0.992 |0.993 |500 | |it |1.000 |0.992 |0.996 |500 | |ja |0.996 |0.996 |0.996 |500 | |nl |1.000 |1.000 |1.000 |500 | |pl |1.000 |1.000 |1.000 |500 | |pt |0.988 |1.000 |0.994 |500 | |ru |1.000 |0.994 |0.997 |500 | |sw |1.000 |1.000 |1.000 |500 | |th |1.000 |0.998 |0.999 |500 | |tr |0.994 |0.992 |0.993 |500 | |ur |1.000 |1.000 |1.000 |500 | |vi |0.992 |1.000 |0.996 |500 | |zh |1.000 |1.000 |1.000 |500 | ### Benchmarks As a baseline to compare `xlm-roberta-base-language-detection` against, we have used the Python [langid](https://github.com/saffsd/langid.py) library. Since it comes pre-trained on 97 languages, we have used its `.set_languages()` method to constrain the language set to our 20 languages. The average accuracy of langid on the test set is **98.5%**. More details are provided by the table below. 
| Language | Precision | Recall | F1-score | support | |:--------:|:---------:|:------:|:--------:|:-------:| |ar |0.990 |0.970 |0.980 |500 | |bg |0.998 |0.964 |0.981 |500 | |de |0.992 |0.944 |0.967 |500 | |el |1.000 |0.998 |0.999 |500 | |en |1.000 |1.000 |1.000 |500 | |es |1.000 |0.968 |0.984 |500 | |fr |0.996 |1.000 |0.998 |500 | |hi |0.949 |0.976 |0.963 |500 | |it |0.990 |0.980 |0.985 |500 | |ja |0.927 |0.988 |0.956 |500 | |nl |0.980 |1.000 |0.990 |500 | |pl |0.986 |0.996 |0.991 |500 | |pt |0.950 |0.996 |0.973 |500 | |ru |0.996 |0.974 |0.985 |500 | |sw |1.000 |1.000 |1.000 |500 | |th |1.000 |0.996 |0.998 |500 | |tr |0.990 |0.968 |0.979 |500 | |ur |0.998 |0.996 |0.997 |500 | |vi |0.971 |0.990 |0.980 |500 | |zh |1.000 |1.000 |1.000 |500 | ## Training procedure Fine-tuning was done via the `Trainer` API. Here is the [Colab notebook](https://colab.research.google.com/drive/15LJTckS6gU3RQOmjLqxVNBmbsBdnUEvl?usp=sharing) with the training code. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results The validation results on the `valid` split of the Language Identification dataset are summarised here below. | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2492 | 1.0 | 1094 | 0.0149 | 0.9969 | 0.9969 | | 0.0101 | 2.0 | 2188 | 0.0103 | 0.9977 | 0.9977 | In short, it achieves the following results on the validation set: - Loss: 0.0101 - Accuracy: 0.9977 - F1: 0.9977 ### Framework versions - Transformers 4.12.5 - Pytorch 1.10.0+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
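A minimal usage sketch is shown below. The hub id is assumed from the dataset namespace referenced above and may need adjusting; the example sentences are illustrative.

```python
from transformers import pipeline

# Assumed hub id -- adjust if the checkpoint is published elsewhere.
detector = pipeline("text-classification", model="papluca/xlm-roberta-base-language-detection")

texts = [
    "Brevity is the soul of wit.",
    "Amor, ch'a nullo amato amar perdona.",
]
# Returns the top-scoring language code for each input (e.g. 'en', 'it').
print(detector(texts, truncation=True))
```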
ArpanZS/search_model
[ "joblib" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- conversational
---

# Jake Peralta DialoGPT Model
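A chat sketch in the usual DialoGPT style is shown below; the checkpoint name is a placeholder, since the card does not state where the model is published.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- point this at the fine-tuned DialoGPT checkpoint for this card.
checkpoint = "<username>/DialoGPT-small-jake-peralta"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

chat_history_ids = None
for step in range(3):
    # Append the end-of-sequence token so the model knows the user turn is over.
    new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_input_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_input_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # Decode only the newly generated bot turn.
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```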
Aruden/DialoGPT-medium-harrypotterall
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
---
tags:
- conversational
---

# Iron Man 1 DialoGPT Model
AryanLala/autonlp-Scientific_Title_Generator-34558227
[ "pytorch", "pegasus", "text2text-generation", "en", "dataset:AryanLala/autonlp-data-Scientific_Title_Generator", "transformers", "autonlp", "co2_eq_emissions", "autotrain_compatible", "has_space" ]
text2text-generation
{ "architectures": [ "PegasusForConditionalGeneration" ], "model_type": "pegasus", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
103
null
---
tags:
- conversational
---

# Harry Potter DialoGPT Model
Ashok/my-new-tokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language: en
license: apache-2.0
datasets:
- glue
---

# Bert-base-cased Fine-Tuned GLUE MRPC Demo

This checkpoint was initialized from the pre-trained checkpoint bert-base-cased and subsequently fine-tuned on the GLUE task MRPC using [this](https://colab.research.google.com/drive/162pW3wonGcMMrGxmA-jdxwy1rhqXd90x?usp=sharing) notebook.
Training was conducted for 3 epochs, using a linear decaying learning rate of 2e-05 and a total batch size of 32. The model has a final training loss of 0.103 and an accuracy of 0.831.
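An inference sketch for the resulting paraphrase classifier is shown below; the repo id is a placeholder (the card does not state the published checkpoint path) and the sentence pair is illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id -- substitute the published path of this fine-tuned checkpoint.
ckpt = "<username>/bert-base-cased-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt)

sent1 = "The company said quarterly profit rose 10 percent."
sent2 = "Quarterly earnings at the firm increased by about ten percent, the company said."

inputs = tokenizer(sent1, sent2, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
# In MRPC, label 1 means the pair is a paraphrase and label 0 means it is not.
print("paraphrase probability:", probs[0, 1].item())
```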
Augustvember/WokkaBot99
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2021-11-04T09:55:33Z
--- language: - tr tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: hello_2b_3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hello_2b_3 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-2b](https://huggingface.co/facebook/wav2vec2-xls-r-2b) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 1.5615 - Wer: 0.9808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - gradient_accumulation_steps: 8 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.6389 | 0.92 | 100 | 3.6218 | 1.0 | | 1.6676 | 1.85 | 200 | 3.2655 | 1.0 | | 0.3067 | 2.77 | 300 | 3.2273 | 1.0 | | 0.1924 | 3.7 | 400 | 3.0238 | 0.9999 | | 0.1777 | 4.63 | 500 | 2.1606 | 0.9991 | | 0.1481 | 5.55 | 600 | 1.8742 | 0.9982 | | 0.1128 | 6.48 | 700 | 2.0114 | 0.9994 | | 0.1806 | 7.4 | 800 | 1.9032 | 0.9984 | | 0.0399 | 8.33 | 900 | 2.0556 | 0.9996 | | 0.0729 | 9.26 | 1000 | 2.0515 | 0.9987 | | 0.0847 | 10.18 | 1100 | 2.2121 | 0.9995 | | 0.0777 | 11.11 | 1200 | 1.7002 | 0.9923 | | 0.0476 | 12.04 | 1300 | 1.5262 | 0.9792 | | 0.0518 | 12.96 | 1400 | 1.5990 | 0.9832 | | 0.071 | 13.88 | 1500 | 1.6326 | 0.9875 | | 0.0333 | 14.81 | 1600 | 1.5955 | 0.9870 | | 0.0369 | 15.74 | 1700 | 1.5577 | 0.9832 | | 0.0689 | 16.66 | 1800 | 1.5415 | 0.9839 | | 0.0227 | 17.59 | 1900 | 1.5450 | 0.9878 | | 0.0472 | 18.51 | 2000 | 1.5642 | 0.9846 | | 0.0214 | 19.44 | 2100 | 1.6103 | 0.9846 | | 0.0289 | 20.37 | 2200 | 1.6467 | 0.9898 | | 0.0182 | 21.29 | 2300 | 1.5268 | 0.9780 | | 0.0439 | 22.22 | 2400 | 1.6001 | 0.9818 | | 0.06 | 23.15 | 2500 | 1.5481 | 0.9813 | | 0.0351 | 24.07 | 2600 | 1.5672 | 0.9820 | | 0.0198 | 24.99 | 2700 | 1.6303 | 0.9856 | | 0.0328 | 25.92 | 2800 | 1.5958 | 0.9831 | | 0.0245 | 26.85 | 2900 | 1.5745 | 0.9809 | | 0.0885 | 27.77 | 3000 | 1.5455 | 0.9809 | | 0.0224 | 28.7 | 3100 | 1.5378 | 0.9824 | | 0.0223 | 29.63 | 3200 | 1.5642 | 0.9810 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0 - Datasets 1.15.2.dev0 - Tokenizers 0.10.3
Augustvember/your-model-name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2020-09-01T16:15:59Z
# Longformer2Roberta Summarization with 🤗 EncoderDecoder Framework This model is a Longformer2Roberta model fine-tuned on summarization. Longformer2Roberta is a `EncoderDecoderModel`, meaning that both the encoder is a `allenai/longformer-base-4096` model and the decoder is a `roberta-base` model. Leveraging the [EncoderDecoderFramework](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoder-decoder-models), the two pretrained models can simply be loaded into the framework via: ```python roberta2roberta = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-base") ``` The decoder of an `EncoderDecoder` model needs cross-attention layers and usually makes use of causal masking for auto-regressiv generation. Thus, ``longformer2roberta`` is consequently fined-tuned on the `CNN/Daily Mail`dataset and the resulting model `longformer2roberta-cnn_dailymail-fp16` is uploaded here. ## Example The model is by no means a state-of-the-art model, but nevertheless produces reasonable summarization results. It was mainly fine-tuned as a proof-of-concept for the 🤗 EncoderDecoder Framework. The model can be used as follows: ```python from transformers import LongformerTokenizer, EncoderDecoderModel model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16") tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096") article = """(CNN)James Holmes made his introduction to the world in a Colorado cinema filled with spectators watching a midnight showing of the new Batman movie, "The Dark Knight Rises," in June 2012. The moment became one of the deadliest shootings in U.S. history. Holmes is accused of opening fire on the crowd, killing 12 people and injuring or maiming 70 others in Aurora, a suburb of Denver. Holmes appeared like a comic book character: He resembled the Joker, with red-orange hair, similar to the late actor Heath Ledger\'s portrayal of the villain in an earlier Batman movie, authorities said. But Holmes was hardly a cartoon. Authorities said he wore body armor and carried several guns, including an AR-15 rifle, with lots of ammo. He also wore a gas mask. Holmes says he was insane at the time of the shootings, and that is his legal defense and court plea: not guilty by reason of insanity. Prosecutors aren\'t swayed and will seek the death penalty. Opening statements in his trial are scheduled to begin Monday. Holmes admits to the shootings but says he was suffering "a psychotic episode" at the time, according to court papers filed in July 2013 by the state public defenders, Daniel King and Tamara A. Brady. Evidence "revealed thus far in the case supports the defense\'s position that Mr. Holmes suffers from a severe mental illness and was in the throes of a psychotic episode when he committed the acts that resulted in the tragic loss of life and injuries sustained by moviegoers on July 20, 2012," the public defenders wrote. Holmes no longer looks like a dazed Joker, as he did in his first appearance before a judge in 2012. He appeared dramatically different in January when jury selection began for his trial: 9,000 potential jurors were summoned for duty, described as one of the nation\'s largest jury calls. Holmes now has a cleaner look, with a mustache, button-down shirt and khaki pants. In January, he had a beard and eyeglasses. If this new image sounds like one of an academician, it may be because Holmes, now 27, once was one. 
Just before the shooting, Holmes was a doctoral student in neuroscience, and he was studying how the brain works, with his schooling funded by a U.S. government grant. Yet for all his learning, Holmes apparently lacked the capacity to command his own mind, according to the case against him. A jury will ultimately decide Holmes\' fate. That panel is made up of 12 jurors and 12 alternates. They are 19 women and five men, and almost all are white and middle-aged. The trial could last until autumn. When jury summonses were issued in January, each potential juror stood a 0.2% chance of being selected, District Attorney George Brauchler told the final jury this month. He described the approaching trial as "four to five months of a horrible roller coaster through the worst haunted house you can imagine." The jury will have to render verdicts on each of the 165 counts against Holmes, including murder and attempted murder charges. Meanwhile, victims and their relatives are challenging all media outlets "to stop the gratuitous use of the name and likeness of mass killers, thereby depriving violent individuals the media celebrity and media spotlight they so crave," the No Notoriety group says. They are joined by victims from eight other mass shootings in recent U.S. history. Raised in central coastal California and in San Diego, James Eagan Holmes is the son of a mathematician father noted for his work at the FICO firm that provides credit scores and a registered nurse mother, according to the U-T San Diego newspaper. Holmes also has a sister, Chris, a musician, who\'s five years younger, the newspaper said. His childhood classmates remember him as a clean-cut, bespectacled boy with an "exemplary" character who "never gave any trouble, and never got in trouble himself," The Salinas Californian reported. His family then moved down the California coast, where Holmes grew up in the San Diego-area neighborhood of Rancho Peñasquitos, which a neighbor described as "kind of like Mayberry," the San Diego newspaper said. Holmes attended Westview High School, which says its school district sits in "a primarily middle- to upper-middle-income residential community." There, Holmes ran cross-country, played soccer and later worked at a biotechnology internship at the Salk Institute and Miramar College, which attracts academically talented students. By then, his peers described him as standoffish and a bit of a wiseacre, the San Diego newspaper said. Holmes attended college fairly close to home, in a neighboring area known as Southern California\'s "inland empire" because it\'s more than an hour\'s drive from the coast, in a warm, low-desert climate. He entered the University of California, Riverside, in 2006 as a scholarship student. In 2008 he was a summer camp counselor for disadvantaged children, age 7 to 14, at Camp Max Straus, run by Jewish Big Brothers Big Sisters of Los Angeles. He graduated from UC Riverside in 2010 with the highest honors and a bachelor\'s degree in neuroscience. "Academically, he was at the top of the top," Chancellor Timothy P. White said. He seemed destined for even higher achievement. By 2011, he had enrolled as a doctoral student in the neuroscience program at the University of Colorado Anschutz Medical Campus in Aurora, the largest academic health center in the Rocky Mountain region. The doctoral in neuroscience program attended by Holmes focuses on how the brain works, with an emphasis on processing of information, behavior, learning and memory. 
Holmes was one of six pre-thesis Ph.D. students in the program who were awarded a neuroscience training grant from the National Institutes of Health. The grant rewards outstanding neuroscientists who will make major contributions to neurobiology. A syllabus that listed Holmes as a student at the medical school shows he was to have delivered a presentation about microRNA biomarkers. But Holmes struggled, and his own mental health took an ominous turn. In March 2012, he told a classmate he wanted to kill people, and that he would do so "when his life was over," court documents said. Holmes was "denied access to the school after June 12, 2012, after he made threats to a professor," according to court documents. About that time, Holmes was a patient of University of Colorado psychiatrist Lynne Fenton. Fenton was so concerned about Holmes\' behavior that she mentioned it to her colleagues, saying he could be a danger to others, CNN affiliate KMGH-TV reported, citing sources with knowledge of the investigation. Fenton\'s concerns surfaced in early June, sources told the Denver station. Holmes began to fantasize about killing "a lot of people" in early June, nearly six weeks before the shootings, the station reported, citing unidentified sources familiar with the investigation. Holmes\' psychiatrist contacted several members of a "behavioral evaluation and threat assessment" team to say Holmes could be a danger to others, the station reported. At issue was whether to order Holmes held for 72 hours to be evaluated by mental health professionals, the station reported. "Fenton made initial phone calls about engaging the BETA team" in "the first 10 days" of June, but it "never came together" because in the period Fenton was having conversations with team members, Holmes began the process of dropping out of school, a source told KMGH. Defense attorneys have rejected the prosecution\'s assertions that Holmes was barred from campus. Citing statements from the university, Holmes\' attorneys have argued that his access was revoked because that\'s normal procedure when a student drops enrollment. What caused this turn for the worse for Holmes has yet to be clearly detailed. In the months before the shooting, he bought four weapons and more than 6,000 rounds of ammunition, authorities said. Police said he also booby-trapped his third-floor apartment with explosives, but police weren\'t fooled. After Holmes was caught in the cinema parking lot immediately after the shooting, bomb technicians went to the apartment and neutralized the explosives. No one was injured at the apartment building. Nine minutes before Holmes went into the movie theater, he called a University of Colorado switchboard, public defender Brady has said in court. The number he called can be used to get in contact with faculty members during off hours, Brady said. Court documents have also revealed that investigators have obtained text messages that Holmes exchanged with someone before the shooting. That person was not named, and the content of the texts has not been made public. According to The New York Times, Holmes sent a text message to a fellow graduate student, a woman, about two weeks before the shooting. She asked if he had left Aurora yet, reported the newspaper, which didn\'t identify her. No, he had two months left on his lease, Holmes wrote back, according to the Times. 
He asked if she had heard of "dysphoric mania," a form of bipolar disorder marked by the highs of mania and the dark and sometimes paranoid delusions of major depression. The woman asked if the disorder could be managed with treatment. "It was," Holmes wrote her, according to the Times. But he warned she should stay away from him "because I am bad news," the newspaper reported. It was her last contact with Holmes. After the shooting, Holmes\' family issued a brief statement: "Our hearts go out to those who were involved in this tragedy and to the families and friends of those involved," they said, without giving any information about their son. Since then, prosecutors have refused to offer a plea deal to Holmes. For Holmes, "justice is death," said Brauchler, the district attorney. In December, Holmes\' parents, who will be attending the trial, issued another statement: They asked that their son\'s life be spared and that he be sent to an institution for mentally ill people for the rest of his life, if he\'s found not guilty by reason of insanity. "He is not a monster," Robert and Arlene Holmes wrote, saying the death penalty is "morally wrong, especially when the condemned is mentally ill." "He is a human being gripped by a severe mental illness," the parents said. The matter will be settled by the jury. CNN\'s Ana Cabrera and Sara Weisfeldt contributed to this report from Denver.""" input_ids = tokenizer(article, return_tensors="pt").input_ids output_ids = model.generate(input_ids) print(tokenizer.decode(output_ids[0], skip_special_tokens=True)) # should produce # James Holmes, 27, is accused of opening fire on a Colorado theater. # He was a doctoral student at University of Colorado. # Holmes says he was suffering "a psychotic episode" at the time of the shooting. # Prosecutors won't say whether Holmes was barred from campus. ``` Such an article has a length of > 2000 tokens, which means that it cannot be handled correctly by Bert or Roberta encoders. ## Training script: **IMPORTANT**: In order for this code to work, make sure you checkout to the branch [more_general_trainer_metric](https://github.com/huggingface/transformers/tree/more_general_trainer_metric), which slightly adapts the `Trainer` for `EncoderDecoderModels` according to this PR: https://github.com/huggingface/transformers/pull/5840. The following code shows the complete training script that was used to fine-tune `longformer2roberta-cnn_dailymail-fp16 ` for reproducability. The training last ~90h on a standard GPU. 
```python #!/usr/bin/env python3 import nlp import logging from transformers import LongformerTokenizer, EncoderDecoderModel, Trainer, TrainingArguments logging.basicConfig(level=logging.INFO) model = EncoderDecoderModel.from_encoder_decoder_pretrained("allenai/longformer-base-4096", "roberta-base") tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096") # load train and validation data train_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="train") val_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="validation[:5%]") # load rouge for validation rouge = nlp.load_metric("rouge", experiment_id=0) # enable gradient checkpointing for longformer encoder model.encoder.config.gradient_checkpointing = True # set decoding params model.config.decoder_start_token_id = tokenizer.bos_token_id model.config.eos_token_id = tokenizer.eos_token_id model.config.max_length = 142 model.config.min_length = 56 model.config.no_repeat_ngram_size = 3 model.early_stopping = True model.length_penalty = 2.0 model.num_beams = 4 encoder_length = 2048 decoder_length = 128 batch_size = 16 # map data correctly def map_to_encoder_decoder_inputs(batch): # Tokenizer will automatically set [BOS] <text> [EOS] # cut off at Longformer at 2048 inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length) # force summarization <= 128 outputs = tokenizer(batch["highlights"], padding="max_length", truncation=True, max_length=decoder_length) batch["input_ids"] = inputs.input_ids batch["attention_mask"] = inputs.attention_mask # set 128 tokens to global attention batch["global_attention_mask"] = [[1 if i < 128 else 0 for i in range(sequence_length)] for sequence_length in len(inputs.input_ids) * [encoder_length]] batch["decoder_input_ids"] = outputs.input_ids batch["labels"] = outputs.input_ids.copy() # mask loss for padding batch["labels"] = [ [-100 if token == tokenizer.pad_token_id else token for token in labels] for labels in batch["labels"] ] batch["decoder_attention_mask"] = outputs.attention_mask assert all([len(x) == encoder_length for x in inputs.input_ids]) assert all([len(x) == decoder_length for x in outputs.input_ids]) return batch def compute_metrics(pred): labels_ids = pred.label_ids pred_ids = pred.predictions # all unnecessary tokens are removed pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True) labels_ids[labels_ids == -100] = tokenizer.eos_token_id label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True) rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid return { "rouge2_precision": round(rouge_output.precision, 4), "rouge2_recall": round(rouge_output.recall, 4), "rouge2_fmeasure": round(rouge_output.fmeasure, 4), } return { "rouge2_precision": round(rouge_output.precision, 4), "rouge2_recall": round(rouge_output.recall, 4), "rouge2_fmeasure": round(rouge_output.fmeasure, 4), } # make train dataset ready train_dataset = train_dataset.map( map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"], ) train_dataset.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"], ) # same for validation dataset val_dataset = val_dataset.map( map_to_encoder_decoder_inputs, batched=True, batch_size=batch_size, remove_columns=["article", "highlights"], ) val_dataset.set_format( type="torch", columns=["input_ids", 
"global_attention_mask", "attention_mask", "decoder_input_ids", "decoder_attention_mask", "labels"], ) # set training arguments - these params are not really tuned, feel free to change training_args = TrainingArguments( output_dir="./", per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, predict_from_generate=True, evaluate_during_training=True, do_train=True, do_eval=True, logging_steps=1000, save_steps=1000, eval_steps=1000, overwrite_output_dir=True, warmup_steps=2000, save_total_limit=3, fp16=True, ) # instantiate trainer trainer = Trainer( model=model, args=training_args, compute_metrics=compute_metrics, train_dataset=train_dataset, eval_dataset=val_dataset, ) # start training trainer.train() ``` ## Evaluation The following script evaluates the model on the test set of CNN/Daily Mail. ```python #!/usr/bin/env python3 import nlp import torch from transformers import LongformerTokenizer, EncoderDecoderModel tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096") model = EncoderDecoderModel.from_pretrained("patrickvonplaten/longformer2roberta-cnn_dailymail-fp16") model.to("cuda") test_dataset = nlp.load_dataset("cnn_dailymail", "3.0.0", split="test") batch_size = 32 encoder_length = 2048 decoder_length = 128 # map data correctly def generate_summary(batch): # Tokenizer will automatically set [BOS] <text> [EOS] # cut off at BERT max length 512 inputs = tokenizer(batch["article"], padding="max_length", truncation=True, max_length=encoder_length, return_tensors="pt") input_ids = inputs.input_ids.to("cuda") attention_mask = inputs.attention_mask.to("cuda") global_attention_mask = torch.zeros_like(attention_mask) global_attention_mask[:, :decoder_length] = 1 outputs = model.generate(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask) # all special tokens including will be removed output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True) batch["pred"] = output_str return batch results = test_dataset.map(generate_summary, batched=True, batch_size=batch_size, remove_columns=["article"]) # load rouge for validation rouge = nlp.load_metric("rouge") pred_str = results["pred"] label_str = results["highlights"] rouge_output = rouge.compute(predictions=pred_str, references=label_str, rouge_types=["rouge2"])["rouge2"].mid print(rouge_output) ``` The obtained results should be: | - | Rouge2 - mid -precision | Rouge2 - mid - recall | Rouge2 - mid - fmeasure | |----------|:-------------:|:------:|:------:| | **CNN/Daily Mail** | 12.39 | 15.05 | **13.21** | **Note** This model was trained to show how Longformer can be used as an Encoder model in a EncoderDecoder setup. Better results are obtained for datasets of much longer inputs.
Ayham/xlnet_gpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: unispeech-sat-base-timit-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # unispeech-sat-base-timit-ft This model is a fine-tuned version of [microsoft/unispeech-sat-base](https://huggingface.co/microsoft/unispeech-sat-base) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.6712 - Wer: 0.4101 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.2582 | 0.69 | 100 | 3.1651 | 1.0 | | 2.9542 | 1.38 | 200 | 2.9567 | 1.0 | | 2.9656 | 2.07 | 300 | 2.9195 | 1.0 | | 2.8946 | 2.76 | 400 | 2.8641 | 1.0 | | 1.9305 | 3.45 | 500 | 1.7680 | 1.0029 | | 1.0134 | 4.14 | 600 | 1.0184 | 0.6942 | | 0.8355 | 4.83 | 700 | 0.7769 | 0.6080 | | 0.8724 | 5.52 | 800 | 0.7182 | 0.6035 | | 0.5619 | 6.21 | 900 | 0.6823 | 0.5406 | | 0.4247 | 6.9 | 1000 | 0.6279 | 0.5237 | | 0.4257 | 7.59 | 1100 | 0.6056 | 0.5000 | | 0.5007 | 8.28 | 1200 | 0.5870 | 0.4918 | | 0.3854 | 8.97 | 1300 | 0.6200 | 0.4804 | | 0.264 | 9.66 | 1400 | 0.6030 | 0.4600 | | 0.1989 | 10.34 | 1500 | 0.6049 | 0.4588 | | 0.3196 | 11.03 | 1600 | 0.5946 | 0.4599 | | 0.2622 | 11.72 | 1700 | 0.6282 | 0.4422 | | 0.1697 | 12.41 | 1800 | 0.6559 | 0.4413 | | 0.1464 | 13.1 | 1900 | 0.6349 | 0.4328 | | 0.2277 | 13.79 | 2000 | 0.6133 | 0.4284 | | 0.221 | 14.48 | 2100 | 0.6617 | 0.4219 | | 0.1391 | 15.17 | 2200 | 0.6705 | 0.4235 | | 0.112 | 15.86 | 2300 | 0.6207 | 0.4218 | | 0.1717 | 16.55 | 2400 | 0.6749 | 0.4184 | | 0.2081 | 17.24 | 2500 | 0.6756 | 0.4169 | | 0.1244 | 17.93 | 2600 | 0.6750 | 0.4181 | | 0.0978 | 18.62 | 2700 | 0.6500 | 0.4115 | | 0.128 | 19.31 | 2800 | 0.6750 | 0.4106 | | 0.1791 | 20.0 | 2900 | 0.6712 | 0.4101 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
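The generated card above stops at hyperparameters and metrics; for completeness, below is a minimal inference sketch. It is an assumption-laden example rather than part of the original training setup: the repository id is a placeholder, and it assumes the fine-tuned checkpoint was saved together with its processor.

```python
from transformers import pipeline

# Placeholder repo id — replace with the actual Hub path of the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="<namespace>/unispeech-sat-base-timit-ft")

# Any 16 kHz mono audio file works here; TIMIT recordings are 16 kHz.
print(asr("sample.wav")["text"])
```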
Ayham/xlnet_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2021-10-21T12:12:23Z
--- tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: unispeech-sat-large-timit-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # unispeech-sat-large-timit-ft This model is a fine-tuned version of [microsoft/unispeech-sat-large](https://huggingface.co/microsoft/unispeech-sat-large) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 0.6074 - Wer: 0.3880 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 6.2516 | 0.69 | 100 | 5.8638 | 1.0 | | 2.9596 | 1.38 | 200 | 2.9550 | 1.0 | | 2.8831 | 2.07 | 300 | 2.8547 | 1.0 | | 2.3223 | 2.76 | 400 | 2.2044 | 1.0063 | | 1.2104 | 3.45 | 500 | 1.0845 | 0.7706 | | 0.6779 | 4.14 | 600 | 0.7342 | 0.5663 | | 0.6319 | 4.83 | 700 | 0.6054 | 0.4881 | | 0.664 | 5.52 | 800 | 0.5808 | 0.4913 | | 0.402 | 6.21 | 900 | 0.5647 | 0.4611 | | 0.3176 | 6.9 | 1000 | 0.5211 | 0.4440 | | 0.3392 | 7.59 | 1100 | 0.5187 | 0.4359 | | 0.3888 | 8.28 | 1200 | 0.5501 | 0.4391 | | 0.2874 | 8.97 | 1300 | 0.5249 | 0.4148 | | 0.208 | 9.66 | 1400 | 0.5407 | 0.4152 | | 0.1457 | 10.34 | 1500 | 0.5722 | 0.4155 | | 0.2375 | 11.03 | 1600 | 0.5780 | 0.4059 | | 0.2111 | 11.72 | 1700 | 0.5823 | 0.4094 | | 0.1422 | 12.41 | 1800 | 0.5754 | 0.3977 | | 0.125 | 13.1 | 1900 | 0.5784 | 0.4031 | | 0.1996 | 13.79 | 2000 | 0.5630 | 0.3956 | | 0.1747 | 14.48 | 2100 | 0.5880 | 0.3964 | | 0.1263 | 15.17 | 2200 | 0.5987 | 0.3951 | | 0.11 | 15.86 | 2300 | 0.5688 | 0.3964 | | 0.1411 | 16.55 | 2400 | 0.6223 | 0.3906 | | 0.1647 | 17.24 | 2500 | 0.6135 | 0.3960 | | 0.1162 | 17.93 | 2600 | 0.6224 | 0.3960 | | 0.098 | 18.62 | 2700 | 0.6017 | 0.3907 | | 0.1183 | 19.31 | 2800 | 0.6121 | 0.3885 | | 0.1717 | 20.0 | 2900 | 0.6074 | 0.3880 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
AyushPJ/ai-club-inductions-21-nlp-roBERTa-base-squad-v2
[ "pytorch", "roberta", "question-answering", "transformers", "generated_from_trainer", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: en datasets: - librispeech_asr tags: - speech license: apache-2.0 --- # Wav2Vec2-Base [Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16Khz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition. Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information. [Paper](https://arxiv.org/abs/2006.11477) Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli **Abstract** We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data. The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20. # Usage See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the model.
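For readers who only want representations from this pretrained (not fine-tuned) checkpoint, a minimal sketch is shown below. This is an illustrative addition on top of the card, not an official example; the silent dummy waveform stands in for real 16kHz speech.

```python
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")

speech = np.zeros(16_000, dtype=np.float32)  # 1 second of silence as a stand-in for real audio
inputs = feature_extractor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    hidden_states = model(**inputs).last_hidden_state  # shape: (batch, frames, hidden_size)
```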
AyushPJ/test-squad-trained-finetuned-squad
[ "pytorch", "tensorboard", "distilbert", "question-answering", "dataset:squad", "transformers", "generated_from_trainer", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - ab license: apache-2.0 tags: - speech-recognition - common_voice - generated_from_trainer model-index: - name: wav2vec2-common_voice-ab-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-ab-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - AB dataset. It achieves the following results on the evaluation set: - Loss: 15.1812 - Wer: 1.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 32 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
Azaghast/DistilBART-SCP-ParaSummarization
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "BartForConditionalGeneration" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 142, "min_length": 56, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - ta license: apache-2.0 tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-common_voice-tamil results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-tamil This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TA dataset. It achieves the following results on the evaluation set: - Loss: 1.1172 - Wer: 1.0070 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.84 | 100 | 4.0148 | 1.0 | | No log | 1.69 | 200 | 3.1738 | 1.0 | | No log | 2.54 | 300 | 2.5980 | 1.0236 | ### Framework versions - Transformers 4.17.0.dev0 - Pytorch 1.10.1+cu113 - Datasets 1.18.1.dev0 - Tokenizers 0.10.3
Azaghast/GPT2-SCP-ContainmentProcedures
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: - tr license: apache-2.0 tags: - speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-common_voice-tr-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-common_voice-tr-demo This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 0.3856 - Wer: 0.3556 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.7391 | 0.92 | 100 | 3.5760 | 1.0 | | 2.927 | 1.83 | 200 | 3.0796 | 0.9999 | | 0.9009 | 2.75 | 300 | 0.9278 | 0.8226 | | 0.6529 | 3.67 | 400 | 0.5926 | 0.6367 | | 0.3623 | 4.59 | 500 | 0.5372 | 0.5692 | | 0.2888 | 5.5 | 600 | 0.4407 | 0.4838 | | 0.285 | 6.42 | 700 | 0.4341 | 0.4694 | | 0.0842 | 7.34 | 800 | 0.4153 | 0.4302 | | 0.1415 | 8.26 | 900 | 0.4317 | 0.4136 | | 0.1552 | 9.17 | 1000 | 0.4145 | 0.4013 | | 0.1184 | 10.09 | 1100 | 0.4115 | 0.3844 | | 0.0556 | 11.01 | 1200 | 0.4182 | 0.3862 | | 0.0851 | 11.93 | 1300 | 0.3985 | 0.3688 | | 0.0961 | 12.84 | 1400 | 0.4030 | 0.3665 | | 0.0596 | 13.76 | 1500 | 0.3880 | 0.3631 | | 0.0917 | 14.68 | 1600 | 0.3878 | 0.3582 | ### Framework versions - Transformers 4.11.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.12.1 - Tokenizers 0.10.3
Azizun/Geotrend-10-epochs
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
2021-10-08T12:39:10Z
https://wandb.ai/patrickvonplaten/pretraining-wav2vec2/reports/Wav2Vec2-Large--VmlldzoxMTAwODM4?accessToken=wm3qzcnldrwsa31tkvf2pdmilw3f63d4twtffs86ou016xjbyilh55uoi3mo1qzc
BOON/electra_qa
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-turkish-demo-colab
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xlsr-turkish-demo-colab

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4055
- Wer: 0.4800

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.0179        | 4.21  | 400  | 1.4935          | 1.0249 |
| 0.7075        | 8.42  | 800  | 0.4546          | 0.6071 |
| 0.3072        | 12.63 | 1200 | 0.3947          | 0.5401 |
| 0.2145        | 16.84 | 1600 | 0.4049          | 0.5194 |
| 0.1647        | 21.05 | 2000 | 0.4199          | 0.5003 |
| 0.1338        | 25.26 | 2400 | 0.4144          | 0.4859 |
| 0.116         | 29.47 | 2800 | 0.4055          | 0.4800 |

### Framework versions

- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.13.3
- Tokenizers 0.10.3
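To make the values under "Training hyperparameters" easier to reuse, here is a rough sketch of how they map onto `TrainingArguments`. It is a hedged reconstruction rather than the exact script that produced this model; the output directory and any option not listed in the card are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-turkish-demo-colab",  # assumed output path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 16 * 2 = total train batch size of 32
    learning_rate=3e-4,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```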
BSC-LT/roberta-base-biomedical-es
[ "pytorch", "roberta", "fill-mask", "es", "arxiv:2109.03570", "arxiv:2109.07765", "transformers", "biomedical", "spanish", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
161
null
--- tags: - automatic-speech-recognition - timit_asr - generated_from_trainer datasets: - timit_asr model-index: - name: wav2vec2-random results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-random This model is a fine-tuned version of [patrickvonplaten/wav2vec2-base-random](https://huggingface.co/patrickvonplaten/wav2vec2-base-random) on the TIMIT_ASR - NA dataset. It achieves the following results on the evaluation set: - Loss: 3.1593 - Wer: 0.8364 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9043 | 0.69 | 100 | 2.9683 | 1.0 | | 2.8537 | 1.38 | 200 | 2.9281 | 0.9997 | | 2.7803 | 2.07 | 300 | 2.7330 | 0.9999 | | 2.6806 | 2.76 | 400 | 2.5792 | 1.0 | | 2.4136 | 3.45 | 500 | 2.4327 | 0.9948 | | 2.1682 | 4.14 | 600 | 2.3508 | 0.9877 | | 2.2577 | 4.83 | 700 | 2.2176 | 0.9773 | | 2.355 | 5.52 | 800 | 2.1753 | 0.9542 | | 1.8588 | 6.21 | 900 | 2.0650 | 0.8851 | | 1.6831 | 6.9 | 1000 | 2.0109 | 0.8618 | | 1.888 | 7.59 | 1100 | 1.9660 | 0.8418 | | 2.0066 | 8.28 | 1200 | 1.9847 | 0.8531 | | 1.7044 | 8.97 | 1300 | 1.9760 | 0.8527 | | 1.3168 | 9.66 | 1400 | 2.0708 | 0.8327 | | 1.2143 | 10.34 | 1500 | 2.0601 | 0.8419 | | 1.6189 | 11.03 | 1600 | 2.0960 | 0.8299 | | 1.13 | 11.72 | 1700 | 2.2540 | 0.8408 | | 0.8001 | 12.41 | 1800 | 2.4260 | 0.8306 | | 0.7769 | 13.1 | 1900 | 2.4182 | 0.8445 | | 1.2165 | 13.79 | 2000 | 2.3666 | 0.8284 | | 0.8026 | 14.48 | 2100 | 2.7118 | 0.8662 | | 0.5148 | 15.17 | 2200 | 2.7957 | 0.8526 | | 0.4921 | 15.86 | 2300 | 2.8244 | 0.8346 | | 0.7629 | 16.55 | 2400 | 2.8944 | 0.8370 | | 0.5762 | 17.24 | 2500 | 3.0335 | 0.8367 | | 0.4076 | 17.93 | 2600 | 3.0776 | 0.8358 | | 0.3395 | 18.62 | 2700 | 3.1572 | 0.8261 | | 0.4862 | 19.31 | 2800 | 3.1319 | 0.8414 | | 0.5061 | 20.0 | 2900 | 3.1593 | 0.8364 | ### Framework versions - Transformers 4.12.0.dev0 - Pytorch 1.8.1 - Datasets 1.14.1.dev0 - Tokenizers 0.10.3
BSC-LT/roberta-large-bne-capitel-ner
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "ner", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2021-12-01T16:28:44Z
--- language: - tr tags: - automatic-speech-recognition - common_voice - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-xls-r-phoneme-300m-tr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Wav2vec2-xls-r-phoneme-300m-tr This model is a fine-tuned version of [wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset. It achieves the following results on the evaluation set: - Loss: 0.6380 - PER: 0.1664 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 32 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 20.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | PER | |:-------------:|:-----:|:----:|:---------------:|:------:| | 13.6687 | 0.92 | 100 | 12.4567 | 1.0 | | 3.4219 | 1.83 | 200 | 3.4704 | 1.0 | | 3.1846 | 2.75 | 300 | 3.2281 | 0.9935 | | 2.0076 | 3.67 | 400 | 1.7415 | 0.5222 | | 1.0244 | 4.59 | 500 | 1.0290 | 0.3323 | | 0.7095 | 5.5 | 600 | 0.8424 | 0.2859 | | 0.619 | 6.42 | 700 | 0.7389 | 0.2232 | | 0.3541 | 7.34 | 800 | 0.7049 | 0.2043 | | 0.2946 | 8.26 | 900 | 0.7065 | 0.2153 | | 0.2868 | 9.17 | 1000 | 0.6840 | 0.2115 | | 0.2245 | 10.09 | 1100 | 0.6714 | 0.1952 | | 0.1394 | 11.01 | 1200 | 0.6864 | 0.1954 | | 0.1288 | 11.93 | 1300 | 0.6696 | 0.2017 | | 0.1568 | 12.84 | 1400 | 0.6468 | 0.1843 | | 0.1269 | 13.76 | 1500 | 0.6736 | 0.1965 | | 0.1101 | 14.68 | 1600 | 0.6689 | 0.1915 | | 0.1388 | 15.6 | 1700 | 0.6690 | 0.1782 | | 0.0739 | 16.51 | 1800 | 0.6364 | 0.1734 | | 0.0897 | 17.43 | 1900 | 0.6480 | 0.1748 | | 0.0795 | 18.35 | 2000 | 0.6356 | 0.1695 | | 0.0823 | 19.27 | 2100 | 0.6382 | 0.1685 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.8.1 - Datasets 1.16.2.dev0 - Tokenizers 0.10.3
BSC-LT/roberta-large-bne-capitel-pos
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "pos", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2021-11-05T11:49:01Z
--- license: apache-2.0 tags: - automatic-speech-recognition - multilingual_librispeech - generated_from_trainer datasets: - multilingual_librispeech model-index: - name: wav2vec2-xlsr-53-300m-mls-german-ft results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xlsr-53-300m-mls-german-ft This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the MULTILINGUAL_LIBRISPEECH - GERMAN 10h dataset. It achieves the following results on the evaluation set: - Loss: 0.2219 - Wer: 0.1288 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 200.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 2.9888 | 7.25 | 500 | 2.9192 | 1.0 | | 2.9313 | 14.49 | 1000 | 2.8698 | 1.0 | | 1.068 | 21.74 | 1500 | 0.2647 | 0.2565 | | 0.8151 | 28.99 | 2000 | 0.2067 | 0.1719 | | 0.764 | 36.23 | 2500 | 0.1975 | 0.1568 | | 0.7332 | 43.48 | 3000 | 0.1812 | 0.1463 | | 0.5952 | 50.72 | 3500 | 0.1923 | 0.1428 | | 0.6655 | 57.97 | 4000 | 0.1900 | 0.1404 | | 0.574 | 65.22 | 4500 | 0.1822 | 0.1370 | | 0.6211 | 72.46 | 5000 | 0.1937 | 0.1355 | | 0.5883 | 79.71 | 5500 | 0.1872 | 0.1335 | | 0.5666 | 86.96 | 6000 | 0.1874 | 0.1324 | | 0.5526 | 94.2 | 6500 | 0.1998 | 0.1368 | | 0.5671 | 101.45 | 7000 | 0.2054 | 0.1365 | | 0.5514 | 108.7 | 7500 | 0.1987 | 0.1340 | | 0.5382 | 115.94 | 8000 | 0.2104 | 0.1344 | | 0.5819 | 123.19 | 8500 | 0.2125 | 0.1334 | | 0.5277 | 130.43 | 9000 | 0.2063 | 0.1330 | | 0.4626 | 137.68 | 9500 | 0.2105 | 0.1310 | | 0.5842 | 144.93 | 10000 | 0.2087 | 0.1307 | | 0.535 | 152.17 | 10500 | 0.2137 | 0.1309 | | 0.5081 | 159.42 | 11000 | 0.2215 | 0.1302 | | 0.6033 | 166.67 | 11500 | 0.2162 | 0.1302 | | 0.5549 | 173.91 | 12000 | 0.2198 | 0.1286 | | 0.5389 | 181.16 | 12500 | 0.2241 | 0.1293 | | 0.4912 | 188.41 | 13000 | 0.2190 | 0.1290 | | 0.4671 | 195.65 | 13500 | 0.2218 | 0.1290 | ### Framework versions - Transformers 4.13.0.dev0 - Pytorch 1.10.0 - Datasets 1.15.2.dev0 - Tokenizers 0.10.3
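The Wer column reported above can be recomputed offline for any set of transcriptions with the `wer` metric from `datasets`; a small, generic sketch (the example strings are illustrative only, not taken from MLS):

```python
from datasets import load_metric

wer_metric = load_metric("wer")

predictions = ["hallo welt", "wie geht es dir"]
references = ["hallo welt", "wie geht es ihnen"]

print(wer_metric.compute(predictions=predictions, references=references))
```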
BW/TEST
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
2021-05-03T09:03:02Z
---
language: en
datasets:
- librispeech_asr
tags:
- automatic-speech-recognition
license: apache-2.0
---

## Test model

To test this model run the following code:

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC
import torchaudio
import torch

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2_tiny_random_robust")

def load_audio(batch):
    batch["samples"], _ = torchaudio.load(batch["file"])
    return batch

ds = ds.map(load_audio)

input_values = torch.nn.utils.rnn.pad_sequence([torch.tensor(x[0]) for x in ds["samples"][:10]], batch_first=True)

# forward
logits = model(input_values).logits
pred_ids = torch.argmax(logits, dim=-1)

# dummy loss
dummy_labels = pred_ids.clone()
dummy_labels[dummy_labels == model.config.pad_token_id] = 1  # can't have CTC blank token in label
dummy_labels = dummy_labels[:, -(dummy_labels.shape[1] // 4):]  # make sure labels are shorter to avoid "inf" loss (can still happen though...)
loss = model(input_values, labels=dummy_labels).loss
```
Babelscape/rebel-large
[ "pytorch", "safetensors", "bart", "text2text-generation", "en", "dataset:Babelscape/rebel-dataset", "transformers", "seq2seq", "relation-extraction", "license:cc-by-nc-sa-4.0", "model-index", "autotrain_compatible", "has_space" ]
text2text-generation
{ "architectures": [ "BartForConditionalGeneration" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9,458
2021-12-17T12:32:14Z
--- tags: - automatic-speech-recognition - librispeech_asr - generated_from_trainer - wavlm_libri_finetune model-index: - name: wavlm-libri-clean-100h-base-plus results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wavlm-libri-clean-100h-base-plus This model is a fine-tuned version of [microsoft/wavlm-base-plus](https://huggingface.co/microsoft/wavlm-base-plus) on the LIBRISPEECH_ASR - CLEAN dataset. It achieves the following results on the evaluation set: - Loss: 0.0819 - Wer: 0.0683 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 32 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.8877 | 0.34 | 300 | 2.8649 | 1.0 | | 0.2852 | 0.67 | 600 | 0.2196 | 0.1830 | | 0.1198 | 1.01 | 900 | 0.1438 | 0.1273 | | 0.0906 | 1.35 | 1200 | 0.1145 | 0.1035 | | 0.0729 | 1.68 | 1500 | 0.1055 | 0.0955 | | 0.0605 | 2.02 | 1800 | 0.0936 | 0.0859 | | 0.0402 | 2.35 | 2100 | 0.0885 | 0.0746 | | 0.0421 | 2.69 | 2400 | 0.0848 | 0.0700 | ### Framework versions - Transformers 4.15.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.16.2.dev0 - Tokenizers 0.10.3
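The generated card does not include a usage snippet; below is a minimal greedy-decoding sketch. The repository id is a placeholder (the Hub path is not stated in the card), and the LibriSpeech dummy split is used purely for illustration.

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import AutoProcessor, AutoModelForCTC

repo_id = "<namespace>/wavlm-libri-clean-100h-base-plus"  # placeholder — replace with the real Hub path
processor = AutoProcessor.from_pretrained(repo_id)
model = AutoModelForCTC.from_pretrained(repo_id)

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
speech, _ = torchaudio.load(ds[0]["file"])

inputs = processor(speech.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(processor.batch_decode(torch.argmax(logits, dim=-1)))
```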
Babelscape/wikineural-multilingual-ner
[ "pytorch", "tensorboard", "safetensors", "bert", "token-classification", "de", "en", "es", "fr", "it", "nl", "pl", "pt", "ru", "multilingual", "dataset:Babelscape/wikineural", "transformers", "named-entity-recognition", "sequence-tagger-model", "license:cc-by-nc-sa-4.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
41,608
null
--- tags: - automatic-speech-recognition - librispeech_asr - generated_from_trainer - wavlm_libri_finetune model-index: - name: wavlm-libri-clean-100h-base results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wavlm-libri-clean-100h-base This model is a fine-tuned version of [microsoft/wavlm-base](https://huggingface.co/microsoft/wavlm-base) on the LIBRISPEECH_ASR - CLEAN dataset. It achieves the following results on the evaluation set: - Loss: 0.0829 - Wer: 0.0675 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - distributed_type: multi-GPU - num_devices: 8 - total_train_batch_size: 32 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.8805 | 0.34 | 300 | 2.8686 | 1.0 | | 0.2459 | 0.67 | 600 | 0.1858 | 0.1554 | | 0.1114 | 1.01 | 900 | 0.1379 | 0.1191 | | 0.0867 | 1.35 | 1200 | 0.1130 | 0.0961 | | 0.0698 | 1.68 | 1500 | 0.1032 | 0.0877 | | 0.0663 | 2.02 | 1800 | 0.0959 | 0.0785 | | 0.0451 | 2.35 | 2100 | 0.0887 | 0.0748 | | 0.0392 | 2.69 | 2400 | 0.0859 | 0.0698 | ### Framework versions - Transformers 4.15.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 1.16.2.dev0 - Tokenizers 0.10.3
Badr/model1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2021-12-20T12:01:59Z
---
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_3_0
- generated_from_trainer
model-index:
- name: xls-r-300m-it-phoneme
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# xls-r-300m-it-phoneme

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the mozilla-foundation/common_voice_3_0 - IT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3899
- Wer: 0.0770

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.000075
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 150
- mixed_precision_training: Native AMP

### Training results

See Training Metrics Tab.

### Framework versions

- Transformers 4.15.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 1.16.2.dev0
- Tokenizers 0.10.3
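Since the per-step metrics live only in the Training Metrics tab, a quick way to sanity-check phoneme-level error offline is to treat space-separated phonemes as words and reuse a word-error-rate implementation. A tiny sketch, assuming `jiwer` is installed and that transcriptions are space-separated phoneme strings (the example strings are made up):

```python
import jiwer

reference = "k a z a"      # illustrative space-separated phoneme sequence
hypothesis = "k a s a"

per = jiwer.wer(reference, hypothesis)  # WER over phoneme tokens acts as a phoneme error rate
print(per)
```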
BatuhanYilmaz/code-search-net-tokenizer1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: eu datasets: - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: XLSR Wav2Vec2 Large 53 Basque by pcuenq results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice eu type: common_voice args: eu metrics: - name: Test WER type: wer value: 15.34 --- # Wav2Vec2-Large-XLSR-53-EU Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Basque using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "eu", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu") model = Wav2Vec2ForCTC.from_pretrained("pcuenq/wav2vec2-large-xlsr-53-eu") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Basque test data of Common Voice. 
```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "eu", split="test") wer = load_metric("wer") model_name = "pcuenq/wav2vec2-large-xlsr-53-eu" processor = Wav2Vec2Processor.from_pretrained(model_name) model = Wav2Vec2ForCTC.from_pretrained(model_name) model.to("cuda") ## Text pre-processing chars_to_ignore_regex = '[\,\¿\?\.\¡\!\-\;\:\"\“\%\‘\”\\…\’\ː\'\‹\›\`\´\®\—\→]' chars_to_ignore_pattern = re.compile(chars_to_ignore_regex) def remove_special_characters(batch): batch["sentence"] = chars_to_ignore_pattern.sub('', batch["sentence"]).lower() + " " return batch ## Audio pre-processing import librosa def speech_file_to_array_fn(batch): speech_array, sample_rate = torchaudio.load(batch["path"]) batch["speech"] = librosa.resample(speech_array.squeeze().numpy(), sample_rate, 16_000) return batch # Text transformation and audio resampling def cv_prepare(batch): batch = remove_special_characters(batch) batch = speech_file_to_array_fn(batch) return batch # Number of CPUs or None num_proc = 16 test_dataset = test_dataset.map(cv_prepare, remove_columns=['path'], num_proc=num_proc) def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=8) # WER Metric computation print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 15.34 % ## Training The Common Voice `train` and `validation` datasets were used for training. Training was performed for 22 + 20 epochs with the following parameters: - Batch size 16, 2 gradient accumulation steps. - Learning rate: 2.5e-4 - Activation dropout: 0.05 - Attention dropout: 0.1 - Hidden dropout: 0.05 - Feature proj. dropout: 0.05 - Mask time probability: 0.08 - Layer dropout: 0.05
BatuhanYilmaz/marian-finetuned-kde4-en-to-fr
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2020-12-10T14:27:26Z
--- language: "nl" thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png" tags: - Dutch - Flemish - RoBERTa - RobBERT license: mit datasets: - oscar - oscar (NL) - dbrd - lassy-ud - europarl-mono - conll2002 widget: - text: "Mijn naam is RobBERT en ik ben een taalmodel van de KU Leuven." --- <p align="center"> <img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo_with_name.png" alt="RobBERT: A Dutch RoBERTa-based Language Model" width="75%"> </p> # RobBERT: Dutch RoBERTa-based Language Model. [RobBERT](https://github.com/iPieter/RobBERT) is the state-of-the-art Dutch BERT model. It is a large pre-trained general Dutch language model that can be fine-tuned on a given dataset to perform any text classification, regression or token-tagging task. As such, it has been successfully used by many [researchers](https://scholar.google.com/scholar?oi=bibs&hl=en&cites=7180110604335112086) and [practitioners](https://huggingface.co/models?search=robbert) for achieving state-of-the-art performance for a wide range of Dutch natural language processing tasks,
BatuhanYilmaz/mlm-finetuned-imdb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-02-01T23:14:03Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.2+cu102 - Datasets 1.16.1 - Tokenizers 0.10.3
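Since the card gives no usage example, here is a minimal classification sketch; the repo id is a placeholder and the label names depend on how the emotion dataset was mapped during fine-tuning.

```python
# Sketch: score a sentence with the fine-tuned classifier (placeholder repo id).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-username/distilbert-base-uncased-finetuned-emotion",  # placeholder
)
print(classifier("I am thrilled that this finally works!"))
```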
Baybars/debateGPT
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2021-09-05T17:29:30Z
---
tags:
- conversational
---

# Morty DialoGPT Model
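The card is only a title; a typical DialoGPT-style chat loop would look like the sketch below, with the repo id as a placeholder assumption.

```python
# Chat-loop sketch for a DialoGPT-style model (placeholder repo id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/DialoGPT-morty"  # placeholder, not a confirmed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

chat_history_ids = None
for step in range(3):
    user_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = user_ids if chat_history_ids is None else torch.cat([chat_history_ids, user_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    print("Morty:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))
```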
Baybars/wav2vec2-xls-r-1b-turkish
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "tr", "dataset:common_voice", "transformers", "common_voice", "generated_from_trainer" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2020-11-24T05:31:03Z
# Exo-Machina

A deep language model, GPT-2, is trained on scientific manuscripts from NASA's Astrophysical Data System pertaining to extrasolar planets and the references therein. This pilot study uses the abstracts of each article as training data in order to explore correlations in scientific literature from a language perspective. A language model is a mathematical representation of an algorithm used to generate sequences in the same way a human would form sentences. Each word or letter in a sentence is encoded to a numerical value (e.g. using word2vec) and appended to a list, forming sequences that represent up to a paragraph's worth of text. The sequences are fed into the [GPT-2](https://openai.com/blog/better-language-models/) 117M model and trained for 500,000 steps with fine tuning. After training, the language model is used to generate new text from scratch and from user input.

- ### [Browse samples](https://pearsonkyle.github.io/Exo-Machina/)
- ### [Train a model on Google Colab](https://colab.research.google.com/drive/1Pur0rFi5YVdn7axYRacXWFMic4NxRexV?usp=sharing)

### Get started fast:

```python
from transformers import pipeline

exo = pipeline('text-generation', model='pearsonkyle/gpt2-exomachina', tokenizer='gpt2', config={'max_length': 1600})
machina = lambda text: exo(text)[0]['generated_text']

print(machina("Transiting exoplanets are"))
```

## Training Samples

~40,000 abstracts from NASA's Astrophysical Data System (ADS) and ArXiv.

![](https://huggingface.co/pearsonkyle/gpt2-exomachina/raw/main/exoplanet_keywords.png)

A few generated samples are below:

- *We can remotely sense an atmosphere by observing its reflected, transmitted, or emitted light in varying geometries. This light will contain information on the planetary conditions including* `temperature, pressure, composition, and cloud optical thickness. One such property that is important is...`
- *The reflectance of Earth's vegetation suggests* `that large, deciduous forest fires are composed of mostly dry, unprocessed material that is distributed in a nearly patchy fashion. The distributions of these fires are correlated with temperature, and also with vegetation...`
- *Directly imaged exoplanets probe* `key aspects of planet formation and evolution theory, as well as atmospheric and interior physics. These insights have led to numerous direct imaging instruments for exoplanets, many using polarimetry. However, current instruments take`

A roughly two-hour scrape retrieved articles from these publications:

```
5364 - The Astrophysical Journal
3365 - Astronomy and Astrophysics
2704 - Monthly Notices of the Royal Astronomical Society
1355 - The Astronomical Journal
617 - arXiv e-prints
498 - Icarus
388 - Publications of the Astronomical Society of the Pacific
324 - The Astrophysical Journal Supplement Series
245 - Nature
187 - Journal of Geophysical Research
167 - Science
145 - Astronomische Nachrichten
129 - Planetary and Space Science
114 - Space Science Reviews
109 - Geophysical Research Letters
```
Bharathdamu/wav2vec2-large-xls-r-300m-hindi2-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language: no
tags:
- translation
widget:
- text: "moscow says deployments in eastern europe increase tensions nato says russia has moved troops to belarus"
- text: "dette er en liten test som er laget av per egil kummervold han er en forsker som tidligere jobbet ved nasjonalbiblioteket"
- text: "tirsdag var travel for ukrainas president volodymyr zelenskyj på morgenen tok han imot polens statsminister mateusz morawiecki"
license: cc-by-4.0
---

# DeUnCaser

The output from Automatic Speech Recognition software is usually uncased and without any punctuation. This does not make for very readable text.

The DeUnCaser is a sequence-to-sequence byT5 model that reverses this process. It adds punctuation and capitalises the correct words. In some languages this means adding capital letters at the start of sentences and on all proper nouns; in other languages, like German, it means capitalising the first letter of all nouns. It will also attempt to add hyphens and parentheses if this makes the meaning clearer.

It is based on the multilingual byT5 base model. However, the current fine-tuning is done only on Norwegian. For other languages this will be mainly experimental. I will update it with support for other languages if there is any demand.
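The card does not show how to call the model; the following is a minimal sketch assuming a placeholder repo id and assuming the raw lowercase text is passed in without any task prefix (the card does not say whether one is required).

```python
# Sketch: restore casing and punctuation with a byT5-style seq2seq model (placeholder repo id).
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "your-username/deuncaser-norwegian"  # placeholder, not a confirmed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

text = "dette er en liten test som er laget av per egil kummervold"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```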
Bhumika/roberta-base-finetuned-sst2
[ "pytorch", "tensorboard", "roberta", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "model-index" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
85
null
---
license: cc
---

# Multi-Lingual DeUnCaser - Base byT5 Version

The output from Automatic Speech Recognition software is usually uncased and without any punctuation. This does not make for very readable text.

The DeUnCaser is a sequence-to-sequence model that reverses this process. It adds punctuation and capitalises the correct words. In some languages this means adding capital letters at the start of sentences and on all proper nouns; in other languages, like German, it means capitalising the first letter of all nouns. It will also attempt to add hyphens and parentheses if this makes the meaning clearer.

It is based on the multilingual T5 model and is fine-tuned for 100,000 steps. The fine-tuning script draws 100,000 training examples from each of the 44 Latin-alphabet languages that are part of both OSCAR and the mT5 training set: Afrikaans, Albanian, Basque, Catalan, Cebuano, Czech, Danish, Dutch, English, Esperanto, Estonian, Finnish, French, Galician, German, Haitian Creole, Hungarian, Icelandic, Indonesian, Irish, Italian, Kurdish, Latin, Latvian, Lithuanian, Luxembourgish, Malagasy, Malay, Maltese, Norwegian Bokmål, Norwegian Nynorsk, Polish, Portuguese, Romanian, Slovak, Spanish, Sundanese, Swahili, Swedish, Turkish, Uzbek, Vietnamese, Welsh, West Frisian.

A Notebook for creating the training corpus is available [here](https://colab.research.google.com/drive/1bkH94z-0wIQP8Pz0qXFndhoQsokU-78x?usp=sharing).
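As above, a usage sketch (not from the original card) with a placeholder repo id; the text2text-generation pipeline wraps the same tokenize-generate-decode steps.

```python
# Sketch: pipeline wrapper around the multilingual DeUnCaser (placeholder repo id).
from transformers import pipeline

deuncase = pipeline("text2text-generation", model="your-username/deuncaser-base")  # placeholder
print(deuncase("in november berlin hosted a workshop on speech recognition", max_length=256))
```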
Bia18/Beatriz
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: no license: cc-by-4.0 tags: - translation datasets: - oscar widget: - text: Skriv inn en tekst som du ønsker å oversette til en annen målform. --- # Norwegian mT5 - Translation Bokmål Nynorsk - Development ## Description This is the development version of the Bokmål-Nynorsk translator. If you want something that is stable, Please do run [this version](https://huggingface.co/pere/nb-nn-translation/) instead. Here is an example of how to use the model from Python ```python # Import libraries from transformers import T5ForConditionalGeneration, AutoTokenizer model = T5ForConditionalGeneration.from_pretrained('pere/nb-nn-dev',from_flax=True) tokenizer = AutoTokenizer.from_pretrained('pere/nb-nn-dev') #Encode the text text = "Hun vil ikke gi bort sine personlige data." inputs = tokenizer.encode(text, return_tensors="pt") outputs = model.generate(inputs, max_length=255, num_beams=4, early_stopping=True) #Decode and print the result print(tokenizer.decode(outputs[0])) ``` Or if you like to use the pipeline instead ```python # Set up the pipeline from transformers import pipeline translator = pipeline("translation", model='pere/nb-nn-dev') # Do the translation text = "Hun vil ikke gi bort sine personlige data." print(translator(text, max_length=255)) ```
Biasface/DDDC
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- language: no license: cc-by-4.0 tags: - translation datasets: - oscar widget: - text: Skriv inn en tekst som du ønsker å oversette til en annen målform. --- # Norwegian T5 - Translation Bokmål Nynorsk - Development ## Description This is the development version of the Bokmål-Nynorsk translator. If you want something that is stable, Please do run [this version](https://huggingface.co/pere/nb-nn-translation/) instead. Here is an example of how to use the model from Python ```python # Import libraries from transformers import T5ForConditionalGeneration, AutoTokenizer model = T5ForConditionalGeneration.from_pretrained('pere/nb-nn-dev',from_flax=True) tokenizer = AutoTokenizer.from_pretrained('pere/nb-nn-dev') #Encode the text text = "Hun vil ikke gi bort sine personlige data." inputs = tokenizer.encode(text, return_tensors="pt") outputs = model.generate(inputs, max_length=255, num_beams=4, early_stopping=True) #Decode and print the result print(tokenizer.decode(outputs[0])) ``` Or if you like to use the pipeline instead ```python # Set up the pipeline from transformers import pipeline translator = pipeline("translation", model='pere/nb-nn-dev') # Do the translation text = "Hun vil ikke gi bort sine personlige data." print(translator(text, max_length=255)) ```
Biasface/DDDC2
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
---
language: no
license: cc-by-4.0
tags:
- translation
datasets:
- oscar
widget:
- text: Skriv inn en tekst som du ønsker å oversette til en annen målform.
---

# 🇳🇴 Bokmål ⇔ Nynorsk 🇳🇴

Norwegian has two relatively similar written languages: Bokmål and Nynorsk. Historically, Nynorsk is a written norm based on dialects curated by the linguist Ivar Aasen in the mid-to-late 1800s, whereas Bokmål is a gradual 'Norwegization' of written Danish. The two written languages are considered equal, and citizens have a right to receive public service information in their primary and preferred language. Even though this right has been around for a long time, only between 5 and 10% of Norwegian texts are written in Nynorsk. Nynorsk is therefore a low-resource language within a low-resource language.

Apart from some word-list-based engines, there are no working off-the-shelf machine-learning-based translation models. Translation between Bokmål and Nynorsk is not available in Google Translate.

## Demo

| | |
|---|---|
| Widget | Try the widget in the top right corner |
| Huggingface Spaces | [Spaces Demo](https://huggingface.co/spaces/NbAiLab/nb2nn) |
| | |

## Pretraining a T5-base

There is an [mt5](https://huggingface.co/google/mt5-base) that includes Norwegian. Unfortunately, only a very small part of this is Nynorsk; there is only around 1GB of Nynorsk text in mC4. Despite this, the mt5 also gives a BLEU score above 80. During the project we extracted all available Nynorsk text from the [Norwegian Colossal Corpus](https://github.com/NBAiLab/notram/blob/master/guides/corpus_v2_summary.md) at the National Library of Norway, and matched it (by material type, i.e. books, newspapers and so on) with an equal amount of Bokmål. The corpus collection is described [here](https://github.com/NBAiLab/notram/blob/master/guides/nb_nn_balanced_corpus.md) and the total size is 19GB.

## Finetuning - BLEU-SCORE 88.17 🎉

The central finetuning data of the project has been 200k translation units (TU), i.e. aligned pairs of sentences in the respective languages extracted from textbooks of various subjects and newspapers. Training for [10] epochs with a learning rate of [7e-4], a batch size of [32] and a max source and target length of [512], finetuning reached a SACREBLEU score of [88.03] during training and a test score of [**88.17**] after training.

## This is not a translator

We found that we were able to get an almost identical BLEU score by training it in both directions and letting the model decide whether the input is Bokmål or Nynorsk. This way we can train one model instead of two. We call it a language switcher.

## Future work

The following Google Docs Add-on is currently pending approval.

![Add-on](bm2nn_demo.gif)

## How to use the model

```python
# Set up the pipeline
from transformers import pipeline
translator = pipeline("translation", model='pere/nb-nn-translation')

# Do the translation
text = "Hun vil ikke gi bort sine personlige data."
print(translator(text, max_length=255))
```
BigDaddyNe1L/Hhaa
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
language: no
license: cc-by-4.0
tags:
- norwegian
- GPT2
- causal language modeling
---

# Norwegian GPT-2 - Social

## Description

Experimental Norwegian GPT-2 model trained on a 37GB, mainly social, corpus.

The following sub-corpora are used:
```bash
wikipedia_download_nb.jsonl
wikipedia_download_nn.jsonl
newspapers_online_nb.jsonl
newspapers_online_nn.jsonl
twitter_2016_2018_no.jsonl
twitter_news_2016_2018_no.jsonl
open_subtitles_no.jsonl
facebook_no.jsonl
reddit_no.jsonl
vgdebatt_no.jsonl
```
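No usage example is included in the card; a minimal generation sketch, assuming a placeholder repo id, would be:

```python
# Sketch: sample Norwegian text from the social-corpus GPT-2 (placeholder repo id).
from transformers import pipeline

generator = pipeline("text-generation", model="your-username/norwegian-gpt2-social")  # placeholder
print(generator("Dette er en", max_length=40, do_sample=True, top_p=0.95)[0]["generated_text"])
```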
BigSalmon/BestMask2
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2021-06-28T20:43:03Z
---
language: no
license: cc-by-4.0
tags:
- norwegian
- GPT2
- causal language modeling
datasets:
- oscar
---

# Norwegian GPT-2 - Oscar

## Description

This is a sample reference model trained only on the Oscar Corpus for a day on a TPU v3-8. It is a model pretrained on the Norwegian language using a causal language modeling (CLM) objective.
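A minimal generation sketch (placeholder repo id; greedy decoding used here for brevity):

```python
# Sketch: greedy generation with the Oscar reference model (placeholder repo id).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/norwegian-gpt2-oscar"  # placeholder, not a confirmed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

input_ids = tokenizer.encode("Nasjonalbiblioteket er", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```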
BigSalmon/DaBlank
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
4
null
# Norwegian GPTNeo Blue

The first Norwegian GPTNeo model. This one is trained only on an administrative corpus.
BigSalmon/FormalBerta3
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
Same as norwegian-roberta-base, but with a higher learning rate and batch size.
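As a usage illustration (not part of the original description), a fill-mask sketch with a placeholder repo id:

```python
# Sketch: query the masked-language model (placeholder repo id; RoBERTa uses the <mask> token).
from transformers import pipeline

fill = pipeline("fill-mask", model="your-username/norwegian-roberta-base-highlr")  # placeholder
for candidate in fill("Nasjonalbiblioteket ligger i <mask>."):
    print(candidate["token_str"], round(candidate["score"], 3))
```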
BigSalmon/FormalRobertaaa
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- language: no license: cc-by-4.0 tags: - seq2seq datasets: - Norwegian Nynorsk/Bokmål --- # 🇳🇴 Norwegian T5 Base model Trained on the NCC🇳🇴 This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be finetuned on a specific task before being used for anything. The following setting were used in training: ```bash ./run_t5_mlm_flax_streaming.py \ --output_dir="./" \ --model_type="t5" \ --config_name="./" \ --tokenizer_name="./" \ --dataset_name="pere/norwegian_colossal_corpus_v2_short100k" \ --max_seq_length="512" \ --weight_decay="0.01" \ --per_device_train_batch_size="32" \ --per_device_eval_batch_size="32" \ --learning_rate="8e-3" \ --warmup_steps="0" \ --overwrite_output_dir \ --cache_dir /mnt/disks/flaxdisk/cache/ \ --num_train_epochs="5" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --logging_steps="500" \ --num_train_steps="1000000" \ --num_eval_samples="5000" \ --save_steps="5000" \ --eval_steps="5000" \ --preprocessing_num_workers 96 \ --adafactor \ --push_to_hub ```
BigSalmon/GPT2HardArticleEasyArticle
[ "pytorch", "jax", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: no license: cc-by-4.0 tags: - seq2seq datasets: - Norwegian Nynorsk/Bokmål --- # 🇳🇴 Norwegian T5 Base model Trained on the NCC🇳🇴 This is a Norwegian T5-base model trained on the Norwegian Colossal Corpus (NCC) on a TPU v3-8. It needs to be finetuned on a specific task before being used for anything. Currently the model is training. It is expected that it should be finished by the end of August 2021. The following setting were used in training: ```bash ./run_t5_mlm_flax.py \ --output_dir="./" \ --model_type="t5" \ --config_name="./" \ --tokenizer_name="./" \ --train_file /mnt/disks/flaxdisk/corpus/norwegian_colossal_corpus_train.json \ --validation_file /mnt/disks/flaxdisk/corpus/norwegian_colossal_corpus_validation.json \ --max_seq_length="128" \ --weight_decay="0.01" \ --per_device_train_batch_size="128" \ --per_device_eval_batch_size="128" \ --learning_rate="8e-3" \ --warmup_steps="2000" \ --overwrite_output_dir \ --cache_dir /mnt/disks/flaxdisk/cache/ \ --num_train_epochs="3" \ --adam_beta1="0.9" \ --adam_beta2="0.98" \ --logging_steps="100" \ --save_steps="2500" \ --eval_steps="2500" \ --preprocessing_num_workers 96 \ --adafactor \ --push_to_hub ```
BigSalmon/GPTHeHe
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: no license: cc-by-4.0 tags: - summary datasets: - oscar widget: - text: 'translate Bokmål to Nynorsk: Dette er en test!' --- # Norwegian T5 - small - Oscar ## Description This is a sample reference model trained only on the Oscar Corpus for a day on a TPU v3-8. Do not use this model as anything other than a simple reference point.
BigSalmon/GPTNeo350MInformalToFormalLincoln3
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- language: - ab tags: - automatic-speech-recognition - mozilla-foundation/common_voice_7_0 - generated_from_trainer datasets: - common_voice model-index: - name: '' results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # This model is a fine-tuned version of [hf-test/xls-r-dummy](https://huggingface.co/hf-test/xls-r-dummy) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - AB dataset. It achieves the following results on the evaluation set: - Loss: 156.8789 - Wer: 1.3456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 10 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.0.dev0 - Pytorch 1.10.1+cu102 - Datasets 1.17.1.dev0 - Tokenizers 0.11.0
BigSalmon/GPTT
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - multiple-choice - mbert - persian - farsi pipeline_tag: text-classification license: cc-by-nc-sa-4.0 datasets: - parsinlu metrics: - accuracy --- # Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی) This is a mbert-based model for multiple-choice question answering. Here is an example of how you can run this model: ```python from typing import List import torch from transformers import AutoConfig, AutoModelForMultipleChoice, AutoTokenizer model_name = "persiannlp/mbert-base-parsinlu-multiple-choice" tokenizer = AutoTokenizer.from_pretrained(model_name) config = AutoConfig.from_pretrained(model_name) model = AutoModelForMultipleChoice.from_pretrained(model_name, config=config) def run_model(question: str, candicates: List[str]): assert len(candicates) == 4, "you need four candidates" choices_inputs = [] for c in candicates: text_a = "" # empty context text_b = question + " " + c inputs = tokenizer( text_a, text_b, add_special_tokens=True, max_length=128, padding="max_length", truncation=True, return_overflowing_tokens=True, ) choices_inputs.append(inputs) input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs]) output = model(input_ids=input_ids) print(output) return output run_model(question="وسیع ترین کشور جهان کدام است؟", candicates=["آمریکا", "کانادا", "روسیه", "چین"]) run_model(question="طامع یعنی ؟", candicates=["آزمند", "خوش شانس", "محتاج", "مطمئن"]) run_model( question="زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده ", candicates=["روز اول", "روز دوم", "روز سوم", "هیچکدام"]) ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/InfillFormalLincoln
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - multiple-choice - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu metrics: - accuracy --- # Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی) This is a mT5-based model for multiple-choice question answering. Here is an example of how you can run this model: ```python from transformers import MT5ForConditionalGeneration, MT5Tokenizer model_size = "base" model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(input_string, **generator_args): input_ids = tokenizer.encode(input_string, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین") run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن") run_model( "زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام") ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/InformalToFormalLincoln14
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - machine-translation - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu metrics: - sacrebleu --- # Machine Translation (ترجمه‌ی ماشینی) This is an mT5-based model for machine translation (Persian -> English). Here is an example of how you can run this model: ```python from transformers import MT5ForConditionalGeneration, MT5Tokenizer model_size = "base" model_name = f"persiannlp/mt5-{model_size}-parsinlu-opus-translation_fa_en" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(input_string, **generator_args): input_ids = tokenizer.encode(input_string, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model("ستایش خدای را که پروردگار جهانیان است.") run_model("در هاید پارک کرنر بر گلدانی ایستاده موعظه می‌کند؛") run_model("وی از تمامی بلاگرها، سازمان‌ها و افرادی که از وی پشتیبانی کرده‌اند، تشکر کرد.") run_model("مشابه سال ۲۰۰۱، تولید آمونیاک بی آب در ایالات متحده در سال ۲۰۰۰ تقریباً ۱۷،۴۰۰،۰۰۰ تن (معادل بدون آب) با مصرف ظاهری ۲۲،۰۰۰،۰۰۰ تن و حدود ۴۶۰۰۰۰۰ با واردات خالص مواجه شد. ") run_model("می خواهم دکترای علوم کامپیوتر راجع به شبکه های اجتماعی را دنبال کنم، چالش حل نشده در شبکه های اجتماعی چیست؟") ``` which should give the following: ``` ['the admiration of God, which is the Lord of the world.'] ['At the Ford Park, the Crawford Park stands on a vase;'] ['He thanked all the bloggers, the organizations, and the people who supported him'] ['similar to the year 2001, the economy of ammonia in the United States in the'] ['I want to follow the computer experts on social networks, what is the unsolved problem in'] ``` which should give the following: ``` ['Adoration of God, the Lord of the world.'] ['At the High End of the Park, Conrad stands on a vase preaching;'] ['She thanked all the bloggers, organizations, and men who had supported her.'] ['In 2000, the lack of water ammonia in the United States was almost'] ['I want to follow the computer science doctorate on social networks. What is the unsolved challenge'] ``` Which should produce the following: ``` ['the praise of God, the Lord of the world.'] ['At the Hyde Park Corner, Carpenter is preaching on a vase;'] ['He thanked all the bloggers, organizations, and people who had supported him.'] ['Similarly in 2001, the production of waterless ammonia in the United States was'] ['I want to pursue my degree in Computer Science on social networks, what is the'] ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/InformalToFormalLincoln15
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - query-paraphrasing - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu - qqp metrics: - accuracy --- # Detection of Paraphrased Queries (تشخصیص سوالات هم‌معنی) This is a model for detection of paraphrased queries. Here is an example of how you can run this model: ```python from transformers import MT5Config, MT5ForConditionalGeneration, MT5Tokenizer model_name = "persiannlp/mt5-base-parsinlu-qqp-query-paraphrasing" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(q1, q2, **generator_args): input_ids = tokenizer.encode(f"{q1}<sep>{q2}", return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model("چه چیزی باعث پوکی استخوان می شود؟", "چه چیزی باعث مقاومت استخوان در برابر ضربه می شود؟") run_model("من دارم به این فکر میکنم چرا ساعت هفت نمیشه؟", "چرا من ساده فکر میکردم به عشقت پابندی؟") run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟") run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟") run_model("شناسنامه در چه سالی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟") run_model("سیب زمینی چه زمانی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟") ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/InformalToFormalLincoln17
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2021-03-06T00:07:13Z
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - entailment - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu - snli metrics: - accuracy --- # Textual Entailment (مدل برای پاسخ به استلزام منطقی) This is a model for textual entailment problems. Here is an example of how you can run this model: ```python from transformers import MT5ForConditionalGeneration, MT5Tokenizer model_size="base" model_name = f"persiannlp/mt5-{model_size}-parsinlu-snli-entailment" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(premise, hypothesis, **generator_args): input_ids = tokenizer.encode(f"{premise}<sep>{hypothesis}", return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model( "این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.", "در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد." ) run_model( "آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟", "هیچ کودکی هرگز نمی خواهد سرگرم شود.", ) run_model( "ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم", "علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم." ) ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/InformalToFormalLincoln18
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2021-03-10T09:11:15Z
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - reading-comprehension - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu - squad metrics: - f1 --- # Reading Comprehension (مدل برای پاسخ به درک مطلب) This is a mT5-based model for reading comprehension. Here is an example of how you can run this model: ```python from transformers import MT5ForConditionalGeneration, MT5Tokenizer model_size = "base" model_name = f"persiannlp/mt5-{model_size}-parsinlu-squad-reading-comprehension" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(paragraph, question, **generator_args): input_ids = tokenizer.encode(question + "\n" + paragraph, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model( "یک شی را دارای تقارن می‌نامیم زمانی که ان شی را بتوان به دو یا چند قسمت تقسیم کرد که آن‌ها قسمتی از یک طرح سازمان یافته باشند یعنی بر روی شکل تنها جابجایی و چرخش و بازتاب و تجانس انجام شود و در اصل شکل تغییری به وجود نیایید آنگاه ان را تقارن می‌نامیم مرکز تقارن:اگر در یک شکل نقطه‌ای مانندA وجود داشته باشد که هر نقطهٔ روی شکل (محیط) نسبت به نقطه یAمتقارن یک نقطهٔ دیگر شکل (محیط) باشد، نقطهٔ Aمرکز تقارن است. یعنی هر نقطه روی شکل باید متقارنی داشته باشد شکل‌های که منتظم هستند و زوج ضلع دارند دارای مرکز تقارند ولی شکل‌های فرد ضلعی منتظم مرکز تقارن ندارند. متوازی‌الأضلاع و دایره یک مرکز تقارن دارند ممکن است یک شکل خط تقارن نداشته باشد ولی مرکز تقارن داشته باشد. (منبع:س. گ)", "اشکالی که یک مرکز تقارن دارند" ) run_model( "شُتُر یا اُشتر را که در زبان پهلوی (ushtar)[نیازمند منبع] می‌گفتند حیوانی است نیرومند و تنومند با توش و توان بالا از خانواده شتران؛ شبه نشخوارکننده و با دست و گردنی دراز. بر پشت خود یک یا دو کوهان دارد که ساختارش از پیه و چربی است. در دین اسلام گوشت او حلال است. اما ذبح آن با دیگر جانوران حلال گوشت متفاوت است و آن را نحر (بریدن گلو) می‌کنند و اگر سر آن را مانند گوسفند پیش از نحر ببرند گوشت آن حلال نیست. شیرش نیز نوشیده می‌شود ولی بیشتر کاربرد بارکشی دارد. پشم و پوستش نیز برای ریسندگی و پارچه‌بافی و کفش‌دوزی کاربرد دارد. گونه‌های دیگری از شتران نیز در آمریکای جنوبی زندگی می‌کنند، به نام‌های لاما، آلپاکا، گواناکو که دارای کوهان نیستند. شتر ویژگی‌های خاصّی دارد که مهم‌ترین آن‌ها تحمّل شرایط سخت صحرا و دماهای گوناگون و به‌ویژه گرمای شدید تابستان و کمبود آب و علوفه است. ترکیب جسمانی شتر با دیگر جانوران اختلاف زیادی دارد، و این اختلاف انگیزه شده که شتر در درازا روزهای سال در بیابان زندگی کند و از بوته‌ها و درختچه‌های گوناگون صحرایی و کویری و حتی از بوته‌های شور و خاردار تغذیه کند. عرب‌ها از زمان‌های بسیار دور از شتر استفاده کرده و می‌کنند. آن‌ها به این حیوان اهلی لقب کشتی صحرا (به عربی: سفینةالصحراء) داده‌اند.", "غذای شترچیست؟" ) run_model( """حسین میرزایی می‌گوید مرحله اول پرداخت وام حمایتی کرونا به همگی خانوارهای یارانه‌بگیر متقاضی تکمیل شده است و حال چهار میلیون خانوار که به عنوان "اقشار خاص" و "آسیب‌پذیر" شناسایی شدند، می‌توانند برای یک میلیون تومان وام دیگر درخواست بدهند. آقای میرزایی گفته خانوارهای "آسیب‌پذیر" که شرایط گرفتن وام یک میلیونی اضافی را دارند با پیامک از این امکان مطلع شده‌اند. بنا به گزارش‌های رسمی با شیوع کرونا در ایران یک میلیون نفر بیکار شده‌اند و درآمد کارکنان مشاغل غیررسمی نیز ضربه قابل توجهی خورده است. ارزش ریال هم در هفته‌های اخیر در برابر ارزهای خارجی سقوط کرده است. 
اقتصاد ایران پیش از شیوع کرونا نیز با مشکلات مزمن رکود، تورم، تحریم و فساد روبرو بود.""", "وام یارانه به چه کسانی میدهند؟" ) run_model( "در ۲۲ ژوئن ۱۹۴۱ نیروهای محور در عملیات بارباروسا حمله سنگینی به اتحاد شوروی کرده و یکی از بزرگترین نبردهای زمینی تاریخ بشر را رقم زدند. همچنین جبهه شرقی باعث به دام افتادن نیروهای محور شد و بیش از همه ارتش آلمان نازی را درگیر جنگ فرسایشی کرد. در دسامبر ۱۹۴۱ ژاپن یک در عملیاتی ناگهانی با نام نبرد پرل هاربر به پایگاه دریایی ایالات متحده آمریکا حمله کرد. به دنبال این اتفاق آمریکا نیز بلافاصله علیه ژاپن اعلان جنگ کرد که با حمایت بریتانیا همراه شد. پس از آن متحدین (نیروهای محور در اروپا) نیز با اتحاد ژاپن علیه آمریکا اعلام جنگ کردند. دست‌آوردهای ژاپن در یورش به آمریکا باعث ایجاد این احساس در آسیا شد که آسیا از تسلط غرب خارج شده‌است از این رو بسیاری از ارتش‌های شکست خورده با آنها همراهی کردند.", "چرا امریکا وارد جنگ جهانی دوم شد؟" ) ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/MrLincoln
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - multiple-choice - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu - commonsenseqa - arc - openbookqa metrics: - accuracy --- # Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی) This is a mT5-based model for multiple-choice question answering. Here is an example of how you can run this model: ```python from transformers import MT5ForConditionalGeneration, MT5Tokenizer model_size = "small" model_name = f"persiannlp/mt5-{model_size}-parsinlu-arc-comqa-obqa-multiple-choice" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(input_string, **generator_args): input_ids = tokenizer.encode(input_string, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین") run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن") run_model( "زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام") ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/MrLincoln10
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - multiple-choice - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu metrics: - accuracy --- # Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی) This is a mT5-based model for multiple-choice question answering. Here is an example of how you can run this model: ```python from transformers import MT5ForConditionalGeneration, MT5Tokenizer model_size = "small" model_name = f"persiannlp/mt5-{model_size}-parsinlu-multiple-choice" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(input_string, **generator_args): input_ids = tokenizer.encode(input_string, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model("وسیع ترین کشور جهان کدام است؟ <sep> آمریکا <sep> کانادا <sep> روسیه <sep> چین") run_model("طامع یعنی ؟ <sep> آزمند <sep> خوش شانس <sep> محتاج <sep> مطمئن") run_model( "زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده <sep> روز اول <sep> روز دوم <sep> روز سوم <sep> هیچکدام") ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/MrLincoln12
[ "pytorch", "gpt2", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - query-paraphrasing - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu - qqp metrics: - accuracy --- # Detection of Paraphrased Queries (تشخصیص سوالات هم‌معنی) This is a model for detection of paraphrased queries. Here is an example of how you can run this model: ```python from transformers import MT5Config, MT5ForConditionalGeneration, MT5Tokenizer model_name = "persiannlp/mt5-small-parsinlu-qqp-query-paraphrasing" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(q1, q2, **generator_args): input_ids = tokenizer.encode(f"{q1}<sep>{q2}", return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model("چه چیزی باعث پوکی استخوان می شود؟", "چه چیزی باعث مقاومت استخوان در برابر ضربه می شود؟") run_model("من دارم به این فکر میکنم چرا ساعت هفت نمیشه؟", "چرا من ساده فکر میکردم به عشقت پابندی؟") run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟") run_model("دعای کمیل در چه روزهایی خوانده می شود؟", "دعای جوشن کبیر در چه شبی خوانده می شود؟") run_model("شناسنامه در چه سالی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟") run_model("سیب زمینی چه زمانی وارد ایران شد؟", "سیب زمینی در چه سالی وارد ایران شد؟") ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/MrLincoln125MNeo
[ "pytorch", "tensorboard", "gpt_neo", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2021-03-10T20:46:29Z
---
language:
- fa
- multilingual
thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg
tags:
- sentiment
- sentiment-analysis
- mt5
- persian
- farsi
license: cc-by-nc-sa-4.0
datasets:
- parsinlu
metrics:
- accuracy
---

# Sentiment Analysis (آنالیز احساسات)

This is an mT5 model for sentiment analysis. Here is an example of how you can run this model:

```python
from transformers import MT5ForConditionalGeneration, MT5Tokenizer

model_name = "persiannlp/mt5-small-parsinlu-sentiment-analysis"
tokenizer = MT5Tokenizer.from_pretrained(model_name)
model = MT5ForConditionalGeneration.from_pretrained(model_name)


def run_model(context, query, **generator_args):
    input_ids = tokenizer.encode(context + "<sep>" + query, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    output = tokenizer.batch_decode(res, skip_special_tokens=True)
    print(output)
    return output


run_model(
    "یک فیلم ضعیف بی محتوا بدون فیلمنامه . شوخی های سخیف .",
    "نظر شما در مورد داستان، فیلمنامه، دیالوگ ها و موضوع فیلم لونه زنبور چیست؟"
)
run_model(
    "فیلم تا وسط فیلم یعنی دقیقا تا جایی که معلوم میشه بچه های املشی دنبال رضان خیلی خوب و جذاب پیش میره ولی دقیقا از همونجاش سکته میزنه و خلاص...",
    "نظر شما به صورت کلی در مورد فیلم ژن خوک چیست؟"
)
run_model(
    "اصلا به هیچ عنوان علاقه نداشتم اجرای می سی سی پی نشسته میمیرد روی پرده سینما ببینم دیالوگ های تکراری هلیکوپتر ماشین آلندلون لئون پاپیون آخه چرااااااااااااااا همون حسی که توی تالار وحدت بعد از نیم ساعت به سرم اومد امشب توی سالن سینما تجربه کردم ،حس گریز از سالن.......⁦ ⁦(ノಠ益ಠ)ノ⁩ ",
    " نظر شما در مورد صداگذاری و جلوه های صوتی فیلم مسخره‌باز چیست؟"
)
run_model(
    " گول نخورید این رنگارنگ مینو نیست برای شرکت گرجیه و متاسفانه این محصولش اصلا مزه رنگارنگی که انتظار دارید رو نمیده ",
    " نظر شما در مورد عطر، بو، و طعم این بیسکویت و ویفر چیست؟"
)
run_model(
    "در مقایسه با سایر برندهای موجود در بازار با توجه به حراجی که داشت ارزانتر ب",
    " شما در مورد قیمت و ارزش خرید این حبوبات و سویا چیست؟"
)
run_model(
    "من پسرم عاشق ایناس ولی دیگه به خاطر حفظ محیط زیست فقط زمانهایی که مجبور باشم شیر دونه ای میخرم و سعی میکنم دیگه کمتر شیر با بسته بندی تتراپک استفاده کنم ",
    "نظر شما به صورت کلی در مورد این شیر چیست؟"
)
```

For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/MrLincoln13
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
2021-03-06T00:06:58Z
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - entailment - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu - snli metrics: - accuracy --- # Textual Entailment (مدل برای پاسخ به استلزام منطقی) This is a model for textual entailment problems. Here is an example of how you can run this model: ```python from transformers import MT5ForConditionalGeneration, MT5Tokenizer model_size="small" model_name = f"persiannlp/mt5-{model_size}-parsinlu-snli-entailment" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(premise, hypothesis, **generator_args): input_ids = tokenizer.encode(f"{premise}<sep>{hypothesis}", return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model( "این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.", "در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد." ) run_model( "آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟", "هیچ کودکی هرگز نمی خواهد سرگرم شود.", ) run_model( "ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم", "علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم." ) ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/MrLincoln14
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2021-03-10T09:10:59Z
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - reading-comprehension - mt5 - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu - squad metrics: - f1 --- # Reading Comprehension (مدل برای پاسخ به درک مطلب) This is a mT5-based model for reading comprehension. Here is an example of how you can run this model: ```python from transformers import MT5ForConditionalGeneration, MT5Tokenizer model_size = "small" model_name = f"persiannlp/mt5-{model_size}-parsinlu-squad-reading-comprehension" tokenizer = MT5Tokenizer.from_pretrained(model_name) model = MT5ForConditionalGeneration.from_pretrained(model_name) def run_model(paragraph, question, **generator_args): input_ids = tokenizer.encode(question + "\n" + paragraph, return_tensors="pt") res = model.generate(input_ids, **generator_args) output = tokenizer.batch_decode(res, skip_special_tokens=True) print(output) return output run_model( "یک شی را دارای تقارن می‌نامیم زمانی که ان شی را بتوان به دو یا چند قسمت تقسیم کرد که آن‌ها قسمتی از یک طرح سازمان یافته باشند یعنی بر روی شکل تنها جابجایی و چرخش و بازتاب و تجانس انجام شود و در اصل شکل تغییری به وجود نیایید آنگاه ان را تقارن می‌نامیم مرکز تقارن:اگر در یک شکل نقطه‌ای مانندA وجود داشته باشد که هر نقطهٔ روی شکل (محیط) نسبت به نقطه یAمتقارن یک نقطهٔ دیگر شکل (محیط) باشد، نقطهٔ Aمرکز تقارن است. یعنی هر نقطه روی شکل باید متقارنی داشته باشد شکل‌های که منتظم هستند و زوج ضلع دارند دارای مرکز تقارند ولی شکل‌های فرد ضلعی منتظم مرکز تقارن ندارند. متوازی‌الأضلاع و دایره یک مرکز تقارن دارند ممکن است یک شکل خط تقارن نداشته باشد ولی مرکز تقارن داشته باشد. (منبع:س. گ)", "اشکالی که یک مرکز تقارن دارند" ) run_model( "شُتُر یا اُشتر را که در زبان پهلوی (ushtar)[نیازمند منبع] می‌گفتند حیوانی است نیرومند و تنومند با توش و توان بالا از خانواده شتران؛ شبه نشخوارکننده و با دست و گردنی دراز. بر پشت خود یک یا دو کوهان دارد که ساختارش از پیه و چربی است. در دین اسلام گوشت او حلال است. اما ذبح آن با دیگر جانوران حلال گوشت متفاوت است و آن را نحر (بریدن گلو) می‌کنند و اگر سر آن را مانند گوسفند پیش از نحر ببرند گوشت آن حلال نیست. شیرش نیز نوشیده می‌شود ولی بیشتر کاربرد بارکشی دارد. پشم و پوستش نیز برای ریسندگی و پارچه‌بافی و کفش‌دوزی کاربرد دارد. گونه‌های دیگری از شتران نیز در آمریکای جنوبی زندگی می‌کنند، به نام‌های لاما، آلپاکا، گواناکو که دارای کوهان نیستند. شتر ویژگی‌های خاصّی دارد که مهم‌ترین آن‌ها تحمّل شرایط سخت صحرا و دماهای گوناگون و به‌ویژه گرمای شدید تابستان و کمبود آب و علوفه است. ترکیب جسمانی شتر با دیگر جانوران اختلاف زیادی دارد، و این اختلاف انگیزه شده که شتر در درازا روزهای سال در بیابان زندگی کند و از بوته‌ها و درختچه‌های گوناگون صحرایی و کویری و حتی از بوته‌های شور و خاردار تغذیه کند. عرب‌ها از زمان‌های بسیار دور از شتر استفاده کرده و می‌کنند. آن‌ها به این حیوان اهلی لقب کشتی صحرا (به عربی: سفینةالصحراء) داده‌اند.", "غذای شترچیست؟" ) run_model( """حسین میرزایی می‌گوید مرحله اول پرداخت وام حمایتی کرونا به همگی خانوارهای یارانه‌بگیر متقاضی تکمیل شده است و حال چهار میلیون خانوار که به عنوان "اقشار خاص" و "آسیب‌پذیر" شناسایی شدند، می‌توانند برای یک میلیون تومان وام دیگر درخواست بدهند. آقای میرزایی گفته خانوارهای "آسیب‌پذیر" که شرایط گرفتن وام یک میلیونی اضافی را دارند با پیامک از این امکان مطلع شده‌اند. بنا به گزارش‌های رسمی با شیوع کرونا در ایران یک میلیون نفر بیکار شده‌اند و درآمد کارکنان مشاغل غیررسمی نیز ضربه قابل توجهی خورده است. ارزش ریال هم در هفته‌های اخیر در برابر ارزهای خارجی سقوط کرده است. 
اقتصاد ایران پیش از شیوع کرونا نیز با مشکلات مزمن رکود، تورم، تحریم و فساد روبرو بود.""", "وام یارانه به چه کسانی میدهند؟" ) run_model( "در ۲۲ ژوئن ۱۹۴۱ نیروهای محور در عملیات بارباروسا حمله سنگینی به اتحاد شوروی کرده و یکی از بزرگترین نبردهای زمینی تاریخ بشر را رقم زدند. همچنین جبهه شرقی باعث به دام افتادن نیروهای محور شد و بیش از همه ارتش آلمان نازی را درگیر جنگ فرسایشی کرد. در دسامبر ۱۹۴۱ ژاپن یک در عملیاتی ناگهانی با نام نبرد پرل هاربر به پایگاه دریایی ایالات متحده آمریکا حمله کرد. به دنبال این اتفاق آمریکا نیز بلافاصله علیه ژاپن اعلان جنگ کرد که با حمایت بریتانیا همراه شد. پس از آن متحدین (نیروهای محور در اروپا) نیز با اتحاد ژاپن علیه آمریکا اعلام جنگ کردند. دست‌آوردهای ژاپن در یورش به آمریکا باعث ایجاد این احساس در آسیا شد که آسیا از تسلط غرب خارج شده‌است از این رو بسیاری از ارتش‌های شکست خورده با آنها همراهی کردند.", "چرا امریکا وارد جنگ جهانی دوم شد؟" ) ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/MrLincoln4
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2021-02-28T00:30:56Z
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - multiple-choice - parsbert - persian - farsi pipeline_tag: text-classification license: cc-by-nc-sa-4.0 datasets: - parsinlu metrics: - accuracy --- # Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی) This is a parsbert-based model for multiple-choice question answering. Here is an example of how you can run this model: ```python from typing import List import torch from transformers import AutoConfig, AutoModelForMultipleChoice, AutoTokenizer model_name = "persiannlp/parsbert-base-parsinlu-multiple-choice" tokenizer = AutoTokenizer.from_pretrained(model_name) config = AutoConfig.from_pretrained(model_name) model = AutoModelForMultipleChoice.from_pretrained(model_name, config=config) def run_model(question: str, candicates: List[str]): assert len(candicates) == 4, "you need four candidates" choices_inputs = [] for c in candicates: text_a = "" # empty context text_b = question + " " + c inputs = tokenizer( text_a, text_b, add_special_tokens=True, max_length=128, padding="max_length", truncation=True, return_overflowing_tokens=True, ) choices_inputs.append(inputs) input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs]) output = model(input_ids=input_ids) print(output) return output run_model(question="وسیع ترین کشور جهان کدام است؟", candicates=["آمریکا", "کانادا", "روسیه", "چین"]) run_model(question="طامع یعنی ؟", candicates=["آزمند", "خوش شانس", "محتاج", "مطمئن"]) run_model( question="زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده ", candicates=["روز اول", "روز دوم", "روز سوم", "هیچکدام"]) ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/MrLincoln5
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - entailment - wikibert - persian - farsi license: cc-by-nc-sa-4.0 datasets: - parsinlu metrics: - accuracy --- # Textual Entailment (مدل برای پاسخ به استلزام منطقی) This is a model for textual entailment problems. Here is an example of how you can run this model: ```python import torch from transformers import AutoModelForSequenceClassification, AutoTokenizer import numpy as np labels = ["entails", "contradicts", "neutral"] model_name_or_path = "persiannlp/wikibert-base-parsinlu-entailment" model = AutoModelForSequenceClassification.from_pretrained(model_name_or_path) tokenizer = AutoTokenizer.from_pretrained(model_name_or_path,) def model_predict(text_a, text_b): features = tokenizer( [(text_a, text_b)], padding="max_length", truncation=True, return_tensors='pt') output = model(**features) logits = output[0] probs = torch.nn.functional.softmax(logits, dim=1).tolist() idx = np.argmax(np.array(probs)) print(labels[idx], probs) model_predict( "این مسابقات بین آوریل و دسامبر در هیپودروم ولیفندی در نزدیکی باکرکی ، ۱۵ کیلومتری (۹ مایل) غرب استانبول برگزار می شود.", "در ولیفندی هیپودروم، مسابقاتی از آوریل تا دسامبر وجود دارد." ) model_predict( "آیا کودکانی وجود دارند که نیاز به سرگرمی دارند؟", "هیچ کودکی هرگز نمی خواهد سرگرم شود.", ) model_predict( "ما به سفرهایی رفته ایم که در نهرهایی شنا کرده ایم", "علاوه بر استحمام در نهرها ، ما به اسپا ها و سونا ها نیز رفته ایم." ) ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/MrLincoln6
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - fa - multilingual thumbnail: https://upload.wikimedia.org/wikipedia/commons/a/a2/Farsi.svg tags: - multiple-choice - wikibert - persian - farsi pipeline_tag: text-classification license: cc-by-nc-sa-4.0 datasets: - parsinlu metrics: - accuracy --- # Multiple-Choice Question Answering (مدل برای پاسخ به سوالات چهار جوابی) This is a wikibert-based model for multiple-choice question answering. Here is an example of how you can run this model: ```python from typing import List import torch from transformers import AutoConfig, AutoModelForMultipleChoice, AutoTokenizer model_name = "persiannlp/wikibert-base-parsinlu-multiple-choice" tokenizer = AutoTokenizer.from_pretrained(model_name) config = AutoConfig.from_pretrained(model_name) model = AutoModelForMultipleChoice.from_pretrained(model_name, config=config) def run_model(question: str, candicates: List[str]): assert len(candicates) == 4, "you need four candidates" choices_inputs = [] for c in candicates: text_a = "" # empty context text_b = question + " " + c inputs = tokenizer( text_a, text_b, add_special_tokens=True, max_length=128, padding="max_length", truncation=True, return_overflowing_tokens=True, ) choices_inputs.append(inputs) input_ids = torch.LongTensor([x["input_ids"] for x in choices_inputs]) output = model(input_ids=input_ids) print(output) return output run_model(question="وسیع ترین کشور جهان کدام است؟", candicates=["آمریکا", "کانادا", "روسیه", "چین"]) run_model(question="طامع یعنی ؟", candicates=["آزمند", "خوش شانس", "محتاج", "مطمئن"]) run_model( question="زمینی به ۳۱ قطعه متساوی مفروض شده است و هر روز مساحت آماده شده برای احداث، دو برابر مساحت روز قبل است.اگر پس از (۵ روز) تمام زمین آماده شده باشد، در چه روزی یک قطعه زمین آماده شده ", candicates=["روز اول", "روز دوم", "روز سوم", "هیچکدام"]) ``` For more details, visit this page: https://github.com/persiannlp/parsinlu/
BigSalmon/Robertsy
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2021-08-19T20:10:23Z
--- language: - Tagalog thumbnail: tags: - Tagalog - Mang Bert license: apache-2.0 datasets: - OSCAR tl --- # Mang Bert ## Model description Fine-Tuned Roberta Model using RobertaForMaskedLM Tagalog Dataset from OSCAR tl ## Training data 458206 text dataset from OSCAR
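The Mang Bert card above stops short of a usage example. The following is a minimal, untested sketch of masked-token prediction with this kind of checkpoint; the repository id `your-username/mang-bert` is a placeholder, since the card does not state where the model is hosted, and the example sentence is invented.

```python
from transformers import pipeline

# Placeholder repo id: the card does not give the actual Hub location of Mang Bert.
fill_mask = pipeline("fill-mask", model="your-username/mang-bert")

# RoBERTa-style models use a dedicated mask token; take it from the tokenizer to be safe.
masked_sentence = f"Magandang {fill_mask.tokenizer.mask_token} sa inyong lahat."

for prediction in fill_mask(masked_sentence):
    print(prediction["token_str"], round(prediction["score"], 3))
```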
BinksSachary/ShaxxBot2
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-zh_TW results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 args: en-zh_TW metrics: - name: Bleu type: bleu value: 39.086345838465 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-zh_TW This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 1.0047 - Bleu: 39.0863 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
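Because the usage sections above are left as "More information needed", here is a hedged inference sketch for an English-to-Traditional-Chinese Marian checkpoint like this one. Only the base model `Helsinki-NLP/opus-mt-en-zh` is confirmed by the card; the fine-tuned repository id below is a placeholder.

```python
from transformers import pipeline

# Placeholder repo id for the fine-tuned checkpoint described in the card above.
checkpoint = "your-username/marian-finetuned-kde4-en-to-zh_TW"
translator = pipeline("translation", model=checkpoint)

# KDE4 is software-documentation text, so UI-style sentences are a natural fit.
result = translator("Open the file manager and select the folder you want to share.")
print(result[0]["translation_text"])
```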
BitanBiswas/mbert-bengali-ner-finetuned-ner
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - rouge model-index: - name: mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0255 - Rouge1: 17.5202 - Rouge2: 8.4634 - Rougel: 17.0175 - Rougelsum: 17.0528 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 8.094 | 1.0 | 1209 | 3.2933 | 12.7563 | 5.2606 | 12.4786 | 12.4961 | | 3.9263 | 2.0 | 2418 | 3.1487 | 16.2314 | 8.4716 | 15.6854 | 15.7506 | | 3.599 | 3.0 | 3627 | 3.0789 | 16.9233 | 8.1928 | 16.2596 | 16.2522 | | 3.429 | 4.0 | 4836 | 3.0492 | 17.2679 | 8.7561 | 16.6685 | 16.7399 | | 3.3279 | 5.0 | 6045 | 3.0384 | 17.6081 | 8.6721 | 17.0546 | 17.0368 | | 3.2518 | 6.0 | 7254 | 3.0343 | 17.2271 | 8.504 | 16.6285 | 16.6209 | | 3.2084 | 7.0 | 8463 | 3.0255 | 16.7859 | 8.054 | 16.2574 | 16.2853 | | 3.1839 | 8.0 | 9672 | 3.0255 | 17.5202 | 8.4634 | 17.0175 | 17.0528 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.11.6
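As with the other auto-generated cards, the intended-use section above is empty. A plausible way to query a summarization checkpoint like this one is sketched below; the repository id is a placeholder and the example review is invented.

```python
from transformers import pipeline

# Placeholder repo id; substitute the account that actually hosts this fine-tuned mT5 model.
summarizer = pipeline("summarization", model="your-username/mt5-small-finetuned-amazon-en-es")

review = (
    "I bought this e-reader as a gift and it exceeded expectations: the screen is sharp, "
    "the battery lasts for weeks, and it is light enough to hold with one hand."
)
print(summarizer(review)[0]["summary_text"])
```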
Blazeolmo/Scrabunzi
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_keras_callback model-index: - name: tf-dummy-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tf-dummy-model This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.15.0 - TensorFlow 2.7.0 - Datasets 1.17.0 - Tokenizers 0.10.3
BlightZz/DialoGPT-medium-Kurisu
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
19
2022-02-20T07:00:57Z
--- license: apache-2.0 tags: - translation - generated_from_keras_callback model-index: - name: tf-marian-finetuned-kde4-en-to-zh_TW results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tf-marian-finetuned-kde4-en-to-zh_TW This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-zh](https://huggingface.co/Helsinki-NLP/opus-mt-en-zh) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7752 - Validation Loss: 0.9022 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 11973, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.7752 | 0.9022 | 0 | | 0.7749 | 0.9022 | 1 | | 0.7752 | 0.9022 | 2 | ### Framework versions - Transformers 4.16.2 - TensorFlow 2.8.0 - Datasets 1.18.3 - Tokenizers 0.11.0
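Since this checkpoint was trained with Keras, a TensorFlow-side inference sketch may be more natural than the PyTorch examples elsewhere in this file. The repository id below is a placeholder; only the base model `Helsinki-NLP/opus-mt-en-zh` is confirmed by the card.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Placeholder repo id for the Keras-trained checkpoint described in the card above.
checkpoint = "your-username/tf-marian-finetuned-kde4-en-to-zh_TW"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer("Unable to open the selected file.", return_tensors="tf")
generated = model.generate(inputs["input_ids"], max_length=64)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```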
BobBraico/bert-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - amazon_reviews_multi metrics: - accuracy model-index: - name: roberta-base-bne-finetuned-amazon_reviews_multi results: - task: name: Text Classification type: text-classification dataset: name: amazon_reviews_multi type: amazon_reviews_multi args: es metrics: - name: Accuracy type: accuracy value: 0.93125 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2259 - Accuracy: 0.9313 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1996 | 1.0 | 1250 | 0.1736 | 0.9297 | | 0.1031 | 2.0 | 2500 | 0.2259 | 0.9313 | ### Framework versions - Transformers 4.12.2 - Pytorch 1.9.0+cu111 - Datasets 1.14.0 - Tokenizers 0.10.3
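The card above reports accuracy on `amazon_reviews_multi` (Spanish) but no inference example. A minimal sketch is given below; the repository id is a placeholder, and the label names returned depend on how the classification head was configured, which the card does not document.

```python
from transformers import pipeline

# Placeholder repo id; only the base model BSC-TeMU/roberta-base-bne is confirmed by the card.
classifier = pipeline(
    "text-classification",
    model="your-username/roberta-base-bne-finetuned-amazon_reviews_multi",
)

# A Spanish product review, matching the fine-tuning data.
print(classifier("Llegó rápido y la calidad es mucho mejor de lo que esperaba."))
```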
BobBraico/distilbert-base-uncased-finetuned-imdb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-yahd-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-yahd-2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.3850 - Accuracy: 0.2652 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 2.2738 | 1.0 | 9556 | 2.2228 | 0.1996 | | 1.9769 | 2.0 | 19112 | 2.1378 | 0.2321 | | 1.6624 | 3.0 | 28668 | 2.1897 | 0.2489 | | 1.3682 | 4.0 | 38224 | 2.2863 | 0.2538 | | 1.1975 | 5.0 | 47780 | 2.3850 | 0.2652 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
Boondong/Wandee
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-yahd-twval-hptune results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-yahd-twval-hptune This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 6.3727 - Accuracy: 0.2039 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 2.1638 | 1.0 | 10106 | 2.1944 | 0.3646 | | 1.7982 | 2.0 | 20212 | 2.6390 | 0.3333 | | 1.3279 | 3.0 | 30318 | 3.1526 | 0.3095 | | 0.8637 | 4.0 | 40424 | 4.8368 | 0.2470 | | 0.5727 | 5.0 | 50530 | 6.3727 | 0.2039 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
Bosio/full-sentence-distillroberta3-finetuned-wikitext2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-yahd-twval results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-yahd-twval This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.2540 - Accuracy: 0.2664 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 2.1967 | 1.0 | 10086 | 2.9662 | 0.2068 | | 1.865 | 2.0 | 20172 | 2.9499 | 0.3229 | | 1.5135 | 3.0 | 30258 | 3.3259 | 0.3036 | | 1.2077 | 4.0 | 40344 | 3.8351 | 0.2902 | | 1.0278 | 5.0 | 50430 | 4.2540 | 0.2664 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
BossLee/t5-gec
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-yahd results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-yahd This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.7685 - Accuracy: 0.4010 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:------:|:---------------:|:--------:| | 2.2439 | 1.0 | 9142 | 2.1898 | 0.2130 | | 1.9235 | 2.0 | 18284 | 2.1045 | 0.2372 | | 1.5915 | 3.0 | 27426 | 2.1380 | 0.2550 | | 1.3262 | 4.0 | 36568 | 2.2544 | 0.2758 | | 1.0529 | 5.0 | 45710 | 2.5662 | 0.2955 | | 0.8495 | 6.0 | 54852 | 2.8731 | 0.3078 | | 0.6779 | 7.0 | 63994 | 3.1980 | 0.3218 | | 0.5546 | 8.0 | 73136 | 3.6289 | 0.3380 | | 0.4738 | 9.0 | 82278 | 3.9732 | 0.3448 | | 0.412 | 10.0 | 91420 | 4.2945 | 0.3565 | | 0.3961 | 11.0 | 100562 | 4.6127 | 0.3772 | | 0.3292 | 12.0 | 109704 | 4.9586 | 0.3805 | | 0.318 | 13.0 | 118846 | 5.2615 | 0.3887 | | 0.2936 | 14.0 | 127988 | 5.4567 | 0.3931 | | 0.2671 | 15.0 | 137130 | 5.6902 | 0.3965 | | 0.2301 | 16.0 | 146272 | 5.7685 | 0.4010 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.0+cu102 - Datasets 1.15.1 - Tokenizers 0.10.3
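None of the `distilbert-base-uncased-finetuned-yahd*` cards in this file show how to call the resulting classifier. The sketch below assumes a standard sequence-classification head and uses a placeholder repository id; the actual label mapping is not documented in the cards.

```python
from transformers import pipeline

# Placeholder repo id and undocumented labels: treat the returned label names as opaque class ids.
classifier = pipeline(
    "text-classification",
    model="your-username/distilbert-base-uncased-finetuned-yahd",
)

print(classifier("What is the best way to learn a new programming language?"))
```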
Botslity/Bot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- conversational
---

# Harry Style DialoGPT Model
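The card gives no usage snippet, so here is the customary DialoGPT-style chat loop as a hedged sketch; the repository id is a placeholder, since the card does not name the checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id for the DialoGPT checkpoint this card describes.
checkpoint = "your-username/DialoGPT-harry-style"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

chat_history_ids = None
for step in range(3):
    # Append the end-of-sequence token so the model knows the user turn is over.
    new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
    bot_input_ids = new_input_ids if step == 0 else torch.cat([chat_history_ids, new_input_ids], dim=-1)
    chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)
    print("Bot:", reply)
```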
Branex/gpt-neo-2.7B
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - ar thumbnail: wav2vec2-large-xls-r fine tuned on common voice data for Modern Standard Arabic tags: - automatic-speech-recognition - hf-asr-leaderboard - robust-speech-event license: apache-2.0 datasets: - mozilla-foundation/common_voice_7_0 metrics: - WER model-index: - name: wav2vec2-large-xls-r-300m-arabic-colab results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 7.0 type: mozilla-foundation/common_voice_7_0 args: ar metrics: - name: Test WER type: wer value: 64.38 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Dev Data type: speech-recognition-community-v2/dev_data args: ar metrics: - name: Test WER type: wer value: 96.15 - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Robust Speech Event - Test Data type: speech-recognition-community-v2/eval_data args: ar metrics: - name: Test WER type: wer value: 94.96 ---
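The evaluation results above come without an inference example. A minimal sketch using the speech-recognition pipeline follows; the repository id is a placeholder built from the model name in the card, and the audio path is illustrative.

```python
from transformers import pipeline

# Placeholder repo id based on the checkpoint name in the card above.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-large-xls-r-300m-arabic-colab",
)

# Expects a path to an audio file; ffmpeg is needed to decode compressed formats such as mp3.
print(asr("arabic_sample.wav")["text"])
```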
Brona/model1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2021-08-04T19:51:05Z
--- language: - en tags: - simplification license: apache-2.0 datasets: - cnn_dailymail widget: - text: "A capsule containing asteroid soil samples landed in the Australian Outback. The precision required to carry out the mission thrilled many.<|endoftext|>" example_title: "Example 1" --- # Try out in the Hosted inference API In the right panel, you can try the model (although it only handles a short sequence length). Feel free to try Example 1, and modify it to inspect model ability. # Model Loading The model can be loaded in the following way: ``` from transformers import AutoTokenizer, AutoModelForCausalLM import torch tokenizer = AutoTokenizer.from_pretrained("philippelaban/keep_it_simple") kis_model = AutoModelForCausalLM.from_pretrained("philippelaban/keep_it_simple") ``` # Example use And then used by first inputting a paragraph for simplification, followed by a `bos_token` to indicate to the model to start simplifying. Imagine we want to simplify the following paragraph: ``` A small capsule containing asteroid soil samples that was dropped from 136,700 miles in space by Japan's Hayabusa2 spacecraft landed as planned in the Australian Outback on December 6. The extremely high precision required to carry out the mission thrilled many in Japan, who said they took pride in its success. ``` The following code can be run: ``` paragraph = """A small capsule containing asteroid soil samples that was dropped from 136,700 miles in space by Japan's Hayabusa2 spacecraft landed as planned in the Australian Outback on December 6. The extremely high precision required to carry out the mission thrilled many in Japan, who said they took pride in its success.""" start_id = tokenizer.bos_token_id tokenized_paragraph = [(tokenizer.encode(text=paragraph) + [start_id])] input_ids = torch.LongTensor(tokenized_paragraph) output_ids = kis_model.generate(input_ids, max_length=150, num_beams=4, do_sample=True, num_return_sequences=8) output_ids = output_ids[:, input_ids.shape[1]:] output = tokenizer.batch_decode(output_ids) output = [o.replace(tokenizer.eos_token, "") for o in output] for o in output: print("----") print(o) ``` # Example output When run, an output similar to the following should be obtained: A small capsule containing samples of asteroid soil that was dropped from 136,700 miles, Japan's Hayabusa2 space probe, landed as planned on December 6. The mission was extremely precise, said many in Japan, and they took pride in its success. A small capsule containing samples of asteroid soil that was dropped from 136,700 miles, Japan's Hayabusa2 space probe, landed as planned on December 6. The mission was extremely precise and well thought-out, said many in Japan, who took pride in the mission. A small capsule containing soil samples that was dropped from 136,700 miles, Japan's Hayabusa2 space probe, landed as planned on December 6. The mission was designed to test the performance of the country's space fleet, which many said took pride in its success. A small capsule containing soil samples that was dropped from 136,700 miles in space by Japan's Hayabusa2 probe was followed by a landing on the Outback. The precise timing of the mission thrilled many in Japan, who said they took pride in its success. # Github repo You can access more information, access to the scoring function, the training script, or an example training log on the Github repo: https://github.com/tingofurro/keep_it_simple
Brona/poc_de
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-01-26T17:32:31Z
---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
---

# Try out in the Hosted inference API

In the right panel, you can try the model (although it only handles a short sequence length).

Enter the document you want to summarize in the panel on the right.

# Model Loading

The model (based on a GPT2 base architecture) can be loaded in the following way:

```
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("philippelaban/summary_loop10")
tokenizer = GPT2TokenizerFast.from_pretrained("philippelaban/summary_loop10")
```

# Example Use

```
document = "Bouncing Boulders Point to Quakes on Mars. A preponderance of boulder tracks on the red planet may be evidence of recent seismic activity. If a rock falls on Mars, and no one is there to see it, does it leave a trace? Yes, and it's a beautiful herringbone-like pattern, new research reveals. Scientists have now spotted thousands of tracks on the red planet created by tumbling boulders. Delicate chevron-shaped piles of Martian dust and sand frame the tracks, the team showed, and most fade over the course of a few years. Rockfalls have been spotted elsewhere in the solar system, including on the moon and even a comet. But a big open question is the timing of these processes on other worlds — are they ongoing or did they predominantly occur in the past?"

tokenized_document = tokenizer([document], max_length=300, truncation=True, return_tensors="pt")["input_ids"].cuda()
input_shape = tokenized_document.shape
outputs = model.generate(tokenized_document, do_sample=False, max_length=500, num_beams=4, num_return_sequences=4, no_repeat_ngram_size=6, return_dict_in_generate=True, output_scores=True)
candidate_sequences = outputs.sequences[:, input_shape[1]:]  # Remove the encoded text, keep only the summary
candidate_scores = outputs.sequences_scores.tolist()

for candidate_tokens, score in zip(candidate_sequences, candidate_scores):
    summary = tokenizer.decode(candidate_tokens)
    print("[Score: %.3f] %s" % (score, summary[:summary.index("END")]))
```

# Example output

```
[Score: -0.084] Here's what you need to know about rockfalls
[Score: -0.087] Here's what you need to know about these tracks
[Score: -0.091] Here's what we know so far about these tracks
[Score: -0.101] Here's what you need to know about rockfall
```

# Github repo

You can find more information, the scoring function, the training script, and an example training log on the Github repo: https://github.com/CannyLab/summary_loop
Brunomezenga/NN
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-01-22T19:53:14Z
---
language:
- en
tags:
- summarization
license: apache-2.0
datasets:
- cnn_dailymail
metrics:
- rouge
---

# Try out in the Hosted inference API

In the right panel, you can try the model (although it only handles a short sequence length).

Enter the document you want to summarize in the panel on the right.

# Model Loading

The model (based on a GPT2 base architecture) can be loaded in the following way:

```
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("philippelaban/summary_loop46")
tokenizer = GPT2TokenizerFast.from_pretrained("philippelaban/summary_loop46")
```

# Example Use

```
document = "Bouncing Boulders Point to Quakes on Mars. A preponderance of boulder tracks on the red planet may be evidence of recent seismic activity. If a rock falls on Mars, and no one is there to see it, does it leave a trace? Yes, and it's a beautiful herringbone-like pattern, new research reveals. Scientists have now spotted thousands of tracks on the red planet created by tumbling boulders. Delicate chevron-shaped piles of Martian dust and sand frame the tracks, the team showed, and most fade over the course of a few years. Rockfalls have been spotted elsewhere in the solar system, including on the moon and even a comet. But a big open question is the timing of these processes on other worlds — are they ongoing or did they predominantly occur in the past?"

tokenized_document = tokenizer([document], max_length=300, truncation=True, return_tensors="pt")["input_ids"].cuda()
input_shape = tokenized_document.shape
outputs = model.generate(tokenized_document, do_sample=False, max_length=500, num_beams=4, num_return_sequences=4, no_repeat_ngram_size=6, return_dict_in_generate=True, output_scores=True)
candidate_sequences = outputs.sequences[:, input_shape[1]:]  # Remove the encoded text, keep only the summary
candidate_scores = outputs.sequences_scores.tolist()

for candidate_tokens, score in zip(candidate_sequences, candidate_scores):
    summary = tokenizer.decode(candidate_tokens)
    print("[Score: %.3f] %s" % (score, summary[:summary.index("END")]))
```

# Example output

```
[Score: -0.153] These tracks have been spotted elsewhere on Mars. If a rockfalls on Mars has been spotted elsewhere on the red planet. Scientists have spotted thousands of tracks on Mars. A rockfalls on Mars have been spotted elsewhere on the Red Planet.
[Score: -0.154] These tracks have been spotted elsewhere on Mars. If a rockfalls on Mars has been spotted elsewhere on the red planet. Scientists have spotted thousands of tracks on Mars. A rockfalls on Mars have been spotted elsewhere on the planet.
[Score: -0.154] These tracks have been spotted elsewhere on Mars. If a rockfalls on Mars has been spotted elsewhere on the red planet. Scientists have spotted thousands of tracks on Mars. A rockfalls have been spotted elsewhere on the Red Planet.
[Score: -0.195] These tracks have been spotted elsewhere on Mars. If a rockfalls on Mars has been spotted elsewhere on the red planet. Scientists have spotted thousands of tracks on Mars. A rockfalls on Mars have been spotted elsewhere on the Red Planet. A rockfalls have been spotted everywhere on the red planet.
```

# Github repo

You can find more information, the scoring function, the training script, and an example training log on the Github repo: https://github.com/CannyLab/summary_loop
Bryan190/Aguy190
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: autonlp language: en widget: - text: "Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry" datasets: - tweet_eval model-index: - name: BERT-tweet-eval-emotion results: - task: name: Sentiment Analysis type: sentiment-analysis dataset: name: "tweeteval" type: tweet-eval metrics: - name: Accuracy type: accuracy value: 81.00 - name: Macro F1 type: macro-f1 value: 77.37 - name: Weighted F1 type: weighted-f1 value: 80.63 --- # `BERT-tweet-eval-emotion` trained using autoNLP - Problem type: Multi-class Classification ## Validation Metrics - Loss: 0.5408923625946045 - Accuracy: 0.8099929627023223 - Macro F1: 0.7737195387641751 - Micro F1: 0.8099929627023222 - Weighted F1: 0.8063100677512649 - Macro Precision: 0.8083955817268176 - Micro Precision: 0.8099929627023223 - Weighted Precision: 0.8104009668394634 - Macro Recall: 0.7529197049888299 - Micro Recall: 0.8099929627023223 - Weighted Recall: 0.8099929627023223 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry"}' https://api-inference.huggingface.co/models/philschmid/BERT-tweet-eval-emotion ``` Or Python API: ```py from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline model_id = 'philschmid/BERT-tweet-eval-emotion' tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForSequenceClassification.from_pretrained(model_id) classifier = pipeline('text-classification', tokenizer=tokenizer, model=model) classifier("Worry is a down payment on a problem you may never have'. Joyce Meyer. #motivation #leadership #worry") ```