modelId (string, 4-112 chars) | lastModified (string, 24 chars) | tags (list) | pipeline_tag (string, 21 classes) | files (list) | publishedBy (string, 2-37 chars) | downloads_last_month (int32, 0-9.44M) | library (string, 15 classes) | modelCard (large_string, 0-100k chars)
---|---|---|---|---|---|---|---|---
azkamovie/yuoituo | 2021-05-20T13:44:40.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/yuotyotouoo | 2021-05-24T06:44:02.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/yyyyyyyyyyyyyyyyyy | 2021-05-20T13:50:26.000Z | []
| [
".gitattributes",
"README.md"
]
| azkamovie | 0 | |||
azkamovie/yyyyyyyyyyyyyyyyyyyy | 2021-05-20T13:49:02.000Z | []
| [
".gitattributes",
"urtu"
]
| azkamovie | 0 | |||
azunre/wav2vec2large-xlsr-akan | 2021-03-30T22:38:15.000Z | [
"pytorch",
"wav2vec2",
"tw",
"dataset:common_voice",
"transformers",
"speech",
"audio",
"automatic-speech-recognition"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| azunre | 10 | transformers | ---
language: tw
datasets:
- common_voice
tags:
- speech
- audio
- automatic-speech-recognition
---
|
baboubou/test | 2021-03-21T10:18:34.000Z | []
| [
".gitattributes"
]
| baboubou | 0 | |||
bada/test | 2021-05-19T12:06:17.000Z | [
"pytorch",
"jax",
"bert",
"pretraining",
"transformers"
]
| [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin",
"vocab.txt"
]
| bada | 9 | transformers | "hello"
|
|
bada/test_gpt | 2021-05-21T13:52:48.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"training_args.bin",
"vocab.txt"
]
| bada | 7 | transformers | |
bagdaebhishek/IndianPoliticalTweetsLM | 2021-05-21T13:54:43.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"en",
"dataset:Twitter",
"dataset:IndianPolitics",
"transformers",
"India",
"politics",
"tweets",
"BJP",
"Congress",
"AAP",
"text-generation",
"license:apache"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| bagdaebhishek | 38 | transformers | ---
language: "en"
thumbnail: "https://bagdeabhishek.github.io/twitterAnalysis_files/networkfin.jpg"
tags:
- India
- politics
- tweets
- BJP
- Congress
- AAP
- pytorch
- gpt2
- lm-head
- text-generation
license: "Apache"
datasets:
- Twitter
- IndianPolitics
---
# Model name
Indian Political Tweets LM
## Model description
Note: This model is based on GPT2. If you want a bigger model based on GPT2-medium and fine-tuned on the same data, please take a look at the [IndianPoliticalTweetsLMMedium](https://huggingface.co/bagdaebhishek/IndianPoliticalTweetsLMMedium) model.
This is a GPT2 language model with an LM head, fine-tuned on tweets crawled from handles which belong predominantly to Indian politics. For more information about the crawled data, you can go through this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post.
## Intended uses & limitations
This finetuned model can be used to generate tweets which are related to Indian politics.
#### How to use
```python
from transformers import AutoTokenizer,AutoModelWithLMHead,pipeline
tokenizer = AutoTokenizer.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
model = AutoModelWithLMHead.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
text_generator = pipeline("text-generation",model=model, tokenizer=tokenizer)
init_sentence = "India will always be"
print(text_generator(init_sentence))
```
#### Limitations and bias
1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken care to remove tweets from Twitter handles which are not very influential, but since the data is not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.
## Training data
I used the pre-trained gpt2 model from the Hugging Face transformers repository and fine-tuned it on a custom dataset crawled from Twitter. The method used to identify the political handles is described in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.
## Training procedure
For pre-processing, I removed tweets from handles which are not very influential in their cluster. I identified these by calculating eigenvector centrality on the Twitter graph and pruning handles whose centrality falls below a certain threshold. This threshold was set manually after experimenting with different values.
I then separated the tweets from these handles by language and trained the LM on the English tweets from both clusters.
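Below is a minimal sketch of that pruning step, assuming `networkx`; the edge list and the threshold are illustrative stand-ins, not the actual crawl or the threshold used here:
```python
import networkx as nx

# toy stand-in for the crawled Twitter graph (nodes are handles)
edges = [("handleA", "handleB"), ("handleB", "handleC"), ("handleA", "handleC")]
graph = nx.Graph(edges)

# eigenvector centrality scores each handle by the influence of its neighbours
centrality = nx.eigenvector_centrality(graph, max_iter=1000)

threshold = 0.1  # illustrative; the real threshold was tuned manually
influential = {handle for handle, score in centrality.items() if score >= threshold}
print(influential)
```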
### Hardware
1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB
This model took roughly 36 hours to fine-tune.
|
bagdaebhishek/IndianPoliticalTweetsLMMedium | 2021-05-21T13:57:23.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"en",
"dataset:Twitter",
"dataset:IndianPolitics",
"transformers",
"India",
"politics",
"tweets",
"BJP",
"Congress",
"AAP",
"text-generation",
"license:apache"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| bagdaebhishek | 16 | transformers | ---
language: "en"
thumbnail: "https://bagdeabhishek.github.io/twitterAnalysis_files/networkfin.jpg"
tags:
- India
- politics
- tweets
- BJP
- Congress
- AAP
- pytorch
- gpt2
- lm-head
- text-generation
license: "Apache"
datasets:
- Twitter
- IndianPolitics
---
# Model name
Indian Political Tweets LM Medium (Based on GPT2-Medium)
## Model description
This is a GPT2 Language model with LM head fine-tuned on tweets crawled from handles which belong predominantly to Indian Politics. For more information about the crawled data, you can go through this [blog](https://bagdeabhishek.github.io/twitterAnalysis) post.
This model is fine-tuned from GPT2-medium instead of the vanilla GPT2 implementation. It has more parameters but models the language slightly better.
## Intended uses & limitations
This finetuned model can be used to generate tweets which are related to Indian politics.
#### How to use
```python
from transformers import AutoTokenizer,AutoModelWithLMHead,pipeline
tokenizer = AutoTokenizer.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
model = AutoModelWithLMHead.from_pretrained("bagdaebhishek/IndianPoliticalTweetsLM")
text_generator = pipeline("text-generation",model=model, tokenizer=tokenizer)
init_sentence = "India will always be"
print(text_generator(init_sentence))
```
#### Limitations and bias
1. The tweets used to train the model were not manually labelled, so the generated text may not always be in English. I've cleaned the data to remove non-English tweets but the model may generate "Hinglish" text and hence no assumptions should be made about the language of the generated text.
2. I've taken care to remove tweets from Twitter handles which are not very influential, but since the data is not curated by hand there might be some artefacts like "-sent via NamoApp" etc.
3. Like any language model trained on real-world data this model also exhibits some biases which unfortunately are a part of the political discourse on Twitter. Please keep this in mind while using the output from this model.
## Training data
I used the pre-trained gpt2-medium model from the Hugging Face transformers repository and fine-tuned it on a custom dataset crawled from Twitter. The method used to identify the political handles is described in detail in a [blog](https://bagdeabhishek.github.io/twitterAnalysis) post. I used tweets from both the Pro-BJP and Anti-BJP clusters mentioned in the blog.
## Training procedure
For pre-processing, I removed tweets from handles which are not very influential in their cluster. I identified these by calculating eigenvector centrality on the Twitter graph and pruning handles whose centrality falls below a certain threshold. This threshold was set manually after experimenting with different values.
I then separated the tweets from these handles by language and trained the LM on the English tweets from both clusters.
### Hardware
1. GPU: GTX 1080Ti
2. CPU: Ryzen 3900x
3. RAM: 32GB
This model took roughly 36 hours to fine-tune.
|
baihaisheng/bert_finetuning_test | 2021-05-19T12:07:08.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"eval_results_mrpc.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| baihaisheng | 14 | transformers | |
bakrianoo/sinai-voice-ar-stt | 2021-03-30T13:08:16.000Z | [
"pytorch",
"wav2vec2",
"ar",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| bakrianoo | 55 | transformers | ---
language: ar
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Sinai Voice Arabic Speech Recognition Model
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice ar
type: common_voice
args: ar
metrics:
- name: Test WER
type: wer
value: 23.80
---
# Sinai Voice Arabic Speech Recognition Model
# The **Sinai Voice** (صوت سيناء) model for recognizing Modern Standard Arabic speech and transcribing it to text
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Arabic using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
Most of the evaluation code in this documentation is inspired by [elgeish/wav2vec2-large-xlsr-53-arabic](https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-arabic).
Please install:
- [PyTorch](https://pytorch.org/)
- `$ pip3 install jiwer lang_trans torchaudio datasets transformers pandas tqdm`
## Benchmark
We evaluated the model against different Arabic-STT Wav2Vec models.
[**WER**: Word Error Rate] The lower the score, the better the model.
| | Model | [using transliteration](https://pypi.org/project/lang-trans/) | WER | Training Datasets |
|---:|:--------------------------------------|:---------------------|---------:|---------:|
| 1 | bakrianoo/sinai-voice-ar-stt | True | 0.238001 |Common Voice 6|
| 2 | elgeish/wav2vec2-large-xlsr-53-arabic | True | 0.266527 |Common Voice 6 + Arabic Speech Corpus|
| 3 | othrif/wav2vec2-large-xlsr-arabic | True | 0.298122 |Common Voice 6|
| 4 | bakrianoo/sinai-voice-ar-stt | False | 0.448987 |Common Voice 6|
| 5 | othrif/wav2vec2-large-xlsr-arabic | False | 0.464004 |Common Voice 6|
| 6 | anas/wav2vec2-large-xlsr-arabic | True | 0.506191 |Common Voice 4|
| 7 | anas/wav2vec2-large-xlsr-arabic | False | 0.622288 |Common Voice 4|
<details>
<summary>We used the following <b>CODE</b> to generate the above results</summary>
```python
import jiwer
import torch
from tqdm.auto import tqdm
import torchaudio
from datasets import load_dataset
from lang_trans.arabic import buckwalter
from transformers import set_seed, Wav2Vec2ForCTC, Wav2Vec2Processor
import pandas as pd
# load test dataset
set_seed(42)
test_split = load_dataset("common_voice", "ar", split="test")
# init sample rate resamplers
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
# WER composer
transformation = jiwer.Compose([
# normalize some diacritics, remove punctuation, and replace Persian letters with Arabic ones
jiwer.SubstituteRegexes({
    r'[auiFNKo\~_،؟»\?;:\-,\.؛«!"]': "", "\u06D6": "",
    r"[\|\{]": "A", "p": "h", "ک": "k", "ی": "y"}),
# default transformation below
jiwer.RemoveMultipleSpaces(),
jiwer.Strip(),
jiwer.SentencesToListOfWords(),
jiwer.RemoveEmptyStrings(),
])
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
if sampling_rate in resamplers:
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
else:
example["speech"] = resamplers[4800](speech).squeeze().numpy()
return example
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
predicted = torch.argmax(model(inputs.input_values.to("cuda")).logits, dim=-1)
predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
batch["predicted"] = processor.batch_decode(predicted)
return batch
# prepare the test dataset
test_split = test_split.map(prepare_example)
stt_models = [
"elgeish/wav2vec2-large-xlsr-53-arabic",
"othrif/wav2vec2-large-xlsr-arabic",
"anas/wav2vec2-large-xlsr-arabic",
"bakrianoo/sinai-voice-ar-stt"
]
stt_results = []
for model_path in tqdm(stt_models):
processor = Wav2Vec2Processor.from_pretrained(model_path)
model = Wav2Vec2ForCTC.from_pretrained(model_path).to("cuda").eval()
test_split_preds = test_split.map(predict, batched=True, batch_size=56, remove_columns=["speech"])
orig_metrics = jiwer.compute_measures(
truth=[s for s in test_split_preds["sentence"]],
hypothesis=[s for s in test_split_preds["predicted"]],
truth_transform=transformation,
hypothesis_transform=transformation,
)
trans_metrics = jiwer.compute_measures(
truth=[buckwalter.trans(s) for s in test_split_preds["sentence"]], # Buckwalter transliteration
hypothesis=[buckwalter.trans(s) for s in test_split_preds["predicted"]], # Buckwalter transliteration
truth_transform=transformation,
hypothesis_transform=transformation,
)
stt_results.append({
"model": model_path,
"using_transliation": True,
"WER": trans_metrics["wer"]
})
stt_results.append({
"model": model_path,
"using_transliation": False,
"WER": orig_metrics["wer"]
})
del model
del processor
stt_results_df = pd.DataFrame(stt_results)
stt_results_df = stt_results_df.sort_values('WER', axis=0, ascending=True)
stt_results_df.head(n=50)
```
</details>
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from lang_trans.arabic import buckwalter
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
dataset = load_dataset("common_voice", "ar", split="test[:10]")
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
if sampling_rate in resamplers:
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
else:
example["speech"] = resamplers[4800](speech).squeeze().numpy()
return example
dataset = dataset.map(prepare_example)
processor = Wav2Vec2Processor.from_pretrained("bakrianoo/sinai-voice-ar-stt")
model = Wav2Vec2ForCTC.from_pretrained("bakrianoo/sinai-voice-ar-stt").eval()
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
predicted = torch.argmax(model(inputs.input_values).logits, dim=-1)
predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
batch["predicted"] = processor.tokenizer.batch_decode(predicted)
return batch
dataset = dataset.map(predict, batched=True, batch_size=1, remove_columns=["speech"])
for reference, predicted in zip(dataset["sentence"], dataset["predicted"]):
print("reference:", reference)
print("predicted:", predicted)
print("--")
```
Here's the output:
```
reference: ألديك قلم ؟
predicted: ألديك قلم
--
reference: ليست هناك مسافة على هذه الأرض أبعد من يوم أمس.
predicted: ليست نارك مسافة على هذه الأرض أبعد من يوم أمس
--
reference: إنك تكبر المشكلة.
predicted: إنك تكبر المشكلة
--
reference: يرغب أن يلتقي بك.
predicted: يرغب أن يلتقي بك
--
reference: إنهم لا يعرفون لماذا حتى.
predicted: إنهم لا يعرفون لماذا حتى
--
reference: سيسعدني مساعدتك أي وقت تحب.
predicted: سيسعدن مساعثتك أي وقد تحب
--
reference: أَحَبُّ نظريّة علمية إليّ هي أن حلقات زحل مكونة بالكامل من الأمتعة المفقودة.
predicted: أحب نظرية علمية إلي هي أن أحلقتز حلم كوينا بالكامل من الأمت عن المفقودة
--
reference: سأشتري له قلماً.
predicted: سأشتري له قلما
--
reference: أين المشكلة ؟
predicted: أين المشكل
--
reference: وَلِلَّهِ يَسْجُدُ مَا فِي السَّمَاوَاتِ وَمَا فِي الْأَرْضِ مِنْ دَابَّةٍ وَالْمَلَائِكَةُ وَهُمْ لَا يَسْتَكْبِرُونَ
predicted: ولله يسجد ما في السماوات وما في الأرض من دابة والملائكة وهم لا يستكبرون
```
## Evaluation
The model can be evaluated as follows on the Arabic test data of Common Voice:
```python
import jiwer
import torch
import torchaudio
from datasets import load_dataset
from lang_trans.arabic import buckwalter
from transformers import set_seed, Wav2Vec2ForCTC, Wav2Vec2Processor
set_seed(42)
test_split = load_dataset("common_voice", "ar", split="test")
resamplers = { # all three sampling rates exist in test split
48000: torchaudio.transforms.Resample(48000, 16000),
44100: torchaudio.transforms.Resample(44100, 16000),
32000: torchaudio.transforms.Resample(32000, 16000),
}
def prepare_example(example):
speech, sampling_rate = torchaudio.load(example["path"])
if sampling_rate in resamplers:
example["speech"] = resamplers[sampling_rate](speech).squeeze().numpy()
else:
example["speech"] = resamplers[4800](speech).squeeze().numpy()
return example
test_split = test_split.map(prepare_example)
processor = Wav2Vec2Processor.from_pretrained("bakrianoo/sinai-voice-ar-stt")
model = Wav2Vec2ForCTC.from_pretrained("bakrianoo/sinai-voice-ar-stt").to("cuda").eval()
def predict(batch):
inputs = processor(batch["speech"], sampling_rate=16000, return_tensors="pt", padding=True)
with torch.no_grad():
predicted = torch.argmax(model(inputs.input_values.to("cuda")).logits, dim=-1)
predicted[predicted == -100] = processor.tokenizer.pad_token_id # see fine-tuning script
batch["predicted"] = processor.batch_decode(predicted)
return batch
test_split = test_split.map(predict, batched=True, batch_size=16, remove_columns=["speech"])
transformation = jiwer.Compose([
# normalize some diacritics, remove punctuation, and replace Persian letters with Arabic ones
jiwer.SubstituteRegexes({
    r'[auiFNKo\~_،؟»\?;:\-,\.؛«!"]': "", "\u06D6": "",
    r"[\|\{]": "A", "p": "h", "ک": "k", "ی": "y"}),
# default transformation below
jiwer.RemoveMultipleSpaces(),
jiwer.Strip(),
jiwer.SentencesToListOfWords(),
jiwer.RemoveEmptyStrings(),
])
metrics = jiwer.compute_measures(
truth=[buckwalter.trans(s) for s in test_split["sentence"]], # Buckwalter transliteration
hypothesis=[buckwalter.trans(s) for s in test_split["predicted"]],
truth_transform=transformation,
hypothesis_transform=transformation,
)
print(f"WER: {metrics['wer']:.2%}")
```
**Test Result**: 23.80%
[**WER**: Word Error Rate] The lower the score, the better the model.
## Other Arabic Voice Recognition Models
Words are not enough to thank those who believe there is hope, and who strive for it.
- [elgeish/wav2vec2-large-xlsr-53-arabic](https://huggingface.co/elgeish/wav2vec2-large-xlsr-53-arabic)
- [othrif/wav2vec2-large-xlsr-arabic](https://huggingface.co/othrif/wav2vec2-large-xlsr-arabic)
- [anas/wav2vec2-large-xlsr-arabic](https://huggingface.co/anas/wav2vec2-large-xlsr-arabic)
|
bala1802/model_1_test | 2021-05-21T13:59:23.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/config.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/merges.txt",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/optimizer.pt",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/pytorch_model.bin",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/scheduler.pt",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/special_tokens_map.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/tokenizer_config.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/trainer_state.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/training_args.bin",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/vocab.json"
]
| bala1802 | 6 | transformers | |
balamariannmt/LanguageModel_Trial_2 | 2021-05-21T14:01:08.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/config.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/merges.txt",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/optimizer.pt",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/pytorch_model.bin",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/scheduler.pt",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/special_tokens_map.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/tokenizer_config.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/trainer_state.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/training_args.bin",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/vocab.json"
]
| balamariannmt | 19 | transformers | |
balawmt/LanguageModel_Trial_1 | 2021-05-21T14:03:49.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/config.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/merges.txt",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/optimizer.pt",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/pytorch_model.bin",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/scheduler.pt",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/special_tokens_map.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/tokenizer_config.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/trainer_state.json",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/training_args.bin",
"Checkpoints_1_6_M/gpt2-python-language-model/checkpoint-5130/vocab.json"
]
| balawmt | 75 | transformers | |
balu/absamodel | 2021-05-28T10:18:55.000Z | []
| [
".gitattributes"
]
| balu | 0 | |||
balu/aus | 2021-05-25T06:51:16.000Z | []
| [
".gitattributes",
"README.md"
]
| balu | 0 | |||
balu/auscura | 2021-05-19T13:25:58.000Z | []
| [
".gitattributes"
]
| balu | 0 | |||
balu/fff | 2021-05-26T05:40:12.000Z | []
| [
".gitattributes"
]
| balu | 0 | |||
balu/ffff | 2021-05-26T05:44:26.000Z | []
| [
".gitattributes",
"samplt.plk"
]
| balu | 0 | |||
balu/sample | 2021-05-24T10:56:34.000Z | []
| [
".gitattributes"
]
| balu | 0 | |||
bandainamco-mirai/distilbert-base-japanese | 2020-11-19T13:17:22.000Z | [
"pytorch",
"distilbert",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| bandainamco-mirai | 1,099 | transformers | ||
barissayil/bert-sentiment-analysis-sst | 2021-06-11T09:47:14.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| barissayil | 1,032 | transformers | |
bashar-talafha/multi-dialect-bert-base-arabic | 2021-05-19T12:08:22.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"ar",
"dataset:nadi",
"arxiv:2007.05612",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
]
| bashar-talafha | 445 | transformers | ---
language: ar
thumbnail: https://raw.githubusercontent.com/mawdoo3/Multi-dialect-Arabic-BERT/master/multidialct_arabic_bert.png
datasets:
- nadi
---
# Multi-dialect-Arabic-BERT
This is the repository of the Multi-dialect Arabic BERT model.
By [Mawdoo3-AI](https://ai.mawdoo3.com/).
<p align="center">
<br>
<img src="https://raw.githubusercontent.com/mawdoo3/Multi-dialect-Arabic-BERT/master/multidialct_arabic_bert.png" alt="Background reference: http://www.qfi.org/wp-content/uploads/2018/02/Qfi_Infographic_Mother-Language_Final.pdf" width="500"/>
<br>
<p>
### About our Multi-dialect-Arabic-BERT model
Instead of training the Multi-dialect Arabic BERT model from scratch, we initialized the weights of the model using [Arabic-BERT](https://github.com/alisafaya/Arabic-BERT) and trained it on 10M Arabic tweets from the unlabeled data of [The Nuanced Arabic Dialect Identification (NADI) shared task](https://sites.google.com/view/nadi-shared-task).
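A rough, non-authoritative sketch of that recipe with `transformers` is shown below; the checkpoint id `asafaya/bert-base-arabic` and the file `tweets.txt` (one unlabeled NADI tweet per line) are assumptions, not details from this card:
```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, LineByLineTextDataset,
                          Trainer, TrainingArguments)

# warm-start from Arabic-BERT instead of a random initialization
tokenizer = AutoTokenizer.from_pretrained("asafaya/bert-base-arabic")
model = AutoModelForMaskedLM.from_pretrained("asafaya/bert-base-arabic")

# tweets.txt is a placeholder for the crawled, unlabeled NADI tweets
dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="tweets.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="multi-dialect-bert-base-arabic"),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()
```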
### To cite this work
```
@misc{talafha2020multidialect,
title={Multi-Dialect Arabic BERT for Country-Level Dialect Identification},
author={Bashar Talafha and Mohammad Ali and Muhy Eddin Za'ter and Haitham Seelawi and Ibraheem Tuffaha and Mostafa Samir and Wael Farhan and Hussein T. Al-Natsheh},
year={2020},
eprint={2007.05612},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Usage
The model weights can be loaded using `transformers` library by HuggingFace.
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("bashar-talafha/multi-dialect-bert-base-arabic")
model = AutoModel.from_pretrained("bashar-talafha/multi-dialect-bert-base-arabic")
```
Example using `pipeline`:
```python
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model="bashar-talafha/multi-dialect-bert-base-arabic ",
tokenizer="bashar-talafha/multi-dialect-bert-base-arabic "
)
fill_mask(" سافر الرحالة من مطار [MASK] ")
```
```
[{'sequence': '[CLS] سافر الرحالة من مطار الكويت [SEP]', 'score': 0.08296813815832138, 'token': 3226},
{'sequence': '[CLS] سافر الرحالة من مطار دبي [SEP]', 'score': 0.05123933032155037, 'token': 4747},
{'sequence': '[CLS] سافر الرحالة من مطار مسقط [SEP]', 'score': 0.046838656067848206, 'token': 13205},
{'sequence': '[CLS] سافر الرحالة من مطار القاهرة [SEP]', 'score': 0.03234650194644928, 'token': 4003},
{'sequence': '[CLS] سافر الرحالة من مطار الرياض [SEP]', 'score': 0.02606341242790222, 'token': 2200}]
```
### Repository
Please check the [original repository](https://github.com/mawdoo3/Multi-dialect-Arabic-BERT) for more information.
|
bayartsogt/albert-mongolian | 2021-03-17T19:01:07.000Z | [
"pytorch",
"tf",
"albert",
"masked-lm",
"mn",
"arxiv:1904.00962",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tf_model.h5",
"tokenizer_config.json"
]
| bayartsogt | 77 | transformers | ---
language: mn
---
# ALBERT-Mongolian
[pretraining repo link](https://github.com/bayartsogt-ya/albert-mongolian)
## Model description
Here we provide a pretrained ALBERT model and a trained SentencePiece model for Mongolian text. The training data consists of the Mongolian Wikipedia corpus from Wikipedia Downloads and the Mongolian News corpus.
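As a minimal usage sketch (my own, not from this card), the checkpoint can be loaded with the standard `transformers` fill-mask pipeline; the Mongolian example sentence is illustrative:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bayartsogt/albert-mongolian")

# "The capital of Mongolia is [MASK] city."
masked = f"Монгол улсын нийслэл {fill_mask.tokenizer.mask_token} хот юм."
print(fill_mask(masked))
```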
## Evaluation Result:
```
loss = 1.7478163
masked_lm_accuracy = 0.6838185
masked_lm_loss = 1.6687671
sentence_order_accuracy = 0.998125
sentence_order_loss = 0.007942731
```
## Fine-tuning Result on Eduge Dataset:
```
precision recall f1-score support
байгал орчин 0.85 0.83 0.84 999
боловсрол 0.80 0.80 0.80 873
спорт 0.98 0.98 0.98 2736
технологи 0.88 0.93 0.91 1102
улс төр 0.92 0.85 0.89 2647
урлаг соёл 0.93 0.94 0.94 1457
хууль 0.89 0.87 0.88 1651
эдийн засаг 0.83 0.88 0.86 2509
эрүүл мэнд 0.89 0.92 0.90 1159
accuracy 0.90 15133
macro avg 0.89 0.89 0.89 15133
weighted avg 0.90 0.90 0.90 15133
```
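A hedged sketch of such fine-tuning is below; the dataset id `eduge`, its `text`/`label` columns, and the `train`/`test` splits are assumptions, while `num_labels=9` matches the nine classes above:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("eduge")  # assumed dataset id
tokenizer = AutoTokenizer.from_pretrained("bayartsogt/albert-mongolian")
model = AutoModelForSequenceClassification.from_pretrained(
    "bayartsogt/albert-mongolian", num_labels=9)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

encoded = dataset.map(tokenize, batched=True)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="albert-mongolian-eduge"),
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],
)
trainer.train()
```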
## Reference
1. [ALBERT - official repo](https://github.com/google-research/albert)
2. [WikiExtrator](https://github.com/attardi/wikiextractor)
3. [Mongolian BERT](https://github.com/tugstugi/mongolian-bert)
4. [ALBERT - Japanese](https://github.com/alinear-corp/albert-japanese)
5. [Mongolian Text Classification](https://github.com/sharavsambuu/mongolian-text-classification)
6. [You's paper](https://arxiv.org/abs/1904.00962)
## Citation
```
@misc{albert-mongolian,
author = {Bayartsogt Yadamsuren},
title = {ALBERT Pretrained Model on Mongolian Datasets},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/bayartsogt-ya/albert-mongolian/}}
}
```
## For More Information
Please contact [email protected].
|
bayartsogt/wav2vec2-large-xlsr-mongolian-v1 | 2021-04-05T15:03:51.000Z | [
"pytorch",
"wav2vec2",
"mn",
"dataset:common_voice mn",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| bayartsogt | 94 | transformers | ---
language: mn
datasets:
- common_voice mn
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian V1 by Bayartsogt
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 34.64
---
# Wav2Vec2-Large-XLSR-53-Mongolian-v1
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian-v1")
model.to("cuda")
chars_to_ignore_regex = '[\!\"\'\,\.\«\»\?\-]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 34.64 %
## Training
The Common Voice `train` dataset was used for training as well as ... and ... |
bayartsogt/wav2vec2-large-xlsr-mongolian | 2021-03-24T01:47:34.000Z | [
"pytorch",
"wav2vec2",
"mn",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| bayartsogt | 27 | transformers | ---
language: mn
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Mongolian by Bayartsogt
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice mn
type: common_voice
args: mn
metrics:
- name: Test WER
type: wer
value: 45.82
---
# Wav2Vec2-Large-XLSR-53-Mongolian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Mongolian using the [Common Voice](https://huggingface.co/datasets/common_voice)
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "mn", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Mongolian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "mn", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model = Wav2Vec2ForCTC.from_pretrained("bayartsogt/wav2vec2-large-xlsr-mongolian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�\'h\«\»]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
\\tbatch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
\\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 45.82%
## Training
❌ The Common Voice `train`, `validation`, and ... datasets were used for training as well as ... and ... # TODO: adapt to state all the datasets that were used for training.
❌ The script used for training can be found [here](...) # TODO: fill in a link to your training script here. If you trained your model in a colab, simply fill in the link here. If you trained the model locally, it would be great if you could upload the training script on github and paste the link here.
|
baykenney/bert-base-gpt2detector-random | 2021-05-19T12:09:16.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| baykenney | 16 | transformers | |
baykenney/bert-base-gpt2detector-topk40 | 2021-05-19T12:10:12.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| baykenney | 23 | transformers | |
baykenney/bert-base-gpt2detector-topp92 | 2021-05-19T12:11:13.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| baykenney | 21 | transformers | |
baykenney/bert-base-gpt2detector-topp96 | 2021-05-19T12:12:07.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| baykenney | 11 | transformers | |
baykenney/bert-large-gpt2detector-random | 2021-05-19T12:14:50.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| baykenney | 18 | transformers | |
baykenney/bert-large-gpt2detector-topk40 | 2021-05-19T12:19:13.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| baykenney | 19 | transformers | |
baykenney/bert-large-gpt2detector-topp92 | 2021-05-19T12:23:59.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| baykenney | 11 | transformers | |
baykenney/bert-large-gpt2detector-topp96 | 2021-05-19T12:26:23.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| baykenney | 16 | transformers | |
bburnskm/frog-and-toad | 2021-05-30T20:37:06.000Z | []
| [
".gitattributes"
]
| bburnskm | 0 | |||
bclee232/custombert | 2021-01-12T09:46:42.000Z | [
"albert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json"
]
| bclee232 | 7 | transformers | |
begimayk/Semantir2 | 2021-06-14T10:42:11.000Z | []
| [
".gitattributes"
]
| begimayk | 0 | |||
begimayk/task | 2021-06-11T14:11:22.000Z | []
| [
".gitattributes"
]
| begimayk | 0 | |||
begimayk/try1 | 2021-06-14T13:09:54.000Z | []
| [
".gitattributes",
"README.md"
]
| begimayk | 0 | from transformers import pipeline
import json
import requests
API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B"
headers = {"Authorization": "Bearer api_hwKbAMoHAzOVDdCxgfpPxMjjcrdKHMakhg"}
def query(payload):
    data = json.dumps(payload)
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))
data = query("Can you please let us know more details about your ") |
||
bella/bert_finetuning_test | 2021-05-19T12:27:44.000Z | [
"pytorch",
"jax",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"eval_results_mrpc.txt",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| bella | 10 | transformers | |
bemanningssitua/pwpndhdfgb | 2021-03-28T15:36:55.000Z | []
| [
".gitattributes",
"README.md"
]
| bemanningssitua | 0 | <a href="https://sites.google.com/view/fullwatchgodzillavskong2021wat">free robux generator</a>
<a href="https://sites.google.com/view/vvvbbbvvv">free robux generator</a>
<a href="https://sites.google.com/view/cccuuuccc">free robux generator</a>
<a href="https://sites.google.com/view/fullwatchgodzillavskong2021fre">free robux generator</a>
<a href="https://sites.google.com/view/gggwwppp">free robux generator</a>
<a href="https://sites.google.com/view/fffrrrfff">free robux generator</a>
<a href="https://sites.google.com/view/mahmoedjamil">free robux generator</a>
<a href="https://sites.google.com/view/freegodzilla">free robux generator</a>
<a href="https://sites.google.com/view/fullwatch123moviesgodzillavsko">free robux generator</a>
<a href="https://sites.google.com/view/watchgodzillavskong2021fullfre">free robux generator</a>
<a href="https://sites.google.com/view/sdkjfsdgf">free robux generator</a>
<a href="https://sites.google.com/view/kseryhfdg">free robux generator</a>
<a href="https://sites.google.com/view/sdfdgergdfg">free robux generator</a>
<a href="https://sites.google.com/view/watchgodzillavskongfull2021fre">free robux generator</a>
<a href="https://sites.google.com/view/gfergtdfg">free robux generator</a>
<a href="https://sites.google.com/view/sasdasdwed">free robux generator</a>
<a href="https://sites.google.com/view/sdfsdfwe">free robux generator</a>
<a href="https://sites.google.com/view/free-download-godzilla-vs-kong">free robux generator</a>
<a href="https://sites.google.com/view/dasdertdfg">free robux generator</a>
<a href="https://sites.google.com/view/sdfiuysdgfdg">free robux generator</a>
<a href="https://sites.google.com/view/yyfdfsdff">free robux generator</a>
<a href="https://sites.google.com/view/asdertyddv">free robux generator</a>
<a href="https://sites.google.com/view/dffghydcfr">free robux generator</a>
<a href="https://sites.google.com/view/sdgfdsgfd">free robux generator</a>
<a href="https://sites.google.com/view/fre-watch-godzilla-vs-kong-202">free robux generator</a>
<a href="https://sites.google.com/view/freewatchgodzillavskong2021onl">free robux generator</a>
<a href="https://sites.google.com/view/ttgjhjs">free robux generator</a>
<a href="https://sites.google.com/view/free-godzilla-vs-kong-2021-wat">free robux generator</a>
<a href="https://sites.google.com/view/uudfdfggf">free robux generator</a>
<a href="https://sites.google.com/view/onlinewatch-godzillavskong2021">free robux generator</a>
<a href="https://sites.google.com/view/ftysduig">free robux generator</a>
<a href="https://sites.google.com/view/dfwerffy">free robux generator</a>
<a href="https://sites.google.com/view/poek">free robux generator</a>
<a href="https://sites.google.com/view/eeddun">free robux generator</a>
<a href="https://sites.google.com/view/fullgodzillavskong2021watchonl">free robux generator</a>
<a href="https://sites.google.com/view/livewatchmotogpqatar2021watchl/">free robux generator</a>
<a href="https://sites.google.com/view/watch-motogp-qatar-2021-live/">free robux generator</a>
<a href="https://sites.google.com/view/live-watch-motogp-qatar-2021-f/">free robux generator</a>
<a href="https://sites.google.com/view/watch-motogpqatar2021live/">free robux generator</a>
<a href="https://sites.google.com/view/watchmotogpqatar2021live/">free robux generator</a>
<a href="https://sites.google.com/view/acascasc/">free robux generator</a>
<a href="https://sites.google.com/view/freewatchmotogpqatar2021live/">free robux generator</a>
<a href="https://sites.google.com/view/live-watch-streaming-motogp-qa/">free robux generator</a>
<a href="https://sites.google.com/view/watch-motogp-qatar-2021-live-f/">free robux generator</a>
<a href="https://sites.google.com/view/livewatchmotogpqatar2021freeon/">free robux generator</a>
<a href="https://sites.google.com/view/ascascascascasc/">free robux generator</a>
<a href="https://sites.google.com/view/watch-motogp-qatar-2021-online/">free robux generator</a>
<a href="https://sites.google.com/view/watch-motogp-qatar-live-2021-f/">free robux generator</a>
<a href="https://sites.google.com/view/free-watch-motogp-qatar-2021-l/">free robux generator</a>
<a href="https://sites.google.com/view/acadwqdwq/">free robux generator</a>
<a href="https://sites.google.com/view/wqqeweqwe/">free robux generator</a>
<a href="https://sites.google.com/view/freestreamingmotogpqatar2021wa/">free robux generator</a>
<a href="https://sites.google.com/view/watchmotogpqatar2021online/">free robux generator</a>
<a href="https://sites.google.com/view/live-watch-motogp-qatar-2021-o/">free robux generator</a>
<a href="https://sites.google.com/view/dsdfgfdhg/">free robux generator</a>
<a href="https://sites.google.com/view/zxczxcxzczxcsa/">free robux generator</a>
<a href="https://sites.google.com/view/fre-watch-motogp-qatar-2021-li/">free robux generator</a>
<a href="https://sites.google.com/view/free-watch-motogp-qatar-2021-o/">free robux generator</a>
<a href="https://sites.google.com/view/qwdwefrgr/">free robux generator</a>
<a href="https://sites.google.com/view/free-motogp-qatar-2021-watch-l/">free robux generator</a>
<a href="https://sites.google.com/view/freemotogpqatar2021watchlive/">free robux generator</a>
<a href="https://sites.google.com/view/freemotogpqatar2021watchlive/">free robux generator</a>
<a href="https://sites.google.com/view/online-watch-motogp-qatar-2021/">free robux generator</a>
<a href="https://sites.google.com/view/streamwatchmotogpqatar2021live/">free robux generator</a>
<a href="https://sites.google.com/view/ascsacxzqwqe/">free robux generator</a>
<a href="https://sites.google.com/view/livewatchmotogpqatar2021freest/">free robux generator</a>
<a href="https://sites.google.com/view/watch-motogp-qatar-2021-free-o/">free robux generator</a>
<a href="https://sites.google.com/view/live-motogp-qatar-2021-watch-o/">free robux generator</a>
<a href="https://sites.google.com/view/watch-365-days-2020-f-u-l-l-f-/">free robux generator</a>
<a href="https://sites.google.com/view/watchanotherround2020fullfree/">free robux generator</a>
<a href="https://sites.google.com/view/watcharmyofthedead2021fullfree/">free robux generator</a>
<a href="https://sites.google.com/view/watchavengersendgame2019fullfr/">free robux generator</a>
<a href="https://sites.google.com/view/watchblackwidow2021fullfree/">free robux generator</a>
<a href="https://sites.google.com/view/shydturf/">free robux generator</a>
<a href="https://sites.google.com/view/watchbosslevel2021fullfree/">free robux generator</a>
<a href="https://sites.google.com/view/watchcaptainmarvel2019fullfree/">free robux generator</a>
<a href="https://sites.google.com/view/fjkghgk/">free robux generator</a>
<a href="https://sites.google.com/view/watch-cherry-2021-f-u-l-l-f-r-/">free robux generator</a>
<a href="https://sites.google.com/view/watch-come-true-2020-f-u-l-l-f/">free robux generator</a>
<a href="https://sites.google.com/view/watchcoming2america2021fullfre/">free robux generator</a>
<a href="https://sites.google.com/view/watch-concrete-cowboy-2020-f-u/">free robux generator</a>
<a href="https://sites.google.com/view/watchcosmicsin2021fullfree/">free robux generator</a>
<a href="https://sites.google.com/view/watchcrisis2021fullfree/">free robux generator</a>
<a href="https://sites.google.com/view/watchcruella2021fullfree/">free robux generator</a>
<a href="https://sites.google.com/view/watchdeadlyillusions2021fullfr/">free robux generator</a>
<a href="https://sites.google.com/view/watch-dune-2021-f-u-l-l-f-r-e-/">free robux generator</a>
<a href="https://sites.google.com/view/watchf92021fullfree/">free robux generator</a>
<a href="https://sites.google.com/view/watchgirlinthebasement2021full/">free robux generator</a>
<a href="https://sites.google.com/view/hfhkdfkfd/">free robux generator</a>
<a href="https://sites.google.com/view/fhfgjfg/">free robux generator</a>
<a href="https://sites.google.com/view/hdfhdhs/">free robux generator</a>
<a href="https://sites.google.com/view/dhftgfgj/">free robux generator</a>
<a href="https://sites.google.com/view/fhfdjhfj/">free robux generator</a>
<a href="https://sites.google.com/view/djfjfjdj/">free robux generator</a>
<a href="https://sites.google.com/view/dfujfjgjkg/">free robux generator</a>
<a href="https://sites.google.com/view/fhfjhfjed/">free robux generator</a>
<a href="https://sites.google.com/view/watchgodzillavskong2021fulldow/">free robux generator</a>
<a href="https://sites.google.com/view/sdfhyturyi/">free robux generator</a>
<a href="https://sites.google.com/view/urutyiutyi/">free robux generator</a>
<a href="https://sites.google.com/view/dujryirit/">free robux generator</a>
<a href="https://sites.google.com/view/full-watch-123movies-godzilla-/">free robux generator</a>
<a href="https://sites.google.com/view/dfjutujry/">free robux generator</a>
<a href="https://sites.google.com/view/fgjiyrfikr/">free robux generator</a>
<a href="https://sites.google.com/view/djdgjdjd/">free robux generator</a>
<a href="https://sites.google.com/view/dfujdtjdrr/">free robux generator</a>
<a href="https://sites.google.com/view/jggogogo/">free robux generator</a>
<a href="https://sites.google.com/view/hfgufgut/">free robux generator</a>
<a href="https://sites.google.com/view/fdjytrtyik/">free robux generator</a>
<a href="https://sites.google.com/view/hcfhjfkifi/">free robux generator</a>
<a href="https://sites.google.com/view/watch-godzilla-vs-kong-full-20/">free robux generator</a>
<a href="https://sites.google.com/view/hfkfuuou/">free robux generator</a>
<a href="https://sites.google.com/view/fkfkffkf/">free robux generator</a>
<a href="https://sites.google.com/view/watchgodzillavskong2021fullmp4/">free robux generator</a>
<a href="https://sites.google.com/view/fhedtjh/">free robux generator</a>
<a href="https://sites.google.com/view/freedownloadgodzillavskong2021/">free robux generator</a>
<a href="https://sites.google.com/view/dfhdjhdgj/">free robux generator</a>
<a href="https://sites.google.com/view/full-watch-godzilla-vs-kong-20/">free robux generator</a>
<a href="https://sites.google.com/view/djdgjjkdjd/">free robux generator</a>
<a href="https://sites.google.com/view/zcbhszfhbs/">free robux generator</a>
<a href="https://sites.google.com/view/fjdsfjsd/">free robux generator</a>
<a href="https://sites.google.com/view/gjdfkjkfyk/">free robux generator</a>
<a href="https://sites.google.com/view/hsfhdsdjh/">free robux generator</a>
<a href="https://sites.google.com/view/fhdsfjdjd/">free robux generator</a>
<a href="https://sites.google.com/view/fjsdjsdjsjs/">free robux generator</a>
<a href="https://sites.google.com/view/frewatchgodzillavskong2021full/">free robux generator</a>
<a href="https://sites.google.com/view/fgyifgi/">free robux generator</a>
<a href="https://sites.google.com/view/zhbfsjsjs/">free robux generator</a>
<a href="https://sites.google.com/view/freegodzillavskong2021watchful/">free robux generator</a>
<a href="https://sites.google.com/view/ggjglgjg/">free robux generator</a>
<a href="https://sites.google.com/view/dfshdsjfdgj/">free robux generator</a>
<a href="https://sites.google.com/view/online-watch-godzilla-vs-kong-/">free robux generator</a>
<a href="https://sites.google.com/view/shfsjsj/">free robux generator</a>
<a href="https://sites.google.com/view/dfhfsjjss/">free robux generator</a>
<a href="https://sites.google.com/view/wydj/">free robux generator</a>
<a href="https://sites.google.com/view/zhsfsjs/">free robux generator</a>
<a href="https://sites.google.com/view/full-godzilla-vs-kong-2021-wat/">free robux generator</a>
<a href="https://dispenst.medium.com/cases-rising-in-the-us-vaccines-safe-and-effective-for-babies-study-shows-covid-19-updates-d8388bcb368e">free robux generator</a>
<a href="https://dispenst.medium.com/40-million-vaccine-doses-thus-far-unused-35-of-american-adults-have-had-a-shot-live-covid-19-f04f701ca5b2">free robux generator</a>
<a href="https://dispenst.medium.com/southern-states-brace-for-another-round-of-severe-weather-including-tornadoes-471a7b050226">free robux generator</a>
<a href="https://www.posts123.com/post/1450870/cases-rising-again-in-the-us-vaccines-safe-and-effective-for-babies-study-shows-covid-19-updates">free robux generator</a>
<a href="https://www.reddit.com/user/anankastic55/comments/mf2mr5/study_shows_covid19_updates/">free robux generator</a>
<a href="https://m.mydigoo.com/forums-topicdetail-250476.html">free robux generator</a>
<a href="https://paiza.io/projects/ZJmZi54caJXny2NGQnuqYw">free robux generator</a>
<a href="https://onlinegdb.com/S1gd3afAEu">free robux generator</a> |
||
benfrancis314/amath_563_project | 2021-06-05T21:30:28.000Z | []
| [
".gitattributes"
]
| benfrancis314 | 0 | |||
benjamin/gerpt2-large | 2021-05-21T14:07:25.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"de",
"transformers",
"license:mit",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| benjamin | 299 | transformers | ---
language: de
widget:
- text: "In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einhörner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten."
license: mit
---
# GerPT2-large
A large German GPT2.
Also check out [GerPT2](https://huggingface.co/benjamin/gerpt2), a small version of this model.
See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.
## Comparison to [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2)
I evaluated both GerPT2-large and the other German GPT2, [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on the [CC-100](http://data.statmt.org/cc-100/) dataset and on the German Wikipedia:
| | CC-100 (PPL) | Wikipedia (PPL) |
|-------------------|--------------|-----------------|
| dbmdz/german-gpt2 | 49.47 | 62.92 |
| GerPT2 | 24.78 | 35.33 |
| GerPT2-large | 16.08 | 23.26 |
| | | |
See the script `evaluate.py` in the [GerPT2 Github repository](https://github.com/bminixhofer/gerpt2) for the code.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2-large")
model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2-large")
prompt = "<your prompt>"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe(prompt)[0]["generated_text"])
```
Also, two tricks might improve the generated text:
```python
import torch

max_length = 100  # not defined in the original snippet; any sensible cap works

output = model.generate(
    # during training an EOS token was used to mark the beginning of each text
    # so it can help to insert it at the start
    torch.tensor(
        [tokenizer.eos_token_id] + tokenizer.encode(prompt)
    ).unsqueeze(0),
    do_sample=True,
    # try setting bad_words_ids=[[0]] to disallow generating an EOS token; without this the model is
    # prone to ending generation early because a significant number of texts from the training corpus
    # are quite short
    bad_words_ids=[[0]],
    max_length=max_length,
)[0]
print(tokenizer.decode(output))
```
## Training details
GerPT2-large was trained on the entire German portion (67GB) of the [CC-100 Corpus](http://data.statmt.org/cc-100/), with weights initialized from the [English GPT2 model](https://huggingface.co/gpt2-large).
GerPT2-large was trained with:
- a batch size of 256
- using OneCycle learning rate with a maximum of 5e-3
- with AdamW with a weight decay of 0.01
- for 2 epochs
Training took roughly 12 days on 8 TPUv3 cores.
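For illustration, that optimizer/scheduler pairing could be set up as follows in plain PyTorch. This is a minimal sketch: the checkpoint name matches the English initialization described above, while `num_training_steps` is a placeholder, and the real run used its own TPU training loop rather than this snippet.
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2-large")  # English init, as described
num_training_steps = 10_000  # placeholder, not the actual 2-epoch step count

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-3, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=5e-3, total_steps=num_training_steps
)
```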
To train GerPT2-large, follow these steps. Scripts are located in the [Github repository](https://github.com/bminixhofer/gerpt2):
0. Download and unzip training data from http://data.statmt.org/cc-100/.
1. Train a tokenizer using `prepare/train_tokenizer.py`. As training data for the tokenizer I used a random subset of 5% of the CC-100 data (see the sketch after this list).
2. (optionally) generate a German input embedding matrix with `prepare/generate_aligned_wte.py`. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings, e.g.:
```
ĠMinde -> Ġleast
Ġjed -> Ġwhatsoever
flughafen -> Air
vermittlung -> employment
teilung -> ignment
ĠInterpretation -> Ġinterpretation
Ġimport -> Ġimported
hansa -> irl
genehmigungen -> exempt
ĠAuflist -> Ġlists
Ġverschwunden -> Ġdisappeared
ĠFlyers -> ĠFlyers
Kanal -> Channel
Ġlehr -> Ġteachers
Ġnahelie -> Ġconvenient
gener -> Generally
mitarbeiter -> staff
```
This helped a lot in a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix, pass it to the training script via the `wte_path` argument. Credit to [this blogpost](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) for the idea of initializing GPT2 from English weights.
3. Tokenize the corpus using `prepare/tokenize_text.py`. This generates files for train and validation tokens in JSON Lines format.
4. Run the training script `train.py`! `run.sh` shows how this was executed for the full run with config `configs/tpu_large.json`.
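As referenced in step 1, here is a rough sketch of tokenizer training, assuming the `tokenizers` library's byte-level BPE trainer; the file path and vocabulary size are placeholders rather than the script's actual settings.
```python
from tokenizers import ByteLevelBPETokenizer

# train on a 5% random subset of the German CC-100 data (path is a placeholder)
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["cc100_de_5pct.txt"],
    vocab_size=50_257,  # GPT2-style vocab size, assumed here
    special_tokens=["<|endoftext|>"],
)
tokenizer.save_model("gerpt2_tokenizer")
```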
## License
GerPT2-large is licensed under the MIT License.
## Acknowledgements
Thanks to [Hugging Face](https://huggingface.co) for awesome tools and infrastructure.
Huge thanks to [Artus Krohn-Grimberghe](https://twitter.com/artuskg) at [LYTiQ](https://www.lytiq.de/) for making this possible by sponsoring the resources used for training. |
benjamin/gerpt2 | 2021-05-21T14:09:52.000Z | [
"pytorch",
"tf",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"de",
"transformers",
"license:mit",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tf_model.h5",
"tokenizer_config.json",
"vocab.json"
]
| benjamin | 1,367 | transformers | ---
language: de
widget:
- text: "In einer schockierenden Entdeckung fanden Wissenschaftler eine Herde Einhörner, die in einem abgelegenen, zuvor unerforschten Tal in den Anden lebten."
license: mit
---
# GerPT2
A small German GPT2.
Also check out [GerPT2-large](https://huggingface.co/benjamin/gerpt2-large), a large version of this model.
See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.
## Comparison to [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2)
I evaluated both GerPT2 and the other German GPT2, [dbmdz/german-gpt2](https://huggingface.co/dbmdz/german-gpt2) on the [CC-100](http://data.statmt.org/cc-100/) dataset and on the German Wikipedia:
| | CC-100 (PPL) | Wikipedia (PPL) |
|-------------------|--------------|-----------------|
| dbmdz/german-gpt2 | 49.47 | 62.92 |
| GerPT2 | 24.78 | 35.33 |
| GerPT2-large | 16.08 | 23.26 |
| | | |
See the script `evaluate.py` in the [GerPT2 Github repository](https://github.com/bminixhofer/gerpt2) for the code.
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("benjamin/gerpt2")
model = AutoModelForCausalLM.from_pretrained("benjamin/gerpt2")
prompt = "<your prompt>"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(pipe(prompt)[0]["generated_text"])
```
Also, two tricks might improve the generated text:
```python
import torch

max_length = 100  # not defined in the original snippet; any sensible cap works

output = model.generate(
    # during training an EOS token was used to mark the beginning of each text
    # so it can help to insert it at the start
    torch.tensor(
        [tokenizer.eos_token_id] + tokenizer.encode(prompt)
    ).unsqueeze(0),
    do_sample=True,
    # try setting bad_words_ids=[[0]] to disallow generating an EOS token; without this the model is
    # prone to ending generation early because a significant number of texts from the training corpus
    # are quite short
    bad_words_ids=[[0]],
    max_length=max_length,
)[0]
print(tokenizer.decode(output))
```
## Training details
GerPT2 was trained on the entire German portion (67GB) of the [CC-100 Corpus](http://data.statmt.org/cc-100/), with weights initialized from the [English GPT2 model](https://huggingface.co/gpt2).
GerPT2 was trained with:
- a batch size of 256
- using OneCycle learning rate with a maximum of 5e-3
- with AdamW with a weight decay of 0.01
- for 7 epochs
Training took roughly 6 days on 8 TPUv3 cores.
To train GerPT2, follow these steps. Scripts are located in the [Github repository](https://github.com/bminixhofer/gerpt2):
0. Download and unzip training data from http://data.statmt.org/cc-100/.
1. Train a tokenizer using `prepare/train_tokenizer.py`. As training data for the tokenizer I used a random subset of 5% of the CC-100 data.
2. (optionally) generate a German input embedding matrix with `prepare/generate_aligned_wte.py`. This uses a neat trick to semantically map tokens from the English tokenizer to tokens from the German tokenizer using aligned word embeddings, e.g.:
```
ĠMinde -> Ġleast
Ġjed -> Ġwhatsoever
flughafen -> Air
vermittlung -> employment
teilung -> ignment
ĠInterpretation -> Ġinterpretation
Ġimport -> Ġimported
hansa -> irl
genehmigungen -> exempt
ĠAuflist -> Ġlists
Ġverschwunden -> Ġdisappeared
ĠFlyers -> ĠFlyers
Kanal -> Channel
Ġlehr -> Ġteachers
Ġnahelie -> Ġconvenient
gener -> Generally
mitarbeiter -> staff
```
This helped a lot in a trial run I did, although I wasn't able to do a full comparison due to budget and time constraints. To use this WTE matrix, pass it to the training script via the `wte_path` argument; a rough sketch of the mapping idea follows this list. Credit to [this blogpost](https://medium.com/@pierre_guillou/faster-than-training-from-scratch-fine-tuning-the-english-gpt-2-in-any-language-with-hugging-f2ec05c98787) for the idea of initializing GPT2 from English weights.
3. Tokenize the corpus using `prepare/tokenize_text.py`. This generates files for train and validation tokens in JSON Lines format.
4. Run the training script `train.py`! `run.sh` shows how this was executed for the full run with config `configs/tpu.json`.
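As referenced in step 2, here is a rough, non-authoritative sketch of the embedding alignment idea. Everything here is illustrative: the function name, the gensim-style `KeyedVectors` interface for the aligned German/English word vectors, and the initialization scale are assumptions, not the actual contents of `prepare/generate_aligned_wte.py`.
```python
import numpy as np

def aligned_wte(de_tokenizer, en_tokenizer, en_wte, de_vecs, en_vecs):
    """Initialize a German WTE matrix from an English one via aligned word vectors (sketch)."""
    hidden_size = en_wte.shape[1]
    # start from a random init, as for freshly added tokens
    new_wte = np.random.normal(scale=0.02, size=(len(de_tokenizer), hidden_size))
    for de_token, de_id in de_tokenizer.get_vocab().items():
        if de_token not in de_vecs:
            continue  # no aligned vector; keep the random row
        # nearest English word in the shared (aligned) embedding space
        en_word = en_vecs.most_similar([de_vecs[de_token]], topn=1)[0][0]
        en_id = en_tokenizer.get_vocab().get(en_word)
        if en_id is not None:
            new_wte[de_id] = en_wte[en_id]
    return new_wte
```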
## License
GerPT2 is licensed under the MIT License.
## Acknowledgements
Thanks to [Hugging Face](https://huggingface.co) for awesome tools and infrastructure.
Special thanks to [PetFinder.my](https://www.petfinder.my/) for generously sponsoring the resources used for training.
|
benjaminlevy/CornBERT | 2021-01-28T03:35:03.000Z | []
| [
".gitattributes"
]
| benjaminlevy | 0 | |||
benmanns/press-mentions | 2021-04-26T17:28:07.000Z | []
| [
".gitattributes"
]
| benmanns | 0 | |||
bennu/gpt-neo-1.3B | 2021-04-07T08:27:23.000Z | []
| [
".gitattributes"
]
| bennu | 0 | |||
benny6/roberta-tydiqa | 2021-05-24T12:19:00.000Z | [
"pytorch",
"roberta",
"question-answering",
"transformers"
]
| question-answering | [
".gitattributes",
"config.json",
"eval_results.txt",
"merges.txt",
"model_args.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| benny6 | 13 | transformers | |
benny6/roberta_QA | 2021-05-20T14:26:38.000Z | [
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"language_model.bin",
"language_model_config.json",
"merges.txt",
"prediction_head_0.bin",
"prediction_head_0_config.json",
"processor_config.json",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| benny6 | 7 | transformers | |
beomi/KcELECTRA-base | 2021-04-07T04:05:29.000Z | [
"pytorch",
"electra",
"pretraining",
"transformers"
]
| [
".gitattributes",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| beomi | 14,727 | transformers | ||
beomi/exKcBERT-kowiki | 2021-06-14T13:45:28.000Z | [
"pytorch",
"exbert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"rng_state.pth",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
]
| beomi | 24 | transformers | |
beomi/exKcBERT-paws-extonly | 2021-06-14T06:35:28.000Z | [
"pytorch",
"exbert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
".gitignore",
"all_results.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
]
| beomi | 5 | transformers | |
beomi/exKcBERT-paws | 2021-06-10T16:21:09.000Z | [
"pytorch",
"exbert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
".gitignore",
"all_results.json",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"train_results.json",
"trainer_state.json",
"training_args.bin",
"vocab.txt"
]
| beomi | 70 | transformers | |
beomi/exKcBERT | 2021-06-10T04:57:38.000Z | []
| [
".gitattributes",
"bert_config_ex_s3.json",
"kbstop_loss.pkl",
"kbstop_stat_dict",
"loss.pkl",
"modu_sampled.pkl",
"state_dic_exBERT_0",
"vocab.txt"
]
| beomi | 0 | |||
beomi/kcbert-base-dev | 2021-05-19T12:28:53.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| beomi | 12 | transformers | |
beomi/kcbert-base | 2021-05-19T12:29:40.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| beomi | 131,714 | transformers | |
beomi/kcbert-large-dev | 2021-05-19T12:31:44.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| beomi | 15 | transformers | |
beomi/kcbert-large | 2021-05-19T12:35:08.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"tokenizer_config.json",
"vocab.txt"
]
| beomi | 834 | transformers | |
beomi/kcgpt2-dev | 2021-05-21T14:11:55.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"optimizer.pt",
"pytorch_model.bin",
"rng_state_0.pth",
"rng_state_1.pth",
"rng_state_2.pth",
"rng_state_3.pth",
"rng_state_4.pth",
"rng_state_5.pth",
"rng_state_6.pth",
"rng_state_7.pth",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| beomi | 19 | transformers | |
beomi/kobert | 2021-06-08T08:36:08.000Z | [
"pytorch",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"kobert_vocab.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| beomi | 15 | transformers | |
bergr7/finbert_sec | 2021-03-22T11:10:49.000Z | []
| [
".gitattributes"
]
| bergr7 | 0 | |||
berkergurcay/10k-pretrained-bert-model | 2021-05-27T10:15:44.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| berkergurcay | 13 | transformers | |
berkergurcay/1k-fineutuned-bert-model | 2021-05-23T14:40:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| berkergurcay | 7 | transformers | |
berkergurcay/1k-pretrained-bert-model | 2021-05-23T12:03:10.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| berkergurcay | 14 | transformers | |
berkergurcay/finetuned-bert-base-uncased | 2021-05-26T13:33:43.000Z | [
"pytorch",
"bert",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| berkergurcay | 9 | transformers | |
berkergurcay/finetuned-roberta | 2021-06-14T12:12:27.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"config.json",
"pytorch_model.bin"
]
| berkergurcay | 9 | transformers | |
bewgle/bart-large-mnli-bewgle | 2020-12-09T18:30:05.000Z | [
"pytorch",
"bart",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| bewgle | 55 | transformers | ---
widget:
- text: "I like you. </s></s> I love you."
---
## bart-large-mnli
Trained by Facebook, [original source](https://github.com/pytorch/fairseq/tree/master/examples/bart)
|
beyhan/bert-base-turkish-ner-cased-pretrained | 2021-05-19T12:37:40.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"test_predictions.txt",
"test_results.txt",
"training_args.bin",
"vocab.txt"
]
| beyhan | 27 | transformers | |
beyhan/checkpoint-3750 | 2021-05-19T12:38:52.000Z | [
"pytorch",
"jax",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"training_args.bin"
]
| beyhan | 14 | transformers | |
bhadresh-savani/albert-base-v2-emotion | 2021-06-18T15:25:19.000Z | [
"pytorch",
"albert",
"text-classification",
"en",
"dataset:emotion",
"arxiv:1909.11942",
"transformers",
"emotion",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json"
]
| bhadresh-savani | 0 | transformers | |
bhadresh-savani/bert-base-uncased-emotion | 2021-06-18T15:12:40.000Z | [
"pytorch",
"bert",
"text-classification",
"en",
"dataset:emotion",
"arxiv:1810.04805",
"transformers",
"emotion",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.txt"
]
| bhadresh-savani | 14 | transformers | |
bhadresh-savani/distilbert-base-uncased-emotion | 2021-06-18T15:17:00.000Z | [
"pytorch",
"distilbert",
"text-classification",
"en",
"dataset:emotion",
"arxiv:1910.01108",
"transformers",
"emotion",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| bhadresh-savani | 76,510 | transformers | ---
language:
- en
thumbnail: https://avatars3.githubusercontent.com/u/32437151?s=460&u=4ec59abc8d21d5feea3dab323d23a5860e6996a4&v=4
tags:
- text-classification
- emotion
- pytorch
license: apache-2.0
datasets:
- emotion
metrics:
- Accuracy
- F1 Score
---
# Distilbert-base-uncased-emotion
## Model description:
`Distilbert-base-uncased` fine-tuned on the emotion dataset using the HuggingFace Trainer with the following hyperparameters:
```
learning rate 2e-5,
batch size 64,
num_train_epochs=8,
```
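A minimal sketch of what such a run could look like with the `Trainer` API; the model and dataset names come from this card, while the column handling and output path are assumptions rather than the exact notebook code:
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("emotion")  # columns: "text", "label" (6 classes)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="distilbert-emotion",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    num_train_epochs=8,
)
Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
).train()
```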
## How to Use the model:
```python
from transformers import pipeline
classifier = pipeline("sentiment-analysis",model='bhadresh-savani/distilbert-base-uncased-emotion')
prediction = classifier("I love using transformers. The best part is wide range of support and its easy to use")
print(prediction)
```
## Dataset:
[Twitter-Sentiment-Analysis](https://huggingface.co/nlp/viewer/?dataset=emotion).
## Training procedure
[Colab Notebook](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithDistilbert.ipynb)
## Eval results
```
{
'test_accuracy': 0.938,
'test_f1': 0.937932884041714,
'test_loss': 0.1472451239824295,
'test_mem_cpu_alloc_delta': 0,
'test_mem_cpu_peaked_delta': 0,
'test_mem_gpu_alloc_delta': 0,
'test_mem_gpu_peaked_delta': 163454464,
'test_runtime': 5.0164,
'test_samples_per_second': 398.69
}
``` |
bhadresh-savani/roberta-base-emotion | 2021-06-18T15:24:13.000Z | [
"pytorch",
"roberta",
"text-classification",
"en",
"dataset:emotion",
"arxiv:1907.11692",
"transformers",
"emotion",
"license:apache-2.0"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer.json",
"tokenizer_config.json",
"vocab.json"
]
| bhadresh-savani | 0 | transformers | |
biasedai/bert-based-ner | 2021-06-10T11:43:16.000Z | [
"pytorch",
"bert",
"token-classification",
"transformers"
]
| token-classification | [
".gitattributes",
"README.md",
"config.json",
"optimizer.pt",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.txt"
]
| biasedai | 27 | transformers | Fine-tuned model for the NER task. |
bigjoedata/friendlychatbot | 2021-05-21T14:13:11.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| bigjoedata | 101 | transformers | |
bigjoedata/obama-gpt2-sm | 2021-05-21T14:14:23.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| bigjoedata | 6 | transformers | |
bigjoedata/rockbot-scratch | 2021-05-21T14:15:08.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| bigjoedata | 25 | transformers |
# 🎸 🥁 Rockbot 🎤 🎧
A [GPT-2](https://openai.com/blog/better-language-models/) based lyrics generator fine-tuned on the writing styles of 16000 songs by 270 artists across MANY genres (not just rock).
**Instructions:** Type in a fake song title, pick an artist, click "Generate".
Most language models are imprecise and Rockbot is no exception. You may see NSFW lyrics unexpectedly. I have made no attempts to censor. Generated lyrics may be repetitive and/or incoherent at times, but hopefully you'll encounter something interesting or memorable.
Oh, and generation is resource-intensive and can be slow. I set governors on song length to keep generation time somewhat reasonable. You may adjust song length and other parameters on the left or check out [Github](https://github.com/bigjoedata/rockbot) to spin up your own Rockbot.
Just have fun.
[Demo](https://share.streamlit.io/bigjoedata/rockbot/main/src/main.py) Adjust settings to increase speed
[Github](https://github.com/bigjoedata/rockbot)
[GPT-2 124M version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot)
[DistilGPT2 version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot-distilgpt2/) This is leaner with the tradeoff being that the lyrics are more simplistic.
🎹 🪘 🎷 🎺 🪗 🪕 🎻
## Background
With the shutdown of [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) I used Google's Takeout function to gather the metadata from artists I've listened to over the past several years. I wanted to take advantage of this bounty to build something fun. I scraped the top 50 lyrics for artists I'd listened to at least once from [Genius](https://genius.com/), then fine-tuned [GPT-2's](https://openai.com/blog/better-language-models/) 124M-parameter model using the [AITextGen](https://github.com/minimaxir/aitextgen) framework after considerable post-processing. For more on generation, see [here.](https://huggingface.co/blog/how-to-generate)
### Full Tech Stack
[Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) (R.I.P.).
[Python](https://www.python.org/).
[Streamlit](https://www.streamlit.io/).
[GPT-2](https://openai.com/blog/better-language-models/).
[AITextGen](https://github.com/minimaxir/aitextgen).
[Pandas](https://pandas.pydata.org/).
[LyricsGenius](https://lyricsgenius.readthedocs.io/en/master/).
[Google Colab](https://colab.research.google.com/) (GPU based Training).
[Knime](https://www.knime.com/) (data cleaning).
## How to Use The Model
Please refer to [AITextGen](https://github.com/minimaxir/aitextgen) for much better documentation.
### Training Parameters Used
ai.train("lyrics.txt",
line_by_line=False,
from_cache=False,
num_steps=10000,
generate_every=2000,
save_every=2000,
save_gdrive=False,
learning_rate=1e-3,
batch_size=3,
eos_token="<|endoftext|>",
#fp16=True
)
### To Use
Generate With Prompt (Use Title Case):
    Song Name
    BY
    Artist Name
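A quick way to try this checkpoint with plain `transformers` instead; the song title and artist below are made-up examples of the prompt format above:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="bigjoedata/rockbot-scratch")
prompt = "Midnight Engine\nBY\nThe Rolling Stones\n"
print(generator(prompt, max_length=200, do_sample=True)[0]["generated_text"])
```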
|
bigjoedata/rockbot | 2021-05-21T14:15:36.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| bigjoedata | 20 | transformers |
# 🎸 🥁 Rockbot 🎤 🎧
A [GPT-2](https://openai.com/blog/better-language-models/) based lyrics generator fine-tuned on the writing styles of 16000 songs by 270 artists across MANY genres (not just rock).
**Instructions:** Type in a fake song title, pick an artist, click "Generate".
Most language models are imprecise and Rockbot is no exception. You may see NSFW lyrics unexpectedly. I have made no attempts to censor. Generated lyrics may be repetitive and/or incoherent at times, but hopefully you'll encounter something interesting or memorable.
Oh, and generation is resource-intensive and can be slow. I set governors on song length to keep generation time somewhat reasonable. You may adjust song length and other parameters on the left or check out [Github](https://github.com/bigjoedata/rockbot) to spin up your own Rockbot.
Just have fun.
[Demo](https://share.streamlit.io/bigjoedata/rockbot/main/src/main.py) Adjust settings to increase speed
[Github](https://github.com/bigjoedata/rockbot)
[GPT-2 124M version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot)
[DistilGPT2 version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot-distilgpt2/) This is leaner with the tradeoff being that the lyrics are more simplistic.
🎹 🪘 🎷 🎺 🪗 🪕 🎻
## Background
With the shutdown of [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) I used Google's Takeout function to gather the metadata from artists I've listened to over the past several years. I wanted to take advantage of this bounty to build something fun. I scraped the top 50 lyrics for artists I'd listened to at least once from [Genius](https://genius.com/), then fine-tuned [GPT-2's](https://openai.com/blog/better-language-models/) 124M-parameter model using the [AITextGen](https://github.com/minimaxir/aitextgen) framework after considerable post-processing. For more on generation, see [here.](https://huggingface.co/blog/how-to-generate)
### Full Tech Stack
[Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) (R.I.P.).
[Python](https://www.python.org/).
[Streamlit](https://www.streamlit.io/).
[GPT-2](https://openai.com/blog/better-language-models/).
[AITextGen](https://github.com/minimaxir/aitextgen).
[Pandas](https://pandas.pydata.org/).
[LyricsGenius](https://lyricsgenius.readthedocs.io/en/master/).
[Google Colab](https://colab.research.google.com/) (GPU based Training).
[Knime](https://www.knime.com/) (data cleaning).
## How to Use The Model
Please refer to [AITextGen](https://github.com/minimaxir/aitextgen) for much better documentation.
### Training Parameters Used
ai.train("lyrics.txt",
line_by_line=False,
from_cache=False,
num_steps=10000,
generate_every=2000,
save_every=2000,
save_gdrive=False,
learning_rate=1e-3,
batch_size=3,
eos_token="<|endoftext|>",
#fp16=True
)
### To Use
Generate With Prompt (Use Title Case):
    Song Name
    BY
    Artist Name
|
bigjoedata/rockbot355M | 2021-05-21T14:17:25.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| bigjoedata | 45 | transformers |
# 🎸 🥁 Rockbot 🎤 🎧
A [GPT-2](https://openai.com/blog/better-language-models/) based lyrics generator fine-tuned on the writing styles of 16000 songs by 270 artists across MANY genres (not just rock).
**Instructions:** Type in a fake song title, pick an artist, click "Generate".
Most language models are imprecise and Rockbot is no exception. You may see NSFW lyrics unexpectedly. I have made no attempts to censor. Generated lyrics may be repetitive and/or incoherent at times, but hopefully you'll encounter something interesting or memorable.
Oh, and generation is resource-intensive and can be slow. I set governors on song length to keep generation time somewhat reasonable. You may adjust song length and other parameters on the left or check out [Github](https://github.com/bigjoedata/rockbot) to spin up your own Rockbot.
Just have fun.
[Demo](https://share.streamlit.io/bigjoedata/rockbot/main/src/main.py) Adjust settings to increase speed
[Github](https://github.com/bigjoedata/rockbot)
[GPT-2 124M version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot)
[DistilGPT2 version Model page on Hugging Face](https://huggingface.co/bigjoedata/rockbot-distilgpt2/) This is leaner with the tradeoff being that the lyrics are more simplistic.
🎹 🪘 🎷 🎺 🪗 🪕 🎻
## Background
With the shutdown of [Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) I used Google's Takeout function to gather the metadata from artists I've listened to over the past several years. I wanted to take advantage of this bounty to build something fun. I scraped the top 50 lyrics for artists I'd listened to at least once from [Genius](https://genius.com/), then fine-tuned [GPT-2's](https://openai.com/blog/better-language-models/) 124M-parameter model using the [AITextGen](https://github.com/minimaxir/aitextgen) framework after considerable post-processing. For more on generation, see [here.](https://huggingface.co/blog/how-to-generate)
### Full Tech Stack
[Google Play Music](https://en.wikipedia.org/wiki/Google_Play_Music) (R.I.P.).
[Python](https://www.python.org/).
[Streamlit](https://www.streamlit.io/).
[GPT-2](https://openai.com/blog/better-language-models/).
[AITextGen](https://github.com/minimaxir/aitextgen).
[Pandas](https://pandas.pydata.org/).
[LyricsGenius](https://lyricsgenius.readthedocs.io/en/master/).
[Google Colab](https://colab.research.google.com/) (GPU based Training).
[Knime](https://www.knime.com/) (data cleaning).
## How to Use The Model
Please refer to [AITextGen](https://github.com/minimaxir/aitextgen) for much better documentation.
### Training Parameters Used
ai.train("lyrics.txt",
line_by_line=False,
from_cache=False,
num_steps=10000,
generate_every=2000,
save_every=2000,
save_gdrive=False,
learning_rate=1e-3,
batch_size=3,
eos_token="<|endoftext|>",
#fp16=True
)
### To Use
Generate With Prompt (Use Title Case):
    Song Name
    BY
    Artist Name
|
bigjoedata/rockchatbot | 2021-05-21T14:20:07.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"eval_results.txt",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| bigjoedata | 43 | transformers | |
bigjoedata/trump-gpt2-sm | 2021-05-21T14:21:14.000Z | [
"pytorch",
"jax",
"gpt2",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| bigjoedata | 8 | transformers | |
binarymax/roberta-base-squad2-outdoors | 2021-01-30T20:33:45.000Z | []
| [
".gitattributes"
]
| binarymax | 0 | |||
binwang/bert-base-nli-stsb | 2021-05-19T12:39:50.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| binwang | 13 | transformers | |
binwang/bert-base-nli | 2021-05-19T12:40:51.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| binwang | 55 | transformers | |
binwang/bert-base-uncased | 2021-05-19T12:43:37.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| binwang | 14 | transformers | |
binwang/bert-large-nli-stsb | 2021-05-19T12:45:07.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| binwang | 36 | transformers | |
binwang/bert-large-nli | 2021-05-19T12:47:28.000Z | [
"pytorch",
"jax",
"bert",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.txt"
]
| binwang | 20 | transformers | |
binwang/roberta-base | 2021-05-20T14:27:23.000Z | [
"pytorch",
"jax",
"roberta",
"masked-lm",
"transformers",
"fill-mask"
]
| fill-mask | [
".gitattributes",
"config.json",
"flax_model.msgpack",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| binwang | 12 | transformers | |
binwang/xlnet-base-cased | 2020-12-11T21:34:38.000Z | [
"pytorch",
"xlnet",
"lm-head",
"causal-lm",
"transformers",
"text-generation"
]
| text-generation | [
".gitattributes",
"README.md",
"config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"spiece.model",
"tokenizer_config.json"
]
| binwang | 24 | transformers | This model is a pre-trained **XLNet** with 12 layers.
It accompanies the paper *SBERT-WK: A Sentence Embedding Method by Dissecting BERT-based Word Models*.
Project Page: [SBERT-WK](https://github.com/BinWang28/SBERT-WK-Sentence-Embedding)
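Since SBERT-WK builds sentence embeddings by dissecting layer-wise word representations, here is a minimal sketch of pulling those hidden states from this checkpoint; the dissection and weighting themselves follow the paper and project code, not this snippet:
```python
import torch
from transformers import XLNetModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained("binwang/xlnet-base-cased")
model = XLNetModel.from_pretrained("binwang/xlnet-base-cased", output_hidden_states=True)

inputs = tokenizer("A sentence to embed.", return_tensors="pt")
with torch.no_grad():
    hidden_states = model(**inputs).hidden_states  # embeddings + one tensor per layer
```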
|
bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12 | 2021-05-19T12:48:54.000Z | [
"pytorch",
"jax",
"en",
"dataset:PubMed",
"dataset:MIMIC-III",
"transformers",
"bert",
"bluebert",
"license:public domain notice"
]
| [
".gitattributes",
"README.md",
"bert_model.ckpt.data-00000-of-00001",
"bert_model.ckpt.index",
"bert_model.ckpt.meta",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
]
| bionlp | 1,053 | transformers | ---
language:
- en
tags:
- bert
- bluebert
license:
- PUBLIC DOMAIN NOTICE
datasets:
- PubMed
- MIMIC-III
---
# BlueBert-Base, Uncased, PubMed and MIMIC-III
## Model description
A BERT model pre-trained on PubMed abstracts and clinical notes ([MIMIC-III](https://mimic.physionet.org/)).
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
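For a quick start, the checkpoint can also be loaded directly with `transformers`; this is a minimal sketch, not the repository's official pipeline, and the example sentence is made up:
```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12")
model = AutoModel.from_pretrained("bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12")

inputs = tokenizer("The patient was administered 50 mg of atenolol.", return_tensors="pt")
outputs = model(**inputs)  # outputs.last_hidden_state: (1, seq_len, 768)
```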
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-base-uncased
## Training procedure
* lowercasing the text
* removing special chars `\x00`-`\x7F`
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize.treebank import TreebankWordTokenizer

value = "One raw sentence from the corpus.\r\n"  # example input
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo for making the data and code publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
|
|
bionlp/bluebert_pubmed_mimic_uncased_L-24_H-1024_A-16 | 2021-05-19T12:52:39.000Z | [
"pytorch",
"jax",
"en",
"dataset:PubMed",
"dataset:MIMIC-III",
"transformers",
"bert",
"bluebert",
"license:public domain notice"
]
| [
".gitattributes",
"README.md",
"bert_model.ckpt.data-00000-of-00001",
"bert_model.ckpt.index",
"bert_model.ckpt.meta",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
]
| bionlp | 119 | transformers | ---
language:
- en
tags:
- bert
- bluebert
license:
- PUBLIC DOMAIN NOTICE
datasets:
- PubMed
- MIMIC-III
---
# BlueBert-Large, Uncased, PubMed and MIMIC-III
## Model description
A BERT model pre-trained on PubMed abstracts and clinical notes ([MIMIC-III](https://mimic.physionet.org/)).
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-large-uncased
## Training procedure
* lowercasing the text
* removing special chars `\x00`-`\x7F`
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize.treebank import TreebankWordTokenizer

value = "One raw sentence from the corpus.\r\n"  # example input
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo for making the data and code publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
|
|
bionlp/bluebert_pubmed_uncased_L-12_H-768_A-12 | 2020-12-11T21:34:48.000Z | [
"pytorch",
"en",
"dataset:pubmed",
"transformers",
"bluebert",
"license:public domain notice"
]
| [
".gitattributes",
"README.md",
"bert_config.json",
"bert_model.ckpt.data-00000-of-00001",
"bert_model.ckpt.index",
"bert_model.ckpt.meta",
"checkpoint",
"config.json",
"pytorch_model.bin",
"vocab.txt"
]
| bionlp | 4,825 | transformers | ---
language:
- en
tags:
- bluebert
license:
- PUBLIC DOMAIN NOTICE
datasets:
- pubmed
---
# BlueBert-Base, Uncased, PubMed
## Model description
A BERT model pre-trained on PubMed abstracts.
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-base-uncased
## Training procedure
* lowercasing the text
* removing special chars `\x00`-`\x7F`
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize.treebank import TreebankWordTokenizer

value = "One raw sentence from the corpus.\r\n"  # example input
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
|
|
bionlp/bluebert_pubmed_uncased_L-24_H-1024_A-16 | 2021-05-19T12:55:12.000Z | [
"pytorch",
"jax",
"en",
"dataset:PubMed",
"transformers",
"bert",
"bluebert",
"license:public domain notice"
]
| [
".gitattributes",
"README.md",
"bert_model.ckpt.data-00000-of-00001",
"bert_model.ckpt.index",
"bert_model.ckpt.meta",
"config.json",
"flax_model.msgpack",
"pytorch_model.bin",
"vocab.txt"
]
| bionlp | 47 | transformers | ---
language:
- en
tags:
- bert
- bluebert
license:
- PUBLIC DOMAIN NOTICE
datasets:
- PubMed
---
# BlueBert-Large, Uncased, PubMed
## Model description
A BERT model pre-trained on PubMed abstracts.
## Intended uses & limitations
#### How to use
Please see https://github.com/ncbi-nlp/bluebert
## Training data
We provide [preprocessed PubMed texts](https://ftp.ncbi.nlm.nih.gov/pub/lu/Suppl/NCBI-BERT/pubmed_uncased_sentence_nltk.txt.tar.gz) that were used to pre-train the BlueBERT models.
The corpus contains ~4000M words extracted from the [PubMed ASCII code version](https://www.ncbi.nlm.nih.gov/research/bionlp/APIs/BioC-PubMed/).
Pre-trained model: https://huggingface.co/bert-large-uncased
## Training procedure
* lowercasing the text
* removing special chars `\x00`-`\x7F`
* tokenizing the text using the [NLTK Treebank tokenizer](https://www.nltk.org/_modules/nltk/tokenize/treebank.html)
Below is a code snippet for more details.
```python
import re
from nltk.tokenize.treebank import TreebankWordTokenizer

value = "One raw sentence from the corpus.\r\n"  # example input
value = value.lower()
value = re.sub(r'[\r\n]+', ' ', value)
value = re.sub(r'[^\x00-\x7F]+', ' ', value)
tokenized = TreebankWordTokenizer().tokenize(value)
sentence = ' '.join(tokenized)
sentence = re.sub(r"\s's\b", "'s", sentence)
```
### BibTeX entry and citation info
```bibtex
@InProceedings{peng2019transfer,
author = {Yifan Peng and Shankai Yan and Zhiyong Lu},
title = {Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets},
booktitle = {Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019)},
year = {2019},
pages = {58--65},
}
```
### Acknowledgments
This work was supported by the Intramural Research Programs of the National Institutes of Health, National Library of
Medicine and Clinical Center. This work was supported by the National Library of Medicine of the National Institutes of Health under award number 4R00LM013001-01.
We are also grateful to the authors of BERT and ELMo for making the data and code publicly available.
We would like to thank Dr Sun Kim for processing the PubMed texts.
### Disclaimer
This tool shows the results of research conducted in the Computational Biology Branch, NCBI. The information produced
on this website is not intended for direct diagnostic use or medical decision-making without review and oversight
by a clinical professional. Individuals should not change their health behavior solely on the basis of information
produced on this website. NIH does not independently verify the validity or utility of the information produced
by this tool. If you have questions about the information produced on this website, please see a health care
professional. More information about NCBI's disclaimer policy is available.
|
|
birgermoell/wav2vec2-large-xlrs-estonian | 2021-03-25T20:57:09.000Z | [
"pytorch",
"wav2vec2",
"et",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"scheduler.pt",
"special_tokens_map.json",
"tokenizer_config.json",
"trainer_state.json",
"training_args.bin",
"vocab.json"
]
| birgermoell | 7 | transformers | ---
language: et
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Estonian by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Estonian
type: common_voice
args: et
metrics:
- name: Test WER
type: wer
value: 36.951816
---
# Wav2Vec2-Large-XLSR-53-Estonian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Estonian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "et", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\tlogits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Estonian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlrs-estonian")
model.to("cuda")
chars_to_ignore_regex = '[\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\,\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\?\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\.\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\!\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\-\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\;\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\:\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\"\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\tbatch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def evaluate(batch):
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\tinputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\twith torch.no_grad():
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\t\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\tlogits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\tbatch["pred_strings"] = processor.batch_decode(pred_ids)
\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\treturn batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
WER: 36.951816
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1VcWT92vBCwVn-5d-mkYxhgILPr11OHfR?usp=sharing).
|
birgermoell/wav2vec2-large-xlsr-finnish | 2021-03-23T08:59:15.000Z | [
"pytorch",
"wav2vec2",
"fi",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| birgermoell | 37 | transformers | ---
language: fi
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Finnish by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice fi
type: common_voice
args: fi
metrics:
- name: Test WER
type: wer
value: 55.097365
---
# Wav2Vec2-Large-XLSR-53-Finnish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Finnish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "fi", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
\\tspeech_array, sampling_rate = torchaudio.load(batch["path"])
\\tbatch["speech"] = resampler(speech_array).squeeze().numpy()
\\treturn batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
\\tlogits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Finnish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "fi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-finnish")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Run the model over the test set in batches
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**:
The WER is 55.097365
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/16AyzqMWU_aWNe3IA-NxrhskB1WLPHG-Q?usp=sharing).
|
birgermoell/wav2vec2-large-xlsr-hungarian | 2021-03-25T10:16:01.000Z | [
"pytorch",
"wav2vec2",
"hu",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| birgermoell | 13 | transformers | ---
language: hu
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Hungarian by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice hu
type: common_voice
args: hu
metrics:
- name: Test WER
type: wer
value: 46.97
---
# Wav2Vec2-Large-XLSR-53-Hungarian
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Hungarian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "hu", split="test[:2%]").
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the aduio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Hungarian test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "hu", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-large-xlsr-hungarian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 46.97 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1c8LS-RP-RMukvXkpqJ9kLXRWmRKFjevs?usp=sharing)
|
birgermoell/wav2vec2-luganda | 2021-03-23T19:48:18.000Z | [
"pytorch",
"wav2vec2",
"lg",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| birgermoell | 13 | transformers | ---
language: lg
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Luganda by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice Luganda
type: common_voice
args: lg
metrics:
- name: Test WER
type: wer
value: 48.31
---
# Wav2Vec2-Large-XLSR-53-Luganda
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Luganda using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-luganda")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-luganda")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 48.314356 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1ZeII36LZ5IpBrTV7kBaTVfhDqygznlmC?usp=sharing)
|
birgermoell/wav2vec2-swedish-common-voice | 2021-03-22T07:10:52.000Z | [
"pytorch",
"wav2vec2",
"sv",
"dataset:common_voice",
"transformers",
"audio",
"automatic-speech-recognition",
"speech",
"xlsr-fine-tuning-week",
"license:apache-2.0"
]
| automatic-speech-recognition | [
".gitattributes",
".gitignore",
"README.md",
"config.json",
"preprocessor_config.json",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"training_args.bin",
"vocab.json"
]
| birgermoell | 17 | transformers | ---
language: sv
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Swedish by Birger Moell
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice sv-SE
type: common_voice
args: sv-SE
metrics:
- name: Test WER
type: wer
value: 36.91
---
# Wav2Vec2-Large-XLSR-53-Swedish
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Swedish using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. The training data amounts to 402 MB.
When using this model, make sure that your speech input is sampled at 16kHz.
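As an optional convenience, recent versions of transformers also ship an automatic-speech-recognition pipeline that wraps the processor and model shown below; availability depends on your installed version, and the filename here is a hypothetical local 16 kHz recording, so treat this as a sketch rather than part of the original recipe:

```python
from transformers import pipeline

# Hypothetical local file; audio decoding in the pipeline requires ffmpeg
asr = pipeline("automatic-speech-recognition", model="birgermoell/wav2vec2-swedish-common-voice")
print(asr("sample.wav")["text"])
```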
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "sv-SE", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Swedish test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "sv-SE", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
model = Wav2Vec2ForCTC.from_pretrained("birgermoell/wav2vec2-swedish-common-voice")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 36.91 %
## Training
The Common Voice `train` and `validation` datasets were used for training.
The script used for training can be found [here](https://colab.research.google.com/drive/1KkD4PeZwnIwxxxOP1bUE7XTZMK7-SzRj?usp=sharing)
|
blackbird/alberta-base-mnli-v1 | 2021-06-04T02:36:43.000Z | [
"pytorch",
"roberta",
"text-classification",
"transformers"
]
| text-classification | [
".gitattributes",
"README.md",
"config.json",
"merges.txt",
"pytorch_model.bin",
"special_tokens_map.json",
"tokenizer_config.json",
"vocab.json"
]
| blackbird | 13 | transformers |