Dataset schema (one row per model):

| Column | Type | Range / classes |
|----------------------|--------|-----------------|
| modelId | string | 4–112 chars |
| lastModified | string | 24 chars |
| tags | list | |
| pipeline_tag | string | 21 classes |
| files | list | |
| publishedBy | string | 2–37 chars |
| downloads_last_month | int32 | 0–9.44M |
| library | string | 15 classes |
| modelCard | string | 0–100k chars |
NENstudio/Metarlekin
2021-02-10T14:12:39.000Z
[]
[ ".gitattributes" ]
NENstudio
0
NLP4H/ms_bert
2021-05-18T21:46:48.000Z
[ "pytorch", "jax", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "vocab.txt" ]
NLP4H
51
transformers
# MS-BERT

## Introduction

This repository provides code and models for MS-BERT. MS-BERT was pre-trained on notes from neurological examinations of Multiple Sclerosis (MS) patients at St. Michael's Hospital in Toronto, Canada.

## Data

The dataset contained approximately 75,000 clinical notes for about 5,000 patients, totaling over 35.7 million words. These notes were collected from patients who visited St. Michael's Hospital MS Clinic between 2015 and 2019. The notes contained a variety of information pertaining to a neurological exam. For example, a note can contain information on the patient's condition, their progress over time, and diagnosis. The gender split within the dataset was observed to be 72% female and 28% male ([which reflects the natural discrepancy seen in MS][1]). Further sections describe how MS-BERT was pre-trained through the use of these clinically relevant and rich neurological notes.

## Data pre-processing

The data was pre-processed to remove any identifying information, including: patient names, doctor names, hospital names, patient identification numbers, phone numbers, addresses, and times. In order to de-identify the information, we used a curated database that contained patient and doctor information. This curated database was paired with regular expressions to find and remove any identifying pieces of information. Each of these identifiers was replaced with a specific token. These tokens were chosen based on three criteria: (1) they belong to the current BERT vocab, (2) they have roughly the same semantic meaning as the word they are replacing, and (3) the token is not found in the original unprocessed dataset. The replacements that met the criteria above were as follows:

- Female first names -> Lucie
- Male first names -> Ezekiel
- Last/family names -> Salamanca
- Dates -> 2010s
- Patient IDs -> 999
- Phone numbers -> 1718
- Addresses -> Silesia
- Time -> 1610
- Locations/Hospital/Clinic names -> Troy

## Pre-training

The starting point for our model is the already pre-trained and fine-tuned BLUE-BERT base. We further pre-train it using the masked language modelling task from the Hugging Face transformers [library](https://github.com/huggingface). The hyperparameters can be found in the config file in this repository or [here](https://s3.amazonaws.com/models.huggingface.co/bert/NLP4H/ms_bert/config.json).

## Acknowledgements

We would like to thank the researchers and staff at the Data Science and Advanced Analytics (DSAA) department, St. Michael’s Hospital, for providing consistent support and guidance throughout this project. We would also like to thank Dr. Marzyeh Ghassemi, Taylor Killan, Nathan Ng and Haoran Zhang for providing us the opportunity to work on this exciting project.

## Disclaimer

MS-BERT shows the results of research conducted at the Data Science and Advanced Analytics (DSAA) department, St. Michael’s Hospital. The results produced by MS-BERT are not intended for direct diagnostic use or medical decision-making without review and oversight by a clinical professional. Individuals should not make decisions about their health solely on the basis of the results produced by MS-BERT. St. Michael’s Hospital does not independently verify the validity or utility of the results produced by MS-BERT. If you have questions about the results produced by MS-BERT, please consult a healthcare professional. If you would like more information about the research conducted at DSAA, please contact [Zhen Yang](mailto:[email protected]).

If you would like more information on neurological examination notes, please contact [Dr. Tony Antoniou](mailto:[email protected]) or [Dr. Jiwon Oh](mailto:[email protected]) from the MS clinic at St. Michael's Hospital.

[1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3707353/
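As a concrete illustration of the token-substitution scheme described in the data pre-processing section above, here is a minimal regex-based sketch. The patterns and name lists are hypothetical stand-ins; the actual pipeline used a curated database of patient and doctor names.

```python
import re

# Illustrative substitution table following the criteria above; placeholder
# patterns only -- the real pipeline drew names from a curated database.
REPLACEMENTS = [
    (re.compile(r"\b(?:Alice|Mary)\b"), "Lucie"),                    # female first names (placeholder list)
    (re.compile(r"\b(?:John|Robert)\b"), "Ezekiel"),                 # male first names (placeholder list)
    (re.compile(r"\b\d{3}-\d{3}-\d{4}\b"), "1718"),                  # phone numbers
    (re.compile(r"\b\d{4}-\d{2}-\d{2}\b"), "2010s"),                 # dates
    (re.compile(r"\b\d{1,2}:\d{2}\s?(?:am|pm)?\b", re.I), "1610"),   # times
]

def deidentify(note: str) -> str:
    """Replace identifying spans with their in-vocabulary surrogate tokens."""
    for pattern, token in REPLACEMENTS:
        note = pattern.sub(token, note)
    return note

print(deidentify("John called at 3:15 pm on 2018-04-02, cell 416-555-0199."))
```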
NTUYG/DeepSCC-RoBERTa
2021-05-20T12:15:05.000Z
[ "pytorch", "jax", "roberta", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "model_args.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
NTUYG
8
transformers
## How to use

```python
from simpletransformers.classification import ClassificationModel, ClassificationArgs

name_file = ['bash', 'c', 'c#', 'c++', 'css', 'haskell', 'java', 'javascript', 'lua',
             'objective-c', 'perl', 'php', 'python', 'r', 'ruby', 'scala', 'sql', 'swift', 'vb.net']

deep_scc_model_args = ClassificationArgs(num_train_epochs=10, max_seq_length=300, use_multiprocessing=False)
deep_scc_model = ClassificationModel("roberta", "NTUYG/DeepSCC-RoBERTa", num_labels=19,
                                     args=deep_scc_model_args, use_cuda=True)

code = '''public static double getSimilarity(String phrase1, String phrase2) {
    return (getSC(phrase1, phrase2) + getSC(phrase2, phrase1)) / 2.0;
}'''
code = code.replace('\n', ' ').replace('\r', ' ')
predictions, raw_outputs = deep_scc_model.predict([code])
predict = name_file[predictions[0]]
print(predict)
```
NTUYG/SOTitle-csharp-BART
2021-06-13T17:33:05.000Z
[ "pytorch", "jax", "bart", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "eval_results.txt", "flax_model.msgpack", "merges.txt", "model_args.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
NTUYG
9
transformers
NTUYG/SOTitle-java-BART
2021-01-28T15:12:29.000Z
[ "pytorch", "bart", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "merges.txt", "model_args.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
NTUYG
10
transformers
## How to use

```python
import logging
from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs

logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)

model_args = Seq2SeqArgs()
# Load the locally trained model
model = Seq2SeqModel(
    encoder_decoder_type="bart",
    encoder_decoder_name="NTUYG/SOTitle-java-BART",
    args=model_args,
)

describe = """ I am a beginner at Android Java development but I have a few years of school + uni experience in Java. I am trying to write to a text file in an assets folder in my app using FileOutputStream but it doesn't seem to write to it at all since I am using InputStream to read the file after and there haven't any updates. Here is my code """

code = """ private void updateTextFile(String update) {
    FileOutputStream fos = null;
    try {
        fos = openFileOutput("Questions", MODE_PRIVATE);
        fos.write("Testing".getBytes());
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (fos != null) {
            try {
                fos.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    String text = "";
    try {
        InputStream is = getAssets().open("Questions");
        int size = is.available();
        byte[] buffer = new byte[size];
        is.read(buffer);
        is.close();
        text = new String(buffer);
    } catch (IOException e) {
        e.printStackTrace();
    }
    System.out.println("Tesing output " + text);
} """

from nltk import word_tokenize
describe = describe.replace('\n', ' ').replace('\r', ' ')
describe = ' '.join(word_tokenize(describe))
code = code.replace('\n', ' ').replace('\r', ' ')
code = ' '.join(word_tokenize(code))

# human reference title: Java Android Cant seem to update text file using FileOutputStream
body = describe + ' <code> ' + code + ' </code>'
print(model.predict([body]))
```
NTUYG/SOTitle-js-BART
2021-01-30T11:08:05.000Z
[ "pytorch", "bart", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "eval_results.txt", "merges.txt", "model_args.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
NTUYG
9
transformers
NTUYG/SOTitle-python-BART
2021-01-30T10:59:46.000Z
[ "pytorch", "bart", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "eval_results.txt", "merges.txt", "model_args.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
NTUYG
14
transformers
Nagnew/clean_data
2021-06-04T17:20:18.000Z
[]
[ ".gitattributes" ]
Nagnew
0
Nanci/relation_extraction
2021-01-29T09:09:47.000Z
[]
[ ".gitattributes" ]
Nanci
0
Narender/en-hi-retrained
2021-04-24T08:35:38.000Z
[]
[ ".gitattributes", "README.md" ]
Narender
0
NareshPS/test
2021-05-11T06:05:28.000Z
[]
[ ".gitattributes", "README.md" ]
NareshPS
0
Narrativa/byt5-base-tweet-hate-detection
2021-06-04T16:55:41.000Z
[ "pytorch", "t5", "seq2seq", "en", "dataset:tweets_hate_speech_detection", "arxiv:1907.06292", "arxiv:1910.10683", "transformers", "hate", "speech", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json" ]
Narrativa
132
transformers
---
language: en
datasets:
- tweets_hate_speech_detection
tags:
- hate
- speech
---

# ByT5-base fine-tuned for Hate Speech Detection (on Tweets)

[ByT5](https://huggingface.co/google/byt5-base) base fine-tuned on the [tweets hate speech detection](https://huggingface.co/datasets/tweets_hate_speech_detection) dataset for the **Sequence Classification** downstream task.

# Details of ByT5 - Base 🧠

ByT5 is a tokenizer-free version of [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) and generally follows the architecture of [MT5](https://huggingface.co/google/mt5-base). ByT5 was only pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual), excluding any supervised training, with an average span-mask of 20 UTF-8 characters. Therefore, this model has to be fine-tuned before it is usable on a downstream task. ByT5 works especially well on noisy text data, *e.g.*, `google/byt5-base` significantly outperforms [mt5-base](https://huggingface.co/google/mt5-base) on [TweetQA](https://arxiv.org/abs/1907.06292).

Paper: [ByT5: Towards a token-free future with pre-trained byte-to-byte models](https://arxiv.org/abs/2105.13626)

Authors: *Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, Colin Raffel*

## Details of the downstream task (Sequence Classification as Text generation) - Dataset 📚

[tweets_hate_speech_detection](https://huggingface.co/datasets/tweets_hate_speech_detection)

The objective of this task is to detect hate speech in tweets. For the sake of simplicity, we say a tweet contains hate speech if it has a racist or sexist sentiment associated with it. So, the task is to classify racist or sexist tweets apart from other tweets. Formally, given a training sample of tweets and labels, where label '1' denotes the tweet is racist/sexist and label '0' denotes the tweet is not racist/sexist, the objective is to predict the labels on the given test dataset.

- Data Instances: each instance contains a label denoting whether the tweet is hate speech or not

```
{'label': 0,  # not hate speech
 'tweet': ' @user when a father is dysfunctional and is so selfish he drags his kids into his dysfunction. #run'}
```

- Data Fields: **label**: 1 - hate speech, 0 - not hate speech; **tweet**: content of the tweet as a string
- Data Splits: the data contains a training set with **31,962** entries

## Test set metrics 🧾

We created a representative test set with 5% of the entries. The dataset is quite imbalanced; we obtained an **F1 score of 79.8**.

## Model in Action 🚀

```sh
git clone https://github.com/huggingface/transformers.git
pip install -q ./transformers
```

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

ckpt = 'Narrativa/byt5-base-tweet-hate-detection'

tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = T5ForConditionalGeneration.from_pretrained(ckpt).to("cuda")

def classify_tweet(tweet):
    inputs = tokenizer([tweet], padding='max_length', truncation=True, max_length=512, return_tensors='pt')
    input_ids = inputs.input_ids.to('cuda')
    attention_mask = inputs.attention_mask.to('cuda')
    output = model.generate(input_ids, attention_mask=attention_mask)
    return tokenizer.decode(output[0], skip_special_tokens=True)

classify_tweet('here goes your tweet...')
```

Created by: [Narrativa](https://www.narrativa.com/)

About Narrativa: Natural Language Generation (NLG) | Gabriele, our machine learning-based platform, builds and deploys natural language solutions. #NLG #AI
Narrativa/mbart-large-50-finetuned-opus-en-pt-translation
2021-06-19T09:51:26.000Z
[ "pytorch", "mbart", "seq2seq", "en", "es", "dataset:opus100", "dataset:opusbook", "arxiv:2008.00401", "arxiv:2004.11867", "transformers", "translation", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json" ]
Narrativa
23
transformers
Narsil/esm1b_t33_650M_UR50S
2021-02-21T14:12:29.000Z
[]
[ ".gitattributes" ]
Narsil
0
Narsil/fr_pretrained
2020-01-30T08:39:05.000Z
[ "pytorch", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
Narsil
10
transformers
Narsil/gpt2
2021-05-19T16:25:59.000Z
[ "pytorch", "tf", "jax", "tflite", "rust", "gpt2", "lm-head", "causal-lm", "en", "transformers", "exbert", "license:mit", "text-generation" ]
text-generation
[ ".gitattributes", "64-8bits.tflite", "64-fp16.tflite", "64.tflite", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "rust_model.ot", "tf_model.h5", "tokenizer.json", "vocab.json" ]
Narsil
0
transformers
---
language: en
tags:
- exbert
license: mit
---

# GPT-2

Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large

Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/).

Disclaimer: The team releasing GPT-2 also wrote a [model card](https://github.com/openai/gpt-2/blob/master/model_card.md) for their model. Content from this model card has been written by the Hugging Face team to complete the information they provided and give specific examples of bias.

## Model description

GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.

More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model internally uses a masking mechanism to make sure the predictions for the token `i` only use the inputs from `1` to `i` but not the future tokens.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. However, the model is best at what it was pretrained for, which is generating texts from a prompt.

## Intended uses & limitations

You can use the raw model for text generation or fine-tune it to a downstream task. See the [model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.

### How to use

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)

[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
 {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
 {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
 {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
 {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import GPT2Tokenizer, GPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import GPT2Tokenizer, TFGPT2Model
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2Model.from_pretrained('gpt2')
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

### Limitations and bias

The training data used for this model has not been released as a dataset one can browse. We know it contains a lot of unfiltered content from the internet, which is far from neutral. As the OpenAI team themselves point out in their [model card](https://github.com/openai/gpt-2/blob/master/model_card.md#out-of-scope-use-cases):

> Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases
> that require the generated text to be true.
>
> Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do
> not recommend that they be deployed into systems that interact with humans unless the deployers first carry out a
> study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race,
> and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar
> levels of caution around use cases that are sensitive to biases around human attributes.

Here's an example of how the model can have biased predictions:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='gpt2')
>>> set_seed(42)
>>> generator("The White man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The White man worked as a mannequin for'},
 {'generated_text': 'The White man worked as a maniser of the'},
 {'generated_text': 'The White man worked as a bus conductor by day'},
 {'generated_text': 'The White man worked as a plumber at the'},
 {'generated_text': 'The White man worked as a journalist. He had'}]

>>> set_seed(42)
>>> generator("The Black man worked as a", max_length=10, num_return_sequences=5)

[{'generated_text': 'The Black man worked as a man at a restaurant'},
 {'generated_text': 'The Black man worked as a car salesman in a'},
 {'generated_text': 'The Black man worked as a police sergeant at the'},
 {'generated_text': 'The Black man worked as a man-eating monster'},
 {'generated_text': 'The Black man worked as a slave, and was'}]
```

This bias will also affect all fine-tuned versions of this model.

## Training data

The OpenAI team wanted to train this model on a corpus as large as possible. To build it, they scraped all the web pages from outbound links on Reddit which received at least 3 karma. Note that all Wikipedia pages were removed from this dataset, so the model was not trained on any part of Wikipedia. The resulting dataset (called WebText) weighs 40GB of text but has not been publicly released. You can find a list of the top 1,000 domains present in WebText [here](https://github.com/openai/gpt-2/blob/master/domains.txt).

## Training procedure

### Preprocessing

The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50,257. The inputs are sequences of 1024 consecutive tokens.

The larger model was trained on 256 cloud TPU v3 cores. The training duration was not disclosed, nor were the exact details of training.

## Evaluation results

The model achieves the following results without any fine-tuning (zero-shot):

| Dataset  | LAMBADA | LAMBADA | CBT-CN | CBT-NE | WikiText2 | PTB   | enwiki8 | text8 | WikiText103 | 1BW   |
|:--------:|:-------:|:-------:|:------:|:------:|:---------:|:-----:|:-------:|:-----:|:-----------:|:-----:|
| (metric) | (PPL)   | (ACC)   | (ACC)  | (ACC)  | (PPL)     | (PPL) | (BPB)   | (BPC) | (PPL)       | (PPL) |
|          | 35.13   | 45.99   | 87.65  | 83.4   | 29.41     | 65.85 | 1.16    | 1.17  | 37.50       | 75.20 |

### BibTeX entry and citation info

```bibtex
@article{radford2019language,
  title={Language Models are Unsupervised Multitask Learners},
  author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
  year={2019}
}
```

<a href="https://huggingface.co/exbert/?model=gpt2">
	<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
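The zero-shot perplexities in the evaluation table above can be roughly approximated with a short script; a minimal single-pass sketch (the published benchmarks use dataset-specific preprocessing and sliding-window evaluation, so this will not reproduce the table exactly):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Compute perplexity of GPT-2 on a text in one forward pass; the labels are the
# inputs themselves, and the model shifts them internally for next-token loss.
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

text = "The quick brown fox jumps over the lazy dog."
enc = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    loss = model(**enc, labels=enc.input_ids).loss  # mean token cross-entropy
print('perplexity:', torch.exp(loss).item())
```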
Narsil/pretrained
2020-01-28T08:09:28.000Z
[ "pytorch", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
Narsil
13
transformers
Narsil/pretrained2
2020-11-16T08:56:01.000Z
[ "lm-head", "transformers" ]
[ ".gitattributes", "config.json" ]
Narsil
14
transformers
Narsil/small
2021-05-19T11:19:20.000Z
[ "tf", "bert", "token-classification", "transformers" ]
token-classification
[ ".gitattributes", "README.md", "config.json", "roberta.json", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "unigram.json", "unigram.model", "unigram_wagahaiwa_nekodearu-unigram.json", "unigram_wagahaiwa_nekodearu.txt", "vocab.txt" ]
Narsil
3,722
transformers
Small change. again. again ? again.
Narsil/small_conversational_test
2021-01-20T16:30:52.000Z
[ "albert", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "tokenizer.json" ]
Narsil
1,999
transformers
```python
import tempfile

from tokenizers import Tokenizer, models, processors
from transformers.tokenization_utils_fast import PreTrainedTokenizerFast

vocab = [(chr(i), i) for i in range(256)]
tokenizer = Tokenizer(models.Unigram(vocab))
tokenizer.add_special_tokens(["<bos>", "<eos>"])
tokenizer.post_processor = processors.TemplateProcessing(
    single="<bos> $0 <eos>", special_tokens=[("<bos>", 256), ("<eos>", 257)]
)

with tempfile.NamedTemporaryFile() as f:
    tokenizer.save(f.name)
    real_tokenizer = PreTrainedTokenizerFast(tokenizer_file=f.name, eos_token="<eos>", bos_token="<bos>")

real_tokenizer._tokenizer.save("dummy.json")
```

Small change.
Narsil/small_summarization_test
2021-01-08T11:18:02.000Z
[ "albert", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "tokenizer.json", "tokenizer_config.json" ]
Narsil
3,423
transformers
```python
import tempfile

from tokenizers import Tokenizer, models
from transformers import PreTrainedTokenizerFast

model_max_length = 4
vocab = [(chr(i), i) for i in range(256)]
tokenizer = Tokenizer(models.Unigram(vocab))

with tempfile.NamedTemporaryFile() as f:
    tokenizer.save(f.name)
    real_tokenizer = PreTrainedTokenizerFast(tokenizer_file=f.name, model_max_length=model_max_length)

real_tokenizer._tokenizer.save("dummy/tokenizer.json")
```

The config uses Albert, which works with a minimal `config.json`.
NathanZhu/GabHateCorpusTrained
2021-05-18T21:47:53.000Z
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
[ ".DS_Store", ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
NathanZhu
7
transformers
Test for use in Google Colab :'(
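The card above gives no inference snippet; a minimal hedged example using the standard pipeline API (the label names come from the checkpoint's config and are assumed here to be the generic `LABEL_0`/`LABEL_1`):

```python
from transformers import pipeline

# Text classification with the Gab Hate Corpus fine-tuned BERT checkpoint.
# Label names depend on the checkpoint's id2label mapping (assumed generic here).
classifier = pipeline("text-classification", model="NathanZhu/GabHateCorpusTrained")
print(classifier("An example sentence to score for hateful content."))
```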
Naveen-k/KanBERTo
2021-05-20T12:16:02.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "kn", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "training_args.bin", "vocab.json", "checkpoint-150000/pytorch_model.bin" ]
Naveen-k
22
transformers
---
language: kn
---

# Welcome to KanBERTo (ಕನ್ಬರ್ಟೋ)

## Model Description

> This is a small language model for the [Kannada](https://en.wikipedia.org/wiki/Kannada) language, trained on 1M data samples taken from the [OSCAR page](https://traces1.inria.fr/oscar/files/compressed-orig/kn.txt.gz).

## Training params

- **Dataset** - 1M data samples from the [OSCAR page](https://traces1.inria.fr/oscar/) were used to train this model. Even though the full dataset is 1.7 GB, I picked only 1M samples due to resource constraints on training. If you are interested in collaborating and have the computational resources to train on the full set, you are most welcome to do so.
- **Preprocessing** - ByteLevelBPETokenizer is used to tokenize the sentences at the character level, and the vocabulary size is set to 52k as per the standard values given by 🤗.
- **Hyperparameters** -
  - __ByteLevelBPETokenizer__: vocabulary size = 52_000 and min_frequency = 2
  - __Trainer__: num_train_epochs=12 (trained for 12 epochs), per_gpu_train_batch_size=64 (batch size of 64 samples), save_steps=10_000 (save the model every 10k steps), save_total_limit=2 (keep at most 2 saved checkpoints)

**Intended uses & limitations** This is for anyone who wants to make use of Kannada language models for tasks like language generation, translation and many more use cases.

**Whatever else is helpful!** If you are interested in collaboration, feel free to reach out to [Naveen](mailto:[email protected]).
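A minimal sketch of the tokenizer-training step described above, using the hyperparameters listed (the corpus path and the RoBERTa-style special tokens are assumptions, not taken from the card):

```python
from tokenizers import ByteLevelBPETokenizer

# Train a byte-level BPE tokenizer on the OSCAR Kannada dump,
# matching the hyperparameters above (vocab 52k, min_frequency 2).
tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=["kn.txt"],  # assumed path to the unpacked OSCAR Kannada corpus
    vocab_size=52_000,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"],  # RoBERTa-style, assumed
)
tokenizer.save_model("KanBERTo")  # writes vocab.json and merges.txt
```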
Navid/AdvBERTRanker
2021-06-04T14:51:05.000Z
[]
[ ".gitattributes" ]
Navid
0
Nay/Nay
2021-06-04T10:06:17.000Z
[]
[ ".gitattributes" ]
Nay
0
NbAiLab/nb-bert-base-mnli
2021-05-18T21:49:06.000Z
[ "pytorch", "jax", "bert", "text-classification", "no", "dataset:mnli", "dataset:multi_nli", "dataset:xnli", "arxiv:1909.00161", "transformers", "license:cc-by 4.0", "nb-bert", "tensorflow", "norwegian" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
NbAiLab
456
transformers
---
language: no
license: CC-BY 4.0
thumbnail: https://raw.githubusercontent.com/NBAiLab/notram/master/images/nblogo_2.png
tags:
- nb-bert
- text-classification
- pytorch
- tensorflow
- norwegian
- bert
datasets:
- mnli
- multi_nli
- xnli
widget:
- text: "The widget does not work for Norwegian."
---

**Release 1.0** (March 11, 2021)

# NB-BERT base model finetuned on Norwegian machine-translated MNLI

## Description

The most effective way of creating a good classifier is to finetune a pre-trained model for the specific task at hand. However, in many cases this is simply impossible. [Yin et al.](https://arxiv.org/abs/1909.00161) proposed a very clever way of using pre-trained MNLI models as zero-shot sequence classifiers. The method works by reformulating the question as an MNLI hypothesis. If we want to figure out whether a text is about "sport", we simply state that "This text is about sport" ("Denne teksten handler om sport"). When the model is finetuned on the 400k-example MNLI task, it is in many cases able to solve such classification tasks. There is no Norwegian MNLI set of this size, so we have trained the model on a machine-translated version of the original MNLI set.

## Testing the model

For testing the model, we recommend the [NbAiLab Colab Notebook](https://colab.research.google.com/gist/peregilk/769b5150a2f807219ab8f15dd11ea449/nbailab-mnli-norwegian-demo.ipynb).

## Hugging Face zero-shot-classification pipeline

The easiest way to try this out is by using the Hugging Face pipeline. Please note that you will get better results when using a Norwegian hypothesis template instead of the default English one.

```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model="NbAiLab/nb-bert-base-mnli")
```

You can then use this pipeline to classify sequences into any of the class names you specify.

```python
sequence_to_classify = 'Folkehelseinstituttets mest optimistiske anslag er at alle voksne er ferdigvaksinert innen midten av september.'
candidate_labels = ['politikk', 'helse', 'sport', 'religion']
hypothesis_template = 'Dette eksempelet er {}.'
classifier(sequence_to_classify, candidate_labels, hypothesis_template=hypothesis_template, multi_class=True)
# {'labels': ['helse', 'politikk', 'sport', 'religion'],
#  'scores': [0.4210019111633301, 0.0674605593085289, 0.000840459018945694, 0.0007541406666859984],
#  'sequence': 'Folkehelseinstituttets mest optimistiske anslag er at alle over 18 år er ferdigvaksinert innen midten av september.'}
```

## More information

For more information on the model, see https://github.com/NBAiLab/notram. There you will also find a Colab notebook explaining in more detail how to use the zero-shot-classification pipeline.
NbAiLab/nb-bert-base
2021-05-19T11:20:30.000Z
[ "pytorch", "tf", "jax", "bert", "no", "transformers", "license:cc-by 4.0", "norwegian", "fill-mask", "pipeline_tag:fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "nblogo_2.png", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
NbAiLab
1,848
transformers
---
language: no
license: CC-BY 4.0
tags:
- norwegian
- bert
thumbnail: nblogo_3.png
pipeline_tag: fill-mask
widget:
- text: "På biblioteket kan du låne [MASK]."
---

- **Release 1.1** (March 11, 2021)
- **Release 1.0** (January 13, 2021)

# NB-BERT-base

## Description

NB-BERT-base is a general BERT-base model built on the large digital collection at the National Library of Norway. This model is based on the same structure as the [BERT Cased multilingual model](https://github.com/google-research/bert/blob/master/multilingual.md), and is trained on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years.

## Intended use & limitations

The 1.1 version of the model is general, and should be fine-tuned for any particular use. Some fine-tuning sets may be found on GitHub, see
* https://github.com/NBAiLab/notram

## Training data

The model is trained on a wide variety of text. The training set is described on
* https://github.com/NBAiLab/notram

## More information

For more information on the model, see https://github.com/NBAiLab/notram
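A quick hedged example of the fill-mask usage that the widget sentence above demonstrates:

```python
from transformers import pipeline

# Top predictions for the masked token in the card's widget sentence.
unmasker = pipeline("fill-mask", model="NbAiLab/nb-bert-base")
for prediction in unmasker("På biblioteket kan du låne [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```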
NbAiLab/nb-bert-large
2021-05-19T11:23:18.000Z
[ "pytorch", "tf", "jax", "bert", "no", "transformers", "license:cc-by 4.0", "norwegian", "fill-mask", "pipeline_tag:fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "tf_model.h5", "tokenizer_config.json", "vocab.txt" ]
NbAiLab
3,016
transformers
---
language: no
license: CC-BY 4.0
tags:
- norwegian
- bert
thumbnail: nblogo_3.png
pipeline_tag: fill-mask
widget:
- text: "På biblioteket kan du låne en [MASK]."
---

- **Release 1.0beta** (April 29, 2021)

# NB-BERT-large (beta)

## Description

NB-BERT-large is a general BERT-large model built on the large digital collection at the National Library of Norway. This model is trained from scratch on a wide variety of Norwegian text (both bokmål and nynorsk) from the last 200 years using a monolingual Norwegian vocabulary.

## Intended use & limitations

The 1.0 version of the model is general, and should be fine-tuned for any particular use. Some fine-tuning sets may be found on GitHub, see
* https://github.com/NBAiLab/notram

## Training data

The model is trained on a wide variety of text. The training set is described on
* https://github.com/NBAiLab/notram

## More information

For more information on the model, see https://github.com/NBAiLab/notram
NegativeSector/fakeNewsBot
2021-04-19T23:46:05.000Z
[]
[ ".gitattributes" ]
NegativeSector
0
Nenma/romanian-bert-fake-news
2021-05-18T20:28:44.000Z
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
Nenma
14
transformers
NeuML/bert-small-cord19-squad2
2021-05-18T21:52:28.000Z
[ "pytorch", "jax", "bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "nbest_predictions_.json", "null_odds_.json", "predictions_.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
NeuML
20
transformers
# BERT-Small CORD-19 fine-tuned on SQuAD 2.0

[bert-small-cord19 model](https://huggingface.co/NeuML/bert-small-cord19) fine-tuned on SQuAD 2.0

## Building the model

```bash
python run_squad.py \
    --model_type bert \
    --model_name_or_path bert-small-cord19 \
    --do_train \
    --do_eval \
    --do_lower_case \
    --version_2_with_negative \
    --train_file train-v2.0.json \
    --predict_file dev-v2.0.json \
    --per_gpu_train_batch_size 8 \
    --learning_rate 3e-5 \
    --num_train_epochs 3.0 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir bert-small-cord19-squad2 \
    --save_steps 0 \
    --threads 8 \
    --overwrite_cache \
    --overwrite_output_dir
```
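The card stops at the training command; a hedged inference sketch mirroring the usage shown in the sibling bert-small-cord19qa card:

```python
from transformers import pipeline

# Extractive QA with the SQuAD 2.0 fine-tuned CORD-19 model.
qa = pipeline(
    "question-answering",
    model="NeuML/bert-small-cord19-squad2",
    tokenizer="NeuML/bert-small-cord19-squad2",
)
print(qa({
    "question": "What is the median incubation period?",
    "context": "The incubation period is around 5 days (range: 4-7 days).",
}))
```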
NeuML/bert-small-cord19
2021-05-18T21:52:56.000Z
[ "pytorch", "jax", "bert", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
NeuML
25
transformers
# BERT-Small fine-tuned on CORD-19 dataset

[BERT L6_H-512_A-8 model](https://huggingface.co/google/bert_uncased_L-6_H-512_A-8) fine-tuned on the [CORD-19 dataset](https://www.semanticscholar.org/cord19).

## CORD-19 data subset

The training data for this dataset is stored as a [Kaggle dataset](https://www.kaggle.com/davidmezzetti/cord19-qa?select=cord19.txt). The training data is a subset of the full corpus, focusing on high-quality, study-design detected articles.

## Building the model

```bash
python run_language_modeling.py \
    --model_type bert \
    --model_name_or_path google/bert_uncased_L-6_H-512_A-8 \
    --do_train \
    --mlm \
    --line_by_line \
    --block_size 512 \
    --train_data_file cord19.txt \
    --per_gpu_train_batch_size 4 \
    --learning_rate 3e-5 \
    --num_train_epochs 3.0 \
    --output_dir bert-small-cord19 \
    --save_steps 0 \
    --overwrite_output_dir
```
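No inference snippet is given; a minimal hedged fill-mask example for this masked-LM checkpoint:

```python
from transformers import pipeline

# Masked-token prediction with the CORD-19 adapted BERT-Small.
fill_mask = pipeline("fill-mask", model="NeuML/bert-small-cord19")
print(fill_mask("The virus can survive on surfaces such as [MASK] and stainless steel."))
```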
NeuML/bert-small-cord19qa
2021-05-18T21:53:32.000Z
[ "pytorch", "jax", "tfsavedmodel", "bert", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "saved_model.tar.gz", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.txt" ]
NeuML
713
transformers
# BERT-Small fine-tuned on CORD-19 QA dataset [bert-small-cord19-squad model](https://huggingface.co/NeuML/bert-small-cord19-squad2) fine-tuned on the [CORD-19 QA dataset](https://www.kaggle.com/davidmezzetti/cord19-qa?select=cord19-qa.json). ## CORD-19 QA dataset The CORD-19 QA dataset is a SQuAD 2.0 formatted list of question, context, answer combinations covering the [CORD-19 dataset](https://www.semanticscholar.org/cord19). ## Building the model ```bash python run_squad.py \ --model_type bert \ --model_name_or_path bert-small-cord19-squad \ --do_train \ --do_lower_case \ --version_2_with_negative \ --train_file cord19-qa.json \ --per_gpu_train_batch_size 8 \ --learning_rate 5e-5 \ --num_train_epochs 10.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir bert-small-cord19qa \ --save_steps 0 \ --threads 8 \ --overwrite_cache \ --overwrite_output_dir ``` ## Testing the model Example usage below: ```python from transformers import pipeline qa = pipeline( "question-answering", model="NeuML/bert-small-cord19qa", tokenizer="NeuML/bert-small-cord19qa" ) qa({ "question": "What is the median incubation period?", "context": "The incubation period is around 5 days (range: 4-7 days) with a maximum of 12-13 day" }) qa({ "question": "What is the incubation period range?", "context": "The incubation period is around 5 days (range: 4-7 days) with a maximum of 12-13 day" }) qa({ "question": "What type of surfaces does it persist?", "context": "The virus can survive on surfaces for up to 72 hours such as plastic and stainless steel ." }) ``` ```json {"score": 0.5970273583242793, "start": 32, "end": 38, "answer": "5 days"} {"score": 0.999555868193891, "start": 39, "end": 56, "answer": "(range: 4-7 days)"} {"score": 0.9992726505196998, "start": 61, "end": 88, "answer": "plastic and stainless steel"} ```
Nhut/wav2vec2-large-xlsr-french
2021-04-26T08:14:32.000Z
[ "pytorch", "wav2vec2", "fr", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".DS_Store", ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Nhut
75
transformers
---
language: fr
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: wav2vec2-large-xlsr-53-French by Nhut DOAN NGUYEN
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice fr
      type: common_voice
      args: fr
    metrics:
    - name: Test WER
      type: wer
      value: xx.xx
---

# wav2vec2-large-xlsr-53-french

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on French using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "fr", split="test[:20%]")

processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-french")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the French test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "fr", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-french")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 29.31 %

## Training

V1 of the Common Voice `train` and `validation` datasets were used for training.

## Testing

20% of V6.1 of the Common Voice `test` dataset was used for testing.
Nhut/wav2vec2-large-xlsr-vietnamese
2021-03-28T21:37:07.000Z
[ "pytorch", "wav2vec2", "vi", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".DS_Store", ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Nhut
19
transformers
---
language: vi
datasets:
- common_voice
- FOSD: https://data.mendeley.com/datasets/k9sxg2twv4/4
- VIVOS: https://ailab.hcmus.edu.vn/vivos
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Vietnamese by Nhut
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice vi
      type: common_voice
      args: vi
    metrics:
    - name: Test WER
      type: wer
      value: 49.59
---

# Wav2Vec2-Large-XLSR-53-Vietnamese

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Vietnamese using the [Common Voice](https://huggingface.co/datasets/common_voice), [FOSD](https://data.mendeley.com/datasets/k9sxg2twv4/4) and [VIVOS](https://ailab.hcmus.edu.vn/vivos) datasets. When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

ENCODER = { "ia ": "iê ", "ìa ": "iề ", "ía ": "iế ", "ỉa ": "iể ", "ĩa ": "iễ ", "ịa ": "iệ ", "ya ": "yê ", "ỳa ": "yề ", "ýa ": "yế ", "ỷa ": "yể ", "ỹa ": "yễ ", "ỵa ": "yệ ", "ua ": "uô ", "ùa ": "uồ ", "úa ": "uố ", "ủa ": "uổ ", "ũa ": "uỗ ", "ụa ": "uộ ", "ưa ": "ươ ", "ừa ": "ườ ", "ứa ": "ướ ", "ửa ": "ưở ", "ữa ": "ưỡ ", "ựa ": "ượ ", "ke": "ce", "kè": "cè", "ké": "cé", "kẻ": "cẻ", "kẽ": "cẽ", "kẹ": "cẹ", "kê": "cê", "kề": "cề", "kế": "cế", "kể": "cể", "kễ": "cễ", "kệ": "cệ", "ki": "ci", "kì": "cì", "kí": "cí", "kỉ": "cỉ", "kĩ": "cĩ", "kị": "cị", "ky": "cy", "kỳ": "cỳ", "ký": "cý", "kỷ": "cỷ", "kỹ": "cỹ", "kỵ": "cỵ", "ghe": "ge", "ghè": "gè", "ghé": "gé", "ghẻ": "gẻ", "ghẽ": "gẽ", "ghẹ": "gẹ", "ghê": "gê", "ghề": "gề", "ghế": "gế", "ghể": "gể", "ghễ": "gễ", "ghệ": "gệ", "ngh": "\x80", "uyê": "\x96", "uyề": "\x97", "uyế": "\x98", "uyể": "\x99", "uyễ": "\x9a", "uyệ": "\x9b", "ng": "\x81", "ch": "\x82", "gh": "\x83", "nh": "\x84", "gi": "\x85", "ph": "\x86", "kh": "\x87", "th": "\x88", "tr": "\x89", "uy": "\x8a", "uỳ": "\x8b", "uý": "\x8c", "uỷ": "\x8d", "uỹ": "\x8e", "uỵ": "\x8f", "iê": "\x90", "iề": "\x91", "iế": "\x92", "iể": "\x93", "iễ": "\x94", "iệ": "\x95", "uô": "\x9c", "uồ": "\x9d", "uố": "\x9e", "uổ": "\x9f", "uỗ": "\xa0", "uộ": "\xa1", "ươ": "\xa2", "ườ": "\xa3", "ướ": "\xa4", "ưở": "\xa5", "ưỡ": "\xa6", "ượ": "\xa7", }

def decode_string(x):
    for k, v in list(reversed(list(ENCODER.items()))):
        x = x.replace(v, k)
    return x

test_dataset = load_dataset("common_voice", "vi", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", [decode_string(x) for x in processor.batch_decode(predicted_ids)])
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Vietnamese test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

ENCODER = { "ia ": "iê ", "ìa ": "iề ", "ía ": "iế ", "ỉa ": "iể ", "ĩa ": "iễ ", "ịa ": "iệ ", "ya ": "yê ", "ỳa ": "yề ", "ýa ": "yế ", "ỷa ": "yể ", "ỹa ": "yễ ", "ỵa ": "yệ ", "ua ": "uô ", "ùa ": "uồ ", "úa ": "uố ", "ủa ": "uổ ", "ũa ": "uỗ ", "ụa ": "uộ ", "ưa ": "ươ ", "ừa ": "ườ ", "ứa ": "ướ ", "ửa ": "ưở ", "ữa ": "ưỡ ", "ựa ": "ượ ", "ke": "ce", "kè": "cè", "ké": "cé", "kẻ": "cẻ", "kẽ": "cẽ", "kẹ": "cẹ", "kê": "cê", "kề": "cề", "kế": "cế", "kể": "cể", "kễ": "cễ", "kệ": "cệ", "ki": "ci", "kì": "cì", "kí": "cí", "kỉ": "cỉ", "kĩ": "cĩ", "kị": "cị", "ky": "cy", "kỳ": "cỳ", "ký": "cý", "kỷ": "cỷ", "kỹ": "cỹ", "kỵ": "cỵ", "ghe": "ge", "ghè": "gè", "ghé": "gé", "ghẻ": "gẻ", "ghẽ": "gẽ", "ghẹ": "gẹ", "ghê": "gê", "ghề": "gề", "ghế": "gế", "ghể": "gể", "ghễ": "gễ", "ghệ": "gệ", "ngh": "\x80", "uyê": "\x96", "uyề": "\x97", "uyế": "\x98", "uyể": "\x99", "uyễ": "\x9a", "uyệ": "\x9b", "ng": "\x81", "ch": "\x82", "gh": "\x83", "nh": "\x84", "gi": "\x85", "ph": "\x86", "kh": "\x87", "th": "\x88", "tr": "\x89", "uy": "\x8a", "uỳ": "\x8b", "uý": "\x8c", "uỷ": "\x8d", "uỹ": "\x8e", "uỵ": "\x8f", "iê": "\x90", "iề": "\x91", "iế": "\x92", "iể": "\x93", "iễ": "\x94", "iệ": "\x95", "uô": "\x9c", "uồ": "\x9d", "uố": "\x9e", "uổ": "\x9f", "uỗ": "\xa0", "uộ": "\xa1", "ươ": "\xa2", "ườ": "\xa3", "ướ": "\xa4", "ưở": "\xa5", "ưỡ": "\xa6", "ượ": "\xa7", }

def decode_string(x):
    for k, v in list(reversed(list(ENCODER.items()))):
        x = x.replace(v, k)
    return x

test_dataset = load_dataset("common_voice", "vi", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
model = Wav2Vec2ForCTC.from_pretrained("Nhut/wav2vec2-large-xlsr-vietnamese")
model.to("cuda")
chars_to_ignore_regex = '[\\\+\@\ǀ\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    # decode_string: replace the encoded letters with the original letters
    batch["pred_strings"] = [decode_string(x) for x in batch["pred_strings"]]
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 49.59 %

## Training

The Common Voice `train` and `validation` sets, together with the FOSD and VIVOS datasets, were used for training. The script used for training can be found [here](https://colab.research.google.com/drive/11pP4uVJj4SYZTzGjlCUtOHywlhYqs0cPx)
NickCavarretta/DialoGPT-small-laffy
2021-06-03T03:10:42.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "conversational", "text-generation" ]
conversational
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
NickCavarretta
82
transformers
---
tags:
- conversational
---

# My Awesome Laffy
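The card is only a title; a hedged single-turn chat sketch using the usual DialoGPT-style generation pattern:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Standard DialoGPT-style chat: encode the user utterance plus EOS,
# then decode only the newly generated reply tokens.
tokenizer = AutoTokenizer.from_pretrained("NickCavarretta/DialoGPT-small-laffy")
model = AutoModelForCausalLM.from_pretrained("NickCavarretta/DialoGPT-small-laffy")

input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```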
NikhilRamesh/Fetch_Loc
2021-05-25T06:19:00.000Z
[]
[ ".gitattributes" ]
NikhilRamesh
0
NlpHUST/gpt-neo-vi-small
2021-04-23T07:21:34.000Z
[ "pytorch", "gpt_neo", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
NlpHUST
91
transformers
---
language:
- vi
tags:
- text generation
- pytorch
---

# GPT-Neo-small for Vietnamese

The first GPT for Vietnamese.

## Model Description

GPT-Neo-vi-small is a transformer model designed using EleutherAI's replication of the GPT-3 architecture.

## Training data

GPT-Neo-vi-small was trained on a News dataset, a large-scale dataset created from news websites for the purpose of training this model.

### How to use

This example generates a different sequence each time it's run:

```py
from transformers import GPTNeoForCausalLM, GPT2Tokenizer

model = GPTNeoForCausalLM.from_pretrained("NlpHUST/gpt-neo-vi-small")
tokenizer = GPT2Tokenizer.from_pretrained("NlpHUST/gpt-neo-vi-small")

prompt = "Ngay sau Tết Nguyên đán Tân Sửu, hiện tượng giá đất tăng tại nhiều địa phương. Thị trường nhộn nhịp, tạo ra những cơn sóng sốt đất khó tin khiến bộ ngành, địa phương đưa cảnh báo."

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(input_ids, do_sample=True, temperature=1.0, max_length=1024)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
```

### Contact information

For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
NlpHUST/t5-en-vi-base
2021-04-26T03:27:01.000Z
[ "pytorch", "t5", "seq2seq", "arxiv:1706.05565", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
NlpHUST
10
transformers
# T5-EN-VI-BASE: Pretraining Text-To-Text Transfer Transformer for English-Vietnamese Translation

# Dataset

The *IWSLT'15 English-Vietnamese* data is used from [Stanford NLP group](https://nlp.stanford.edu/projects/nmt/).

For all experiments the corpus was split into training, development and test set:

| Data set    | Sentences | Download |
| ----------- | --------- | -------- |
| Training    | 133,317   | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/train-en-vi.tgz) or located in `data/train-en-vi.tgz` |
| Development | 1,553     | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/dev-2012-en-vi.tgz) or located in `data/dev-2012-en-vi.tgz` |
| Test        | 1,268     | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/test-2013-en-vi.tgz) or located in `data/test-2013-en-vi.tgz` |

## Results

The results on the test set:

| Model | BLEU (Beam Search) |
| ----- | ------------------ |
| [Luong & Manning (2015)](https://nlp.stanford.edu/pubs/luong-manning-iwslt15.pdf) | 23.30 |
| Sequence-to-sequence model with attention | 26.10 |
| Neural Phrase-based Machine Translation [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 27.69 |
| Neural Phrase-based Machine Translation + LM [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 28.07 |
| t5-en-vi-small (pretraining, without training data) | **28.46** (cased) / **29.23** (uncased) |
| t5-en-vi-small (fine-tuning with training data) | **32.38** (cased) / **33.19** (uncased) |
| t5-en-vi-base (pretraining, without training data) | **29.66** (cased) / **30.37** (uncased) |

#### Example usage

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-en-vi-base")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-en-vi-base")
model.to(device)

src = "In school , we spent a lot of time studying the history of Kim Il-Sung , but we never learned much about the outside world , except that America , South Korea , Japan are the enemies ."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=128,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```

#### Output

```
Ở trường, chúng tôi dành nhiều thời gian để nghiên cứu về lịch sử Kim Il-Sung, nhưng chúng tôi chưa bao giờ học được nhiều về thế giới bên ngoài, ngoại trừ Mỹ, Hàn Quốc, Nhật Bản là kẻ thù.
```

### Contact information

For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
NlpHUST/t5-en-vi-small
2021-04-15T07:15:50.000Z
[ "pytorch", "t5", "seq2seq", "arxiv:1706.05565", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
NlpHUST
74
transformers
# T5-EN-VI-SMALL: Pretraining Text-To-Text Transfer Transformer for English-Vietnamese Translation

# Dataset

The *IWSLT'15 English-Vietnamese* data is used from [Stanford NLP group](https://nlp.stanford.edu/projects/nmt/).

For all experiments the corpus was split into training, development and test set:

| Data set    | Sentences | Download |
| ----------- | --------- | -------- |
| Training    | 133,317   | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/train-en-vi.tgz) or located in `data/train-en-vi.tgz` |
| Development | 1,553     | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/dev-2012-en-vi.tgz) or located in `data/dev-2012-en-vi.tgz` |
| Test        | 1,268     | via [GitHub](https://github.com/stefan-it/nmt-en-vi/raw/master/data/test-2013-en-vi.tgz) or located in `data/test-2013-en-vi.tgz` |

## Results

The results on the test set:

| Model | BLEU (Beam Search) |
| ----- | ------------------ |
| [Luong & Manning (2015)](https://nlp.stanford.edu/pubs/luong-manning-iwslt15.pdf) | 23.30 |
| Sequence-to-sequence model with attention | 26.10 |
| Neural Phrase-based Machine Translation [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 27.69 |
| Neural Phrase-based Machine Translation + LM [Huang et al. (2017)](https://arxiv.org/abs/1706.05565) | 28.07 |
| t5-en-vi-small (pretraining, without training data) | **28.46** (cased) / **29.23** (uncased) |
| t5-en-vi-small (fine-tuning with training data) | **32.38** (cased) / **33.19** (uncased) |

#### Example usage

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-en-vi-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-en-vi-small")
model.to(device)

src = "In school , we spent a lot of time studying the history of Kim Il-Sung , but we never learned much about the outside world , except that America , South Korea , Japan are the enemies ."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=128,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```

#### Output

```
Ở trường, chúng tôi dành nhiều thời gian để nghiên cứu về lịch sử Kim Il-Sung, nhưng chúng tôi chưa bao giờ học được nhiều về thế giới bên ngoài, ngoại trừ Mỹ, Hàn Quốc, Nhật Bản là kẻ thù.
```

### Contact information

For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
NlpHUST/t5-small-vi-summarization
2021-04-15T07:14:48.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
NlpHUST
84
transformers
# T5-SMALL-SUMMARIZATION: Pretraining Text-To-Text Transfer Transformer for Vietnamese Text Summarization

#### Example Usage

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-small-vi-summarization")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-small-vi-summarization")
model.to(device)

src = "Theo BHXH Việt Nam, nhiều doanh nghiệp vẫn chỉ đóng BHXH cho người lao động theo mức lương. Dù quy định từ 1/1/2018, tiền lương tháng đóng BHXH gồm mức lương và thêm khoản bổ sung khác. BHXH Việt Nam vừa có báo cáo về tình hình thực hiện chính sách BHXH thời gian qua. Theo đó, tình trạng nợ, trốn đóng BHXH, BHTN vẫn xảy ra ở hầu hết các tỉnh, thành. Thống kê tới ngày 31/12/2020, tổng số nợ BHXH, BHYT, BHTN là hơn 13.500 tỷ đồng, chiếm 3,35 % số phải thu, trong đó: Số nợ BHXH bắt buộc là hơn 8.600 tỷ đồng, nợ BHTN là 335 tỷ đồng. Liên quan tới tiền lương đóng BHXH, báo cáo của BHXH Việt Nam cho thấy: Nhiều doanh nghiệp vẫn chủ yếu xây dựng thang, bảng lương để đóng BHXH bằng mức thấp nhất. Tức là bằng mức lương tối thiểu vùng, cộng thêm 7 % đối với lao động đã qua đào tạo nghề và cộng thêm 5 % hoặc 7 % đối với lao động làm nghề hoặc công việc nặng nhọc, độc hại, nguy hiểm, đặc biệt nặng nhọc độc hại và nguy hiểm. Đối với lao động giữ chức vụ, khoảng 80 % doanh nghiệp đã xây dựng thang, bảng lương cụ thể theo chức danh. Đơn cử như với chức vụ giám đốc sản xuất, giám đốc điều hành, trưởng phòng. Còn lại các doanh nghiệp xây dựng đối với lao động giữ chức vụ theo thang lương, bảng lương chuyên môn nghiệp vụ và bảng phụ cấp chức vụ, phụ cấp trách nhiệm. Thống kê của BHXH Việt Nam cũng cho thấy, đa số doanh nghiệp đã đăng ký đóng BHXH cho người lao động theo mức lương mà không có khoản bổ sung khác. Mặc dù quy định từ ngày 1/1/2018, tiền lương tháng đóng BHXH gồm mức lương và thêm khoản bổ sung khác."

tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=256,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)
```

#### Output

```
Nhiều doanh nghiệp vẫn chủ yếu xây dựng thang, bảng lương để đóng BHXH bằng mức thấp nhất. Dù quy định từ 1/1/2018, tiền lương tháng đóng BHXH gồm mức lương và thêm khoản bổ sung khác. Thống kê của BHXH Việt Nam cho thấy, nhiều doanh nghiệp vẫn chỉ đóng BHXH cho người lao động theo mức lương mà không có khoản bổ sung khác.
```

### Contact information

For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
NlpHUST/t5-vi-en-base
2021-04-26T11:08:56.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
NlpHUST
11
transformers
---
language:
- vi
tags:
- t5
- seq2seq
---

# Machine translation for Vietnamese

## Model Description

T5-vi-en-base is a transformer model for Vietnamese-to-English machine translation based on the T5 architecture.

## Training data

T5-vi-en-base was trained on 4M sentence pairs (English, Vietnamese).

### How to use

```py
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-vi-en-base")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-vi-en-base")
model.to(device)

src = "Theo lãnh đạo Sở Y tế, 3 người này không có triệu chứng sốt, ho, khó thở, đã được lấy mẫu xét nghiệm và cách ly tập trung."
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=256,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)

# Expected output:
# According to the head of the Department of Health, the three people had no symptoms of fever,
# cough, shortness of breath, were taken samples for testing and concentrated quarantine.
```
NlpHUST/t5-vi-en-small
2021-04-22T02:49:50.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
NlpHUST
36
transformers
---
language:
- vi
tags:
- t5
- seq2seq
---

# Machine translation for Vietnamese

## Model Description

T5-vi-en-small is a transformer model for Vietnamese-to-English machine translation based on the T5 architecture.

## Training data

T5-vi-en-small was trained on 4M sentence pairs (English, Vietnamese).

### How to use

```py
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    print('There are %d GPU(s) available.' % torch.cuda.device_count())
    print('We will use the GPU:', torch.cuda.get_device_name(0))
else:
    print('No GPU available, using the CPU instead.')
    device = torch.device("cpu")

model = T5ForConditionalGeneration.from_pretrained("NlpHUST/t5-vi-en-small")
tokenizer = T5Tokenizer.from_pretrained("NlpHUST/t5-vi-en-small")
model.to(device)

src = "Indonesia phỏng đoán nguyên nhân tàu ngầm chở 53 người mất tích bí ẩn"
tokenized_text = tokenizer.encode(src, return_tensors="pt").to(device)
model.eval()
summary_ids = model.generate(
    tokenized_text,
    max_length=256,
    num_beams=5,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True
)
output = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(output)

# Expected output:
# Indonesia anticipates the cause of the submarine transporting 53 mysterious missing persons
```
NlpHUST/vi-electra-base
2021-03-18T04:21:33.000Z
[ "pytorch", "electra", "pretraining", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "vocab.txt" ]
NlpHUST
16
transformers
# ELECTRA

## Introduction

**ELECTRA** is a method for self-supervised language representation learning. It can be used to pre-train transformer networks using relatively little compute. ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf). At small scale, ELECTRA achieves strong results even when trained on a single GPU. At large scale, ELECTRA achieves state-of-the-art results on the [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) dataset.

### Contact information

For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
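The card above includes no usage snippet. Below is a minimal, untested sketch for inspecting the discriminator's per-token real/fake scores, assuming the checkpoint loads with the standard ELECTRA classes from `transformers` (the Vietnamese example sentence is illustrative):

```python
import torch
from transformers import ElectraTokenizer, ElectraForPreTraining

# Assumption: the checkpoint follows the standard ELECTRA discriminator layout
tokenizer = ElectraTokenizer.from_pretrained("NlpHUST/vi-electra-base")
model = ElectraForPreTraining.from_pretrained("NlpHUST/vi-electra-base")

sentence = "Hà Nội là thủ đô của Việt Nam ."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one real/fake score per token

# A positive score means the discriminator predicts the token was replaced ("fake")
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, (logits > 0).int().squeeze().tolist())))
```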
NlpHUST/vi-electra-small
2020-12-07T04:18:04.000Z
[ "pytorch", "electra", "pretraining", "transformers" ]
[ ".gitattributes", "config.json", "pytorch_model.bin", "vocab.txt" ]
NlpHUST
11
transformers
NlpHUST/vibert4news-base-cased
2021-04-09T02:35:04.000Z
[ "pytorch", "masked-lm", "vn", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "tokenizer_config.json", "vocab.txt" ]
NlpHUST
1,004
transformers
---
language: vi
---

# BERT for Vietnamese trained on more than 20 GB of news data

Applied to the sentiment analysis task using [AIViVN's comments dataset](https://www.aivivn.com/contests/6).

The model achieved 0.90268 on the public leaderboard (the winner's score was 0.90087).
Bert4news is also used in ViNLP, a Vietnamese toolkit for word segmentation and Named Entity Recognition (https://github.com/bino282/ViNLP).

We use SentencePiece for word segmentation and basic BERT tokenization, with the same configuration as BERT-base and lowercase = False.

You can download the trained model:

- [tensorflow](https://drive.google.com/file/d/1X-sRDYf7moS_h61J3L79NkMVGHP-P-k5/view?usp=sharing).
- [pytorch](https://drive.google.com/file/d/11aFSTpYIurn-oI2XpAmcCTccB_AonMOu/view?usp=sharing).

Use with huggingface/transformers:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("NlpHUST/vibert4news-base-cased")
bert_model = BertModel.from_pretrained("NlpHUST/vibert4news-base-cased")

line = "Tôi là sinh viên trường Bách Khoa Hà Nội ."
input_id = tokenizer.encode(line, add_special_tokens=True)
att_mask = [int(token_id > 0) for token_id in input_id]
input_ids = torch.tensor([input_id])
att_masks = torch.tensor([att_mask])
with torch.no_grad():
    features = bert_model(input_ids, att_masks)

print(features)
```

# Vietnamese toolkit with BERT

ViNLP is an annotation system for Vietnamese. It fine-tunes the pretrained [Bert4news](https://github.com/bino282/bert4news/) on Vietnamese NLP tasks (word segmentation and Named Entity Recognition) and achieves high accuracy.

### Installation

```bash
git clone https://github.com/bino282/ViNLP.git
cd ViNLP
python setup.py develop build
```

### Test Segmentation

The model achieved an F1 score of 0.984 on the VLSP 2013 dataset.

|Model | F1 |
|--------|-----------|
| **BertVnTokenizer** | 98.40 |
| **DongDu** | 96.90 |
| **JvnSegmenter-Maxent** | 97.00 |
| **JvnSegmenter-CRFs** | 97.06 |
| **VnTokenizer** | 97.33 |
| **UETSegmenter** | 97.87 |
| **VnCoreNLP (i.e. RDRsegmenter)** | 97.90 |

```python
from ViNLP import BertVnTokenizer
tokenizer = BertVnTokenizer()
sentences = tokenizer.split(["Tổng thống Donald Trump ký sắc lệnh cấm mọi giao dịch của Mỹ với ByteDance và Tecent - chủ sở hữu của 2 ứng dụng phổ biến TikTok và WeChat sau 45 ngày nữa."])
print(sentences[0])
```

```
Tổng_thống Donald_Trump ký sắc_lệnh cấm mọi giao_dịch của Mỹ với ByteDance và Tecent - chủ_sở_hữu của 2 ứng_dụng phổ_biến TikTok và WeChat sau 45 ngày nữa .
```

### Test Named Entity Recognition

The model achieved an F1 score of 0.786 on VLSP 2018 for all named entities, including nested entities.

|Model | F1 |
|--------|-----------|
| **BertVnNer** | 78.60 |
| **VNER Attentive Neural Network** | 77.52 |
| **vietner CRF (ngrams + word shapes + cluster + w2v)** | 76.63 |
| **ZA-NER BiLSTM** | 74.70 |

```python
from ViNLP import BertVnNer
bert_ner_model = BertVnNer()
sentence = "Theo SCMP, báo cáo của CSIS với tên gọi Định hình Tương lai Chính sách của Mỹ với Trung Quốc cũng cho thấy sự ủng hộ tương đối rộng rãi của các chuyên gia về việc cấm Huawei, tập đoàn viễn thông khổng lồ của Trung Quốc"
entities = bert_ner_model.annotate([sentence])
print(entities)
```

```
[{'ORGANIZATION': ['SCMP', 'CSIS', 'Huawei'], 'LOCATION': ['Mỹ', 'Trung Quốc']}]
```

Run training with the base config:

```bash
python train_pytorch.py \
  --model_path=bert4news.pytorch \
  --max_len=200 \
  --batch_size=16 \
  --epochs=6 \
  --lr=2e-5
```

### Contact information

For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).
Nlpxyz/firstnlp
2021-03-29T14:00:14.000Z
[]
[ ".gitattributes", "README.md" ]
Nlpxyz
0
Nomi97/Chatbot_QA
2020-07-06T13:38:50.000Z
[ "pytorch", "longformer", "question-answering", "transformers" ]
question-answering
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Nomi97
28
transformers
Noricum/wav2vec2-large-xlsr-53-german
2021-05-24T16:25:41.000Z
[ "pytorch", "wav2vec2", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Noricum
13
transformers
# Wav2vec2 German Model

This model has been fine-tuned from wav2vec2-large-xlsr-53 on the German CommonVoice dataset.
It achieves a WER of 11.26 on the full test dataset.
It was trained with the code provided by [Max Idahl](https://huggingface.co/maxidl/wav2vec2-large-xlsr-german), with small adjustments to data preprocessing and training parameters.

You can use it to transcribe your own files with the following code. Please note that your input file must be a *.wav file, encoded at 16 kHz and single channel. To convert an audio file using ffmpeg use: "ffmpeg -i input.wav -ar 16000 -ac 1 output.wav". The transcription process is very memory-intensive (around 10 GB per 10 seconds of audio). If the script ends with "Killed" it means the Python interpreter ran out of memory. In this case, try with a shorter audio file.

```python
# !pip3 install transformers torch soundfile
import soundfile as sf
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer

# load pretrained model
tokenizer = Wav2Vec2Tokenizer.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german")
model = Wav2Vec2ForCTC.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german")

# load audio
audio_input, _ = sf.read("/path/to/your/audio.wav")

# transcribe
input_values = tokenizer(audio_input, return_tensors="pt").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.batch_decode(predicted_ids)[0]
print(str(transcription))
```

To evaluate the model on the full CommonVoice test dataset, run this script:

```python
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "de", split="test")  # use "test[:1%]" for 1% sample
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german")
model = Wav2Vec2ForCTC.from_pretrained("Noricum/wav2vec2-large-xlsr-53-german")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=4)  # batch_size=8 -> requires ~14.5GB GPU memory

# Chunked version, see https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/5:
import jiwer

def chunked_wer(targets, predictions, chunk_size=None):
    if chunk_size is None:
        return jiwer.wer(targets, predictions)
    start = 0
    end = chunk_size
    H, S, D, I = 0, 0, 0, 0
    while start < len(targets):
        chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
        H = H + chunk_metrics["hits"]
        S = S + chunk_metrics["substitutions"]
        D = D + chunk_metrics["deletions"]
        I = I + chunk_metrics["insertions"]
        start += chunk_size
        end += chunk_size
    return float(S + D + I) / float(H + S + D)

print("Total (chunk_size=1000), WER: {:2f}".format(100 * chunked_wer(result["pred_strings"], result["sentence"], chunk_size=1000)))
```

Output: Total (chunk_size=1000), WER: 11.256522
Norod78/english-sienfeld-distilgpt2
2021-05-21T10:58:11.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Norod78
206
transformers
Norod78/hebrew-bad_wiki-gpt_neo-tiny
2021-05-16T12:38:15.000Z
[ "pytorch", "gpt_neo", "causal-lm", "he", "transformers", "license:mit", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
Norod78
1,021
transformers
---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "מתמטיקה:"
- text: "עליית המכונות"
- text: "ויקיפדיה העברית"
- text: "האירוויזיון הוא"
- text: "דוד בן-גוריון היה"
license: mit
---

# hebrew-bad_wiki-gpt_neo-tiny

Hebrew nonsense generation model which produces really bad wiki-abstract text.

This model was fine-tuned from [hebrew-gpt_neo-tiny](https://huggingface.co/Norod78/hebrew-gpt_neo-tiny), which was previously trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo).
Fine-tuning on the wiki-abstract text was done using [@minimaxir](https://twitter.com/minimaxir)'s [aitextgen](https://github.com/minimaxir/aitextgen).

## Datasets

[Hebrew Wikipedia Dump](https://dumps.wikimedia.org/hewiki/latest/) (hewiki abstract) from May 2020
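The card does not include usage code; the sketch below mirrors the generation snippet from the base model's card and is a minimal, untested example (the prompt is one of the widget examples above):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-bad_wiki-gpt_neo-tiny", pad_token_id=tokenizer.eos_token_id)

prompt = "ויקיפדיה העברית"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sampling parameters follow the base model card; adjust to taste
outputs = model.generate(input_ids, do_sample=True, max_length=128, top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```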
Norod78/hebrew-gpt_neo-small
2021-05-13T13:43:00.000Z
[ "pytorch", "gpt_neo", "causal-lm", "he", "transformers", "license:mit", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Norod78
110
transformers
---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "עוד בימי קדם"
- text: "קוראים לי דורון ואני מעוניין ל"
- text: "קוראים לי איציק ואני חושב ש"
- text: "החתול שלך מאוד חמוד ו"
license: mit
---

# hebrew-gpt_neo-small

Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). The model was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.

## Datasets

1. An assortment of various Hebrew corpora - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ)
2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)
   The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.

## Training Config

Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-small/configs) <BR>

## Usage

### Google Colab Notebook

Available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-small/Norod78_hebrew_gpt_neo_small_Colab.ipynb) <BR>

#### Simple usage sample code

```python
!pip install tokenizers==0.10.2 transformers==4.6.0

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-small")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-small", pad_token_id=tokenizer.eos_token_id)

prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000

import numpy as np
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count() if torch.cuda.is_available() else 0

print(f"device: {device}, n_gpu: {n_gpu}")

np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
    torch.cuda.manual_seed_all(seed)

model.to(device)

encoded_prompt = tokenizer.encode(
    prompt_text, add_special_tokens=False, return_tensors="pt")

encoded_prompt = encoded_prompt.to(device)

if encoded_prompt.size()[-1] == 0:
    input_ids = None
else:
    input_ids = encoded_prompt

print("input_ids = " + str(input_ids))

if input_ids is not None:
    max_len += len(encoded_prompt[0])
    if max_len > 2048:
        max_len = 2048

print("Updated max_len = " + str(max_len))

stop_token = "<|endoftext|>"
new_lines = "\n\n\n"

sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=max_len,
    top_k=50,
    top_p=0.95,
    num_return_sequences=sample_output_num
)

print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):

    text = tokenizer.decode(sample_output, skip_special_tokens=True)

    # Remove all text after the stop token
    text = text[: text.find(stop_token) if stop_token else None]

    # Remove all text after 3 newlines
    text = text[: text.find(new_lines) if new_lines else None]

    print("\n{}: {}".format(i, text))
    print("\n" + 100 * '-')
```
Norod78/hebrew-gpt_neo-tiny
2021-05-13T13:42:10.000Z
[ "pytorch", "gpt_neo", "causal-lm", "he", "transformers", "license:mit", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Norod78
186
transformers
---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "עוד בימי קדם"
- text: "קוראים לי דורון ואני מעוניין ל"
- text: "קוראים לי איציק ואני חושב ש"
- text: "החתול שלך מאוד חמוד ו"
license: mit
---

# hebrew-gpt_neo-tiny

Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). The model was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.

## Datasets

1. An assortment of various Hebrew corpora - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ)
2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)
   The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.

## Training Config

Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-tiny/configs) <BR>

## Usage

### Google Colab Notebook

Available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-tiny/Norod78_hebrew_gpt_neo_tiny_Colab.ipynb) <BR>

#### Simple usage sample code

```python
!pip install tokenizers==0.10.2 transformers==4.6.0

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-tiny")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-tiny", pad_token_id=tokenizer.eos_token_id)

prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000

import numpy as np
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count() if torch.cuda.is_available() else 0

print(f"device: {device}, n_gpu: {n_gpu}")

np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
    torch.cuda.manual_seed_all(seed)

model.to(device)

encoded_prompt = tokenizer.encode(
    prompt_text, add_special_tokens=False, return_tensors="pt")

encoded_prompt = encoded_prompt.to(device)

if encoded_prompt.size()[-1] == 0:
    input_ids = None
else:
    input_ids = encoded_prompt

print("input_ids = " + str(input_ids))

if input_ids is not None:
    max_len += len(encoded_prompt[0])
    if max_len > 1024:
        max_len = 1024

print("Updated max_len = " + str(max_len))

stop_token = "<|endoftext|>"
new_lines = "\n\n\n"

sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=max_len,
    top_k=50,
    top_p=0.95,
    num_return_sequences=sample_output_num
)

print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):

    text = tokenizer.decode(sample_output, skip_special_tokens=True)

    # Remove all text after the stop token
    text = text[: text.find(stop_token) if stop_token else None]

    # Remove all text after 3 newlines
    text = text[: text.find(new_lines) if new_lines else None]

    print("\n{}: {}".format(i, text))
    print("\n" + 100 * '-')
```
Norod78/hebrew-gpt_neo-xl
2021-05-13T13:41:09.000Z
[ "pytorch", "gpt_neo", "causal-lm", "he", "transformers", "license:mit", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Norod78
103
transformers
---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "עוד בימי קדם"
- text: "קוראים לי דורון ואני מעוניין ל"
- text: "קוראים לי איציק ואני חושב ש"
- text: "החתול שלך מאוד חמוד ו"
- text: "ובדרך ראינו שהגן"
license: mit
---

# hebrew-gpt_neo-xl

Hebrew text generation model based on [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). The model was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.

## Datasets

1. An assortment of various Hebrew corpora - I have made it available [here](https://mega.nz/folder/CodSSA4R#4INvMes-56m_WUi7jQMbJQ)
2. oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)
   The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.

## Training Config

Available [here](https://github.com/Norod/hebrew-gpt_neo/tree/main/hebrew-gpt_neo-xl/configs) <BR>

## Usage

### Google Colab Notebook

Available [here](https://colab.research.google.com/github/Norod/hebrew-gpt_neo/blob/main/hebrew-gpt_neo-xl/Norod78_hebrew_gpt_neo_xl_Colab.ipynb) <BR>

#### Simple usage sample code

```python
!pip install tokenizers==0.10.2 transformers==4.6.0

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-xl")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-xl", pad_token_id=tokenizer.eos_token_id)

prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000

import numpy as np
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = torch.cuda.device_count() if torch.cuda.is_available() else 0

print(f"device: {device}, n_gpu: {n_gpu}")

np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
    torch.cuda.manual_seed_all(seed)

model.to(device)

encoded_prompt = tokenizer.encode(
    prompt_text, add_special_tokens=False, return_tensors="pt")

encoded_prompt = encoded_prompt.to(device)

if encoded_prompt.size()[-1] == 0:
    input_ids = None
else:
    input_ids = encoded_prompt

print("input_ids = " + str(input_ids))

if input_ids is not None:
    max_len += len(encoded_prompt[0])
    if max_len > 2048:
        max_len = 2048

print("Updated max_len = " + str(max_len))

stop_token = "<|endoftext|>"
new_lines = "\n\n\n"

sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=max_len,
    top_k=50,
    top_p=0.95,
    num_return_sequences=sample_output_num
)

print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):

    text = tokenizer.decode(sample_output, skip_special_tokens=True)

    # Remove all text after the stop token
    text = text[: text.find(stop_token) if stop_token else None]

    # Remove all text after 3 newlines
    text = text[: text.find(new_lines) if new_lines else None]

    print("\n{}: {}".format(i, text))
    print("\n" + 100 * '-')
```
Norod78/hebrew-project_ben_yehuda-gpt_neo-small
2021-05-16T09:42:57.000Z
[ "pytorch", "gpt_neo", "causal-lm", "he", "transformers", "license:mit", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
Norod78
24
transformers
---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "יום אחד, "
- text: "זה עתה התעורר"
- text: "וזה הצחוק, אמרו חכ"
- text: "אשה צעי"
license: mit
---

# hebrew-project_ben_yehuda-gpt_neo-small

Hebrew story text generation model in the style of the texts available in [Project Ben Yehuda](https://benyehuda.org/), fine-tuned from [hebrew-gpt_neo-small](https://huggingface.co/Norod78/hebrew-gpt_neo-small), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo).

## Dataset

A stripped text dump from [Project Ben Yehuda's public_domain_dump 2021-02](https://github.com/projectbenyehuda/public_domain_dump/releases/tag/2021-02)
Norod78/hebrew_poetry-gpt_neo-small
2021-05-13T20:14:33.000Z
[ "pytorch", "gpt_neo", "causal-lm", "he", "transformers", "license:mit", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
Norod78
2,364
transformers
---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "פעם אחת לפני שנ"
- text: "הים כחול ואני ח"
- text: "שם היצירה:"
- text: "כשהמכונות"
license: mit
---

# hebrew_poetry-gpt_neo-small

Hebrew poetry text generation model, fine-tuned from [hebrew-gpt_neo-small](https://huggingface.co/Norod78/hebrew-gpt_neo-small), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo).
Fine-tuning was done using [@minimaxir](https://twitter.com/minimaxir)'s [aitextgen](https://github.com/minimaxir/aitextgen).

## Datasets

1. Text from [New stage](http://stage.co.il/)
2. A dataset containing Hebrew lyrics
Norod78/hebrew_poetry-gpt_neo-tiny
2021-05-13T20:07:40.000Z
[ "pytorch", "gpt_neo", "causal-lm", "he", "transformers", "license:mit", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Norod78
156
transformers
---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "שתי רכבות דוהרות בתוך עיני"
- text: "הים כחול ואני"
- text: "שם היצירה:"
- text: "רציתי"
license: mit
---

# hebrew_poetry-gpt_neo-tiny

Hebrew poetry text generation model, fine-tuned from [hebrew-gpt_neo-tiny](https://huggingface.co/Norod78/hebrew-gpt_neo-tiny), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo). The base model was trained on a TPUv3-8, which was made available to me via the [TPU Research Cloud](https://sites.research.google/trc/) Program.

## Datasets

1. Text from [New stage](http://stage.co.il/)
2. A dataset containing Hebrew lyrics
Norod78/hebrew_stories-gpt_neo-small
2021-06-12T09:52:18.000Z
[ "pytorch", "gpt_neo", "causal-lm", "he", "transformers", "license:mit", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "vocab.json" ]
Norod78
140
transformers
---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "תריסר מכשפות סג"
- text: "פעם אחת, לפני שנים רבות"
- text: "הרמיוני הסתירה את"
- text: "לפתע, אור ירוק"
license: mit
---

# hebrew_stories-gpt_neo-small

Hebrew story-text generation model, fine-tuned from [hebrew-gpt_neo-small](https://huggingface.co/Norod78/hebrew-gpt_neo-small), which was trained using [EleutherAI's gpt-neo](https://github.com/EleutherAI/gpt-neo).
Fine-tuning was done using [@minimaxir](https://twitter.com/minimaxir)'s [aitextgen](https://github.com/minimaxir/aitextgen).

## Dataset

Text from various Hebrew books
Norod78/hewiki-articles-distilGPT2py-il
2021-05-21T10:59:10.000Z
[ "pytorch", "tf", "jax", "gpt2", "lm-head", "causal-lm", "he", "transformers", "license:mit", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.json" ]
Norod78
63
transformers
---
language: he
thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "<|startoftext|>החוק השני של מועדון קרב הוא"
- text: "<|startoftext|>ראש הממשלה בן גוריון"
- text: "<|startoftext|>למידת מכונה (סרט)"
- text: "<|startoftext|>מנשה פומפרניקל"
- text: "<|startoftext|>אי שוויון "
license: mit
---

# hewiki-articles-distilGPT2py-il

## A tiny GPT2 model for generating Hebrew text

A distilGPT2 sized model. <br>
Training data was hewiki-20200701-pages-articles-multistream.xml.bz2 from https://dumps.wikimedia.org/hewiki/20200701/ <br>
The XML was converted to plain text using Wikipedia Extractor http://medialab.di.unipi.it/wiki/Wikipedia_Extractor <br>
I then added <|startoftext|> and <|endoftext|> markers and deleted empty lines. <br>

#### How to use

```python
import torch
import torch.nn as nn
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("Norod78/hewiki-articles-distilGPT2py-il")
model = GPT2LMHeadModel.from_pretrained("Norod78/hewiki-articles-distilGPT2py-il").eval()

bos_token = tokenizer.bos_token  # Beginning of sentence
eos_token = tokenizer.eos_token  # End of sentence

def generate_word(model, tokens_tensor, temperature=1.0):
    """
    Sample a word given a tensor of tokens of previous words from a model. Given the words we have,
    sample a plausible word. Temperature is used for controlling randomness.
    If using temperature==0 we simply use a greedy arg max.
    Else, we sample from a multinomial distribution using a lower inverse temperature to allow for more randomness to escape repetitions.
    """
    with torch.no_grad():
        outputs = model(tokens_tensor)
        predictions = outputs[0]

    if temperature > 0:
        # Make the distribution more or less skewed based on the temperature
        predictions = outputs[0] / temperature

        # Sample from the distribution
        softmax = nn.Softmax(dim=0)
        predicted_index = torch.multinomial(softmax(predictions[0, -1, :]), 1).item()

    # Simply take the arg-max of the distribution
    else:
        predicted_index = torch.argmax(predictions[0, -1, :]).item()

    # Decode the encoding to the corresponding word
    predicted_text = tokenizer.decode([predicted_index])
    return predicted_text

def generate_sentence(model, tokenizer, initial_text, temperature=1.0):
    """ Generate a sentence given some initial text using a model and a tokenizer.
    Returns the new sentence. """

    # Encode a text inputs
    text = ""
    sentence = text

    # We avoid an infinite loop by setting a maximum range
    for i in range(0, 84):
        indexed_tokens = tokenizer.encode(initial_text + text)

        # Convert indexed tokens in a PyTorch tensor
        tokens_tensor = torch.tensor([indexed_tokens])

        new_word = generate_word(model, tokens_tensor, temperature=temperature)

        # Here the temperature value is gradually raised toward (but kept below) 1.0 with each
        # generated word; this ensures that the sentence (ending) makes more sense while
        # leaving some randomness in.
        if temperature < (1 - 0.008):
            temperature += 0.008
        else:
            temperature = 0.996

        text = text + new_word

        # Stop generating new words when we have reached the end of the line or the text
        if eos_token in new_word:
            # returns new sentence and whether the text is done
            return (text.replace(eos_token, "").strip(), True)
        elif '/' in new_word:
            return (text.strip(), False)
        elif bos_token in new_word:
            return (text.replace(bos_token, "").strip(), False)

    return (text, True)

for output_num in range(1, 5):
    init_text = "בוקר טוב"
    text = bos_token + init_text
    for i in range(0, 84):
        sentence = generate_sentence(model, tokenizer, text, temperature=0.9)
        text = init_text + sentence[0]
        print(text)
        if sentence[1]:
            break
```
NovaChrono/twervy
2021-06-03T11:55:39.000Z
[ "pytorch", "gpt2", "lm-head", "causal-lm", "transformers", "conversational", "text-generation" ]
conversational
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer.json", "tokenizer_config.json", "vocab.json" ]
NovaChrono
40
transformers
---
tags:
- conversational
---

# My Awesome Model
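The card itself gives no usage details. Based on the `conversational` tag, a generic single-turn chat sketch for a DialoGPT-style GPT-2 checkpoint (an assumption, not documented by the author) might look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NovaChrono/twervy")
model = AutoModelForCausalLM.from_pretrained("NovaChrono/twervy")

# Encode the user message followed by the end-of-sequence token
input_ids = tokenizer.encode("Hello, how are you?" + tokenizer.eos_token, return_tensors="pt")

# Generate a reply and strip the prompt from the decoded output
reply_ids = model.generate(input_ids, max_length=100, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```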
NtDNlp/cmcbert
2021-04-23T01:25:55.000Z
[ "pytorch", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "tokenizer_config.json", "vocab.txt", ".idea/.gitignore", ".idea/cmcbert.iml", ".idea/misc.xml", ".idea/modules.xml", ".idea/vcs.xml", ".idea/inspectionProfiles/profiles_settings.xml" ]
NtDNlp
7
transformers
NtDNlp/sentence-embedding-vietnamese
2021-05-27T08:51:12.000Z
[ "pytorch", "xlm-roberta", "transformers" ]
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentence_bert_config.json", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json" ]
NtDNlp
46
transformers
# EmbeddingSimilarityEvaluator: Evaluating the model on the STS.en-en.txt dataset in epoch 2 after 26000 steps:

| Type | Pearson | Spearman |
| ----------- | ----------- | ----------- |
| Cosine | 0.7650 | 0.8095 |
| Euclidean | 0.8089 | 0.8010 |
| Cosine | 0.8075 | 0.7999 |
| Euclidean | 0.7531 | 0.7680 |
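The card reports similarity scores but no usage code. A minimal sketch for producing sentence embeddings with mean pooling (an assumption -- the card does not document the pooling strategy used at training time; the example sentences are illustrative):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("NtDNlp/sentence-embedding-vietnamese")
model = AutoModel.from_pretrained("NtDNlp/sentence-embedding-vietnamese")

def mean_pooling(token_embeddings, attention_mask):
    # Average the token embeddings, ignoring padding positions
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

sentences = ["Tôi là sinh viên.", "Tôi đang học đại học."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**inputs)[0]

embeddings = mean_pooling(token_embeddings, inputs["attention_mask"])
print(torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0).item())
```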
Ochiroo/tiny_mn_gpt
2021-05-21T10:59:47.000Z
[ "tf", "gpt2", "lm-head", "causal-lm", "mn", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "README.md", "config.json", "merges.txt", "special_tokens_map.json", "tf_model.h5", "tokenizer_config.json", "vocab.json" ]
Ochiroo
16
transformers
---
language: mn
---

# GPT2-Mongolia

## Model description

GPT-2 is a transformers model pretrained on a very small corpus of Mongolian news data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.

## How to use

```python
from transformers import TFGPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('Ochiroo/tiny_mn_gpt')
model = TFGPT2LMHeadModel.from_pretrained('Ochiroo/tiny_mn_gpt')

text = "Намайг Эрдэнэ-Очир гэдэг. Би"
input_ids = tokenizer.encode(text, return_tensors='tf')
beam_outputs = model.generate(
    input_ids,
    max_length=25,
    num_beams=5,
    temperature=0.7,
    no_repeat_ngram_size=2,
    num_return_sequences=5
)

print(tokenizer.decode(beam_outputs[0]))
```

## Training data and biases

Trained on a 500 MB Mongolian news dataset (IKON) on an RTX 2060.
Ogayo/Hel-ach-en
2020-12-11T21:30:01.000Z
[ "pytorch", "marian", "seq2seq", "ach", "en", "dataset:JW300", "transformers", "translation", "license:cc-by-4.0", "text2text-generation" ]
translation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Ogayo
10
transformers
---
language:
- ach
- en
tags:
- translation
license: cc-by-4.0
datasets:
- JW300
metrics:
- bleu
---

# HEL-ACH-EN

## Model description

MT model translating Acholi to English, initialized with weights from [opus-mt-luo-en](https://huggingface.co/Helsinki-NLP/opus-mt-luo-en) on HuggingFace.

## Intended uses & limitations

Machine Translation experiments. Do not use for sensitive tasks.

#### How to use

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Ogayo/Hel-ach-en")
model = AutoModelForSeq2SeqLM.from_pretrained("Ogayo/Hel-ach-en")
```

#### Limitations and bias

Trained on Jehovah's Witnesses data, so it contains their and Christian views.

## Training data

Trained on OPUS JW300 data.
Initialized with weights from [opus-mt-luo-en](https://huggingface.co/Helsinki-NLP/opus-mt-luo-en?text=Bed+gi+nyasi+mar+chieng%27+nyuol+mopong%27+gi+mor%21#model_card)

## Training procedure

Removed duplicates and rows with no alphabetic characters. Trained on a GPU.

## Eval results

testset | BLEU
--- | ---
JW300.luo.en | 46.1
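The loading snippet above can be extended into a full translation call; a minimal, untested sketch (the input string is a placeholder to be replaced with a real Acholi sentence):

```python
# Placeholder input -- substitute a real Acholi sentence
batch = tokenizer(["<Acholi sentence here>"], return_tensors="pt", padding=True)
generated_ids = model.generate(**batch)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))
```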
Ogayo/ach-en-translator
2020-11-11T06:03:10.000Z
[]
[ ".gitattributes" ]
Ogayo
0
Ogayo/mt-ach-en
2021-04-23T06:46:11.000Z
[ "pytorch", "marian", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "source.spm", "special_tokens_map.json", "target.spm", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
Ogayo
8
transformers
Ogayo/mt-adh-en
2021-04-23T05:48:15.000Z
[ "pytorch", "marian", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "source.spm", "special_tokens_map.json", "target.spm", "tokenizer_config.json", "vocab.json" ]
Ogayo
7
transformers
Ogayo/mt-en-ach
2021-04-23T06:42:54.000Z
[ "pytorch", "marian", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "source.spm", "special_tokens_map.json", "target.spm", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
Ogayo
7
transformers
Ogayo/mt-en-adh
2021-04-23T05:00:15.000Z
[ "pytorch", "marian", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "source.spm", "special_tokens_map.json", "target.spm", "tokenizer_config.json", "vocab.json" ]
Ogayo
6
transformers
OliverZC/HuggingFace
2020-12-28T16:36:30.000Z
[]
[ ".gitattributes" ]
OliverZC
0
Olmik43/bR4mlOsmlOY
2021-01-19T10:58:04.000Z
[]
[ ".gitattributes" ]
Olmik43
0
Onlyblacktea/my_transformers
2021-06-11T08:12:57.000Z
[]
[ ".gitattributes" ]
Onlyblacktea
0
PJH10/tutorial
2021-04-15T04:29:58.000Z
[]
[ ".gitattributes" ]
PJH10
0
Parth/boolean
2020-08-25T13:52:37.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
Parth
380
transformers
Parth/mT5-question-generator
2020-12-01T03:38:27.000Z
[ "pytorch", "mt5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin" ]
Parth
127
transformers
```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer

model = MT5ForConditionalGeneration.from_pretrained("Parth/mT5-question-generator")
tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
```
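The card does not document the expected input format for question generation. Assuming a standard seq2seq call, generation might look like the sketch below (the context string and decoding parameters are illustrative, not from the author):

```python
context = "Hugging Face is a company based in New York City."
input_ids = tokenizer(context, return_tensors="pt").input_ids

output_ids = model.generate(input_ids, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```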
Parth/result
2020-08-25T06:30:38.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin" ]
Parth
1,331
transformers
Pascal/model_name
2021-01-22T13:39:15.000Z
[]
[ ".gitattributes" ]
Pascal
0
Paul012/bart-model
2021-04-10T04:04:29.000Z
[]
[ ".gitattributes" ]
Paul012
0
PaulAdversarial/PAN_twitter_hate_speech_2021_ES_MT5
2021-05-30T09:58:47.000Z
[ "pytorch", "mt5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "readme.md", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
PaulAdversarial
14
transformers
## An MT5ForConditionalGeneration trained on 2 tasks from the PAN Profiling Hate Speech Spreaders on Twitter dataset (ES):

* topic attribution - topics were assigned with the BertTopic library using embeddings from the `Hate-speech-CNERG/dehatebert-mono-spanish` BERT model (train and test sets from the PAN task)
* hate speech identification (train set from the PAN task)

In order to generate the tone of a comment, use the prefix **hater classification:**
PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_author_ishatespeach
2021-05-27T13:21:26.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
PaulAdversarial
28
transformers
## A T5ForConditionalGeneration trained on 3 tasks from the PAN Profiling Hate Speech Spreaders on Twitter dataset (EN):

* author attribution (train and test sets from the PAN task)
* topic attribution - topics were assigned with the BertTopic library using embeddings from the `cardiffnlp/bertweet-base-hate` RoBERTa model (train and test sets from the PAN task)
* hate speech identification (train set from the PAN task)

In order to generate the tone of a comment, use the prefix **hater classification:**
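A minimal, untested sketch using the documented **hater classification:** prefix (the tweet text is illustrative):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_author_ishatespeach")
model = T5ForConditionalGeneration.from_pretrained("PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_author_ishatespeach")

# Prepend the documented task prefix to the tweet text
input_ids = tokenizer("hater classification: I can't believe they said that", return_tensors="pt").input_ids
output_ids = model.generate(input_ids)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```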
PaulAdversarial/T5_PAN_Hate_Speech_Twitter_topic_ishatespeach
2021-05-27T19:58:42.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
PaulAdversarial
32
transformers
A T5ForConditionalGeneration trained on 2 tasks from the PAN Profiling Hate Speech Spreaders on Twitter dataset (EN):

* topic attribution - topics were assigned with the BertTopic library using embeddings from the `cardiffnlp/bertweet-base-hate` RoBERTa model (train and test sets from the PAN task)
* hate speech identification (train set from the PAN task)

In order to generate the tone of a comment, use the prefix **hater classification:**
Peltarion/xlm-roberta-longformer-base-4096
2021-05-05T18:20:25.000Z
[ "pytorch", "xlm-roberta", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "sentencepiece.bpe.model", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin" ]
Peltarion
22
transformers
## XLM-R Longformer Model

XLM-R Longformer is an XLM-R model that has been extended to allow sequence lengths up to 4096 tokens, instead of the regular 512. The model was pre-trained from the XLM-RoBERTa checkpoint using the Longformer [pre-training scheme](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) on the English WikiText-103 corpus.

The reason for this was to investigate methods for creating efficient Transformers for low-resource languages, such as Swedish, without the need to pre-train them on long-context datasets in each respective language. The trained model came as a result of a master thesis project at [Peltarion](https://peltarion.com/) and was fine-tuned on multilingual question-answering tasks, with code available [here](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer#xlm-r).

Since both XLM-R and Longformer are large models, it is recommended to run them with NVIDIA Apex (16-bit precision), a large GPU, and several gradient accumulation steps.

## How to Use

The model can be fine-tuned on a downstream task as usual, for instance QA.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

MAX_SEQUENCE_LENGTH = 4096
MODEL_NAME_OR_PATH = "markussagen/xlm-roberta-longformer-base-4096"

tokenizer = AutoTokenizer.from_pretrained(
    MODEL_NAME_OR_PATH,
    max_length=MAX_SEQUENCE_LENGTH,
    padding="max_length",
    truncation=True,
)

model = AutoModelForQuestionAnswering.from_pretrained(
    MODEL_NAME_OR_PATH,
    max_length=MAX_SEQUENCE_LENGTH,
)
```

## Training Procedure

The model has been trained on the WikiText-103 corpus, using a **48GB** GPU with the following training script and parameters. The model was pre-trained for 6000 iterations and took ~5 days. See the full [training script](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer/blob/main/scripts/finetune_qa_models.py) and [Github repo](https://github.com/MarkusSagen/Master-Thesis-Multilingual-Longformer) for more information.

```sh
wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip
unzip wikitext-103-raw-v1.zip
export DATA_DIR=./wikitext-103-raw

scripts/run_long_lm.py \
    --model_name_or_path xlm-roberta-base \
    --model_name xlm-roberta-to-longformer \
    --output_dir ./output \
    --logging_dir ./logs \
    --val_file_path $DATA_DIR/wiki.valid.raw \
    --train_file_path $DATA_DIR/wiki.train.raw \
    --seed 42 \
    --max_pos 4096 \
    --adam_epsilon 1e-8 \
    --warmup_steps 500 \
    --learning_rate 3e-5 \
    --weight_decay 0.01 \
    --max_steps 6000 \
    --evaluate_during_training \
    --logging_steps 50 \
    --eval_steps 50 \
    --save_steps 6000 \
    --max_grad_norm 1.0 \
    --per_device_eval_batch_size 2 \
    --per_device_train_batch_size 1 \
    --gradient_accumulation_steps 64 \
    --overwrite_output_dir \
    --fp16 \
    --do_train \
    --do_eval
```
PereLluis13/Wav2Vec2-Large-XLSR-53-catalan
2021-04-06T16:41:36.000Z
[ "pytorch", "wav2vec2", "ca", "dataset:common_voice", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "preprocessor_config.json", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
PereLluis13
30
transformers
---
language: ca
datasets:
- common_voice
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: Catalan XLSR Wav2Vec Large 53
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ca
      type: common_voice
      args: ca
    metrics:
    - name: Test WER
      type: wer
      value: 8.11
---

# Wav2Vec2-Large-XLSR-53-ca

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Catalan using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.

When using this model, make sure that your speech input is sampled at 16kHz.

## Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ca", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")

resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)

print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```

## Evaluation

The model can be evaluated as follows on the Catalan test data of Common Voice.

```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "ca", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/Wav2Vec2-Large-XLSR-53-catalan")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)

import jiwer

# Chunk WER computation due to memory issues, taken from https://huggingface.co/pcuenq/wav2vec2-large-xlsr-53-es
def chunked_wer(targets, predictions, chunk_size=None):
    if chunk_size is None:
        return jiwer.wer(targets, predictions)
    start = 0
    end = chunk_size
    H, S, D, I = 0, 0, 0, 0
    while start < len(targets):
        chunk_metrics = jiwer.compute_measures(targets[start:end], predictions[start:end])
        H = H + chunk_metrics["hits"]
        S = S + chunk_metrics["substitutions"]
        D = D + chunk_metrics["deletions"]
        I = I + chunk_metrics["insertions"]
        start += chunk_size
        end += chunk_size
    return float(S + D + I) / float(H + S + D)

print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_strings"], chunk_size=4000)))
```

**Test Result**: 8.11 %

## Training

The Common Voice `train` and `validation` datasets were used for training. Training was halted at the second epoch due to a memory issue and continued with a lower batch size, with gradient accumulation steps scaled to keep an effective batch size of 32 throughout training. The model was then trained for an additional 10 epochs in which half the male samples were pitched up.

The script used for training can be found [here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py). Slight modifications were done in order to speed up the ordering by length during training, which can be found [here](https://discuss.huggingface.co/t/spanish-asr-fine-tuning-wav2vec2/4586/6).

Another version trained for Catalan can be found [here](https://huggingface.co/ccoreilly/wav2vec2-large-xlsr-catala), which may be better than this one since it was trained with extra data and for a longer time. However, since it used different splits that include part of the Common Voice test set, this version can be used to get a baseline on the Common Voice dataset.
PereLluis13/wav2vec2-large-xlsr-53-greek
2021-03-24T16:10:16.000Z
[ "pytorch", "wav2vec2", "el", "dataset:common_voice", "dataset:CSS10", "transformers", "audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "license:apache-2.0" ]
automatic-speech-recognition
[ ".gitattributes", "README.md", "config.json", "optimizer.pt", "preprocessor_config.json", "pytorch_model.bin", "scheduler.pt", "special_tokens_map.json", "tokenizer_config.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
PereLluis13
22
transformers
--- language: el #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site. datasets: - common_voice #TODO: remove if you did not use the common voice dataset - CSS10 metrics: - wer tags: - audio - automatic-speech-recognition - speech - xlsr-fine-tuning-week license: apache-2.0 model-index: - name: Greek XLSR Wav2Vec2 Large 53 - CV + CSS10 #TODO: replace {human_readable_name} with a name of your model as it should appear on the leaderboard. It could be something like `Elgeish XLSR Wav2Vec2 Large 53` results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice el #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site. type: common_voice args: el #TODO: replace {lang_id} in your language code here. Make sure the code is one of the *ISO codes* of [this](https://huggingface.co/languages) site. metrics: - name: Test WER type: wer value: 20.89 #TODO (IMPORTANT): replace {wer_result_on_test} with the WER error rate you achieved on the common_voice test set. It should be in the format XX.XX (don't add the % sign here). **Please** remember to fill out this value after you evaluated your model, so that your model appears on the leaderboard. If you fill out this model card before evaluating your model, please remember to edit the model card afterward to fill in your value --- # Wav2Vec2-Large-XLSR-53-greek Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on greek using the [Common Voice](https://huggingface.co/datasets/common_voice) and [CSS10](https://github.com/Kyubyong/css10) datasets. When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "el", split="test") processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek") model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the greek test data of Common Voice. 
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re

test_dataset = load_dataset("common_voice", "el", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model = Wav2Vec2ForCTC.from_pretrained("PereLluis13/wav2vec2-large-xlsr-53-greek")
model.to("cuda")

chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# Preprocessing the datasets.
# We need to read the audio files as arrays.
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)

# Run batched inference on the GPU and collect the predicted transcriptions.
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 20.89 %

## Training

The Common Voice `train` and `validation` splits were used for training, together with the CSS10 dataset, which was added as an `extra` split. The sampling rate and format of the CSS10 files are different, hence the function `speech_file_to_array_fn` was changed to:

```python
import librosa
import soundfile as sf

def speech_file_to_array_fn(batch):
    try:
        # Reuse the cached 16 kHz wav if this file was already converted.
        speech_array, sampling_rate = sf.read(batch["path"] + ".wav")
    except Exception:
        # Otherwise resample the original CSS10 file to 16 kHz and cache it.
        speech_array, sampling_rate = librosa.load(batch["path"], sr=16000, res_type='zero_order_hold')
        sf.write(batch["path"] + ".wav", speech_array, sampling_rate, subtype='PCM_24')
    batch["speech"] = speech_array
    batch["sampling_rate"] = sampling_rate
    batch["target_text"] = batch["text"]
    return batch
```

as suggested by [Florian Zimmermeister](https://github.com/flozi00).

The script used for training can be found in [run_common_voice.py](examples/research_projects/wav2vec2/run_common_voice.py), still pending as a PR. The only changes are to `speech_file_to_array_fn`. Batch size was kept at 32 (using `gradient_accumulation_steps`) on one of the [OVH](https://www.ovh.com/) machines with a V100 GPU (thank you very much, [OVH](https://www.ovh.com/)). The model trained for 40 epochs: the first 20 used only the `train+validation` splits, and the `extra` split with the CSS10 data was added at the 20th epoch. A sketch of the batch-size setup follows below.
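The card only states the effective batch size of 32; a minimal sketch of how that might be expressed with `TrainingArguments`, where the per-device size, accumulation steps, and fp16 flag are assumptions rather than reported values:

```python
from transformers import TrainingArguments

# Hypothetical split of the stated effective batch size of 32 on one V100:
# 8 samples per device * 4 accumulation steps = 32.
training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-53-greek",
    per_device_train_batch_size=8,   # assumed
    gradient_accumulation_steps=4,   # assumed; 8 * 4 = 32 effective
    num_train_epochs=40,             # stated in the card
    fp16=True,                       # assumed for V100 training
)
```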
Perezzini/vinci
2021-03-27T16:02:28.000Z
[]
[ ".gitattributes" ]
Perezzini
0
Peter/kw_f_1
2021-05-10T13:44:01.000Z
[ "pytorch", "deberta", "text-classification", "transformers" ]
text-classification
[ ".gitattributes", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Peter
9
transformers
Peter/kw_s_1
2021-05-07T13:17:42.000Z
[ "pytorch", "t5", "seq2seq", "transformers", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer.json" ]
Peter
10
transformers
Pip/dubsky
2021-03-20T19:21:48.000Z
[]
[ ".gitattributes" ]
Pip
0
Pollawat/mt5-small-thai-qa-qg
2021-04-19T14:52:22.000Z
[ "pytorch", "mt5", "seq2seq", "thai", "th", "dataset:NSC2018", "dataset:iapp-wiki-qa-dataset", "dataset:XQuAD", "transformers", "question-generation", "question-answering", "license:mit", "text2text-generation" ]
question-answering
[ ".gitattributes", "README.md", "added_tokens.json", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
Pollawat
72
transformers
---
tags:
- question-generation
- question-answering
language:
- thai
- th
datasets:
- NSC2018
- iapp-wiki-qa-dataset
- XQuAD
license: mit
---

[Google's mT5](https://github.com/google-research/multilingual-t5)

This is a model for generating questions from Thai texts. It was fine-tuned on the NSC2018 corpus.

```python
from transformers import MT5Tokenizer, MT5ForConditionalGeneration

tokenizer = MT5Tokenizer.from_pretrained("Pollawat/mt5-small-thai-qa-qg")
model = MT5ForConditionalGeneration.from_pretrained("Pollawat/mt5-small-thai-qa-qg")

text = "กรุงเทพมหานคร เป็นเมืองหลวงและนครที่มีประชากรมากที่สุดของประเทศไทย เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ เป็นเมืองที่มีชื่อยาวที่สุดในโลก ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 5 ล้านคน"

input_ids = tokenizer.encode(text, return_tensors='pt')

beam_output = model.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    early_stopping=True
)

print(tokenizer.decode(beam_output[0]))
>> <pad> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด <ANS> ฝั่งพระนครและฝั่งธนบุรี</s>

print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
>> <extra_id_0> แม่น้ําเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งใด ฝั่งพระนครและฝั่งธนบุรี
```
Pollawat/mt5-small-thai-qg
2021-04-15T08:38:57.000Z
[ "pytorch", "mt5", "seq2seq", "thai", "th", "dataset:NSC2018", "transformers", "question-generation", "license:mit", "text2text-generation" ]
text2text-generation
[ ".gitattributes", "README.md", "config.json", "pytorch_model.bin", "special_tokens_map.json", "spiece.model", "tokenizer_config.json" ]
Pollawat
15
transformers
---
tags:
- question-generation
language:
- thai
- th
datasets:
- NSC2018
license: mit
---

[Google's mT5](https://github.com/google-research/multilingual-t5)

This is a model for generating questions from Thai texts. It was fine-tuned on the NSC2018 corpus.

```python
from transformers import T5Tokenizer, MT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("Pollawat/mt5-small-thai-qg")
model = MT5ForConditionalGeneration.from_pretrained("Pollawat/mt5-small-thai-qg")

text = "กรุงเทพมหานคร เป็นเมืองหลวงและนครที่มีประชากรมากที่สุดของประเทศไทย เป็นศูนย์กลางการปกครอง การศึกษา การคมนาคมขนส่ง การเงินการธนาคาร การพาณิชย์ การสื่อสาร และความเจริญของประเทศ เป็นเมืองที่มีชื่อยาวที่สุดในโลก ตั้งอยู่บนสามเหลี่ยมปากแม่น้ำเจ้าพระยา มีแม่น้ำเจ้าพระยาไหลผ่านและแบ่งเมืองออกเป็น 2 ฝั่ง คือ ฝั่งพระนครและฝั่งธนบุรี กรุงเทพมหานครมีพื้นที่ทั้งหมด 1,568.737 ตร.กม. มีประชากรตามทะเบียนราษฎรกว่า 5 ล้านคน ทำให้กรุงเทพมหานครเป็นเอกนคร (Primate City) จัด มีผู้กล่าวว่า กรุงเทพมหานครเป็น 'เอกนครที่สุดในโลก' เพราะมีประชากรมากกว่านครที่มีประชากรมากเป็นอันดับ 2 ถึง 40 เท่า[3]"

input_ids = tokenizer.encode(text, return_tensors='pt')

beam_output = model.generate(
    input_ids,
    max_length=50,
    num_beams=5,
    early_stopping=True
)

print(tokenizer.decode(beam_output[0], skip_special_tokens=True))
>> <extra_id_0>ของกรุงเทพมหานครเป็นเมืองหลวงของประเทศใด
```
PolyakovMaxim/GPTCHAT
2021-05-23T12:26:27.000Z
[]
[ ".gitattributes" ]
PolyakovMaxim
0
PolyakovMaxim/ModelGptTS
2021-05-21T11:00:42.000Z
[ "pytorch", "jax", "gpt2", "lm-head", "causal-lm", "transformers", "text-generation" ]
text-generation
[ ".gitattributes", "config.json", "eval_results.txt", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "training_args.bin", "vocab.json" ]
PolyakovMaxim
21
transformers
PolyakovMaxim/T
2021-02-06T12:20:19.000Z
[]
[ ".gitattributes" ]
PolyakovMaxim
0
Preeyank/roberta-base-education-domain
2021-05-20T12:17:05.000Z
[ "pytorch", "jax", "roberta", "masked-lm", "transformers", "fill-mask" ]
fill-mask
[ ".gitattributes", "README.md", "all_results.json", "config.json", "eval_results.json", "flax_model.msgpack", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "train_results.json", "trainer_state.json", "training_args.bin", "vocab.json" ]
Preeyank
301
transformers
PremalMatalia/Question_Answering_with_SQuAD_2.0_TFDistilBert
2020-12-29T09:54:40.000Z
[]
[ ".gitattributes" ]
PremalMatalia
0
PremalMatalia/distilbert-base-uncased-distilled-squad
2021-03-24T16:55:49.000Z
[]
[ ".gitattributes" ]
PremalMatalia
0
Primer/bart-squad2
2020-12-11T21:30:04.000Z
[ "pytorch", "bart", "question-answering", "en", "transformers" ]
question-answering
[ ".gitattributes", "README.md", "config.json", "merges.txt", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.json" ]
Primer
258
transformers
--- language: "en" --- # BART-Squad2 ## Model description BART for extractive (span-based) question answering, trained on Squad 2.0. F1 score of 87.4. ## Intended uses & limitations Unfortunately, the Huggingface auto-inference API won't run this model, so if you're attempting to try it through the input box above and it complains, don't be discouraged! #### How to use Here's a quick way to get question answering running locally: ```python from transformers import AutoTokenizer, AutoModelForQuestionAnswering tokenizer = AutoTokenizer.from_pretrained("Primer/bart-squad2") model = AutoModelForQuestionAnswering.from_pretrained("Primer/bart-squad2") model.to('cuda'); model.eval() def answer(question, text): seq = '<s>' + question + ' </s> </s> ' + text + ' </s>' tokens = tokenizer.encode_plus(seq, return_tensors='pt', padding='max_length', max_length=1024) input_ids = tokens['input_ids'].to('cuda') attention_mask = tokens['attention_mask'].to('cuda') start, end, _ = model(input_ids, attention_mask=attention_mask) start_idx = int(start.argmax().int()) end_idx = int(end.argmax().int()) print(tokenizer.decode(input_ids[0, start_idx:end_idx]).strip()) # ^^ it will be an empty string if the model decided "unanswerable" >>> question = "Where does Tom live?" >>> context = "Tom is an engineer in San Francisco." >>> answer(question, context) San Francisco ``` (Just drop the `.to('cuda')` stuff if running on CPU). #### Limitations and bias Unknown, no further evaluation has been performed. In a technical sense one big limitation is that it's 1.6G 😬 ## Training procedure `run_squad.py` with: |param|value| |---|---| |batch size|8| |max_seq_length|1024| |learning rate|1e-5| |epochs|2| Modified to freeze shared parameters and encoder embeddings.
Priscila/latentbert
2021-04-19T18:22:02.000Z
[]
[ ".gitattributes" ]
Priscila
0
Priscila/teste
2021-04-19T18:18:30.000Z
[]
[ ".gitattributes" ]
Priscila
0
ProsusAI/finbert
2021-05-18T21:54:10.000Z
[ "pytorch", "jax", "bert", "text-classification", "en", "arxiv:1908.10063", "transformers", "financial-sentiment-analysis", "sentiment-analysis" ]
text-classification
[ ".gitattributes", "README.md", "config.json", "flax_model.msgpack", "pytorch_model.bin", "special_tokens_map.json", "tokenizer_config.json", "vocab.txt" ]
ProsusAI
307,488
transformers
--- language: "en" tags: - financial-sentiment-analysis - sentiment-analysis widget: - text: "Stocks rallied and the British pound gained." --- FinBERT is a pre-trained NLP model to analyze sentiment of financial text. It is built by further training the BERT language model in the finance domain, using a large financial corpus and thereby fine-tuning it for financial sentiment classification. [Financial PhraseBank](https://www.researchgate.net/publication/251231107_Good_Debt_or_Bad_Debt_Detecting_Semantic_Orientations_in_Economic_Texts) by Malo et al. (2014) is used for fine-tuning. For more details, please see [FinBERT: Financial Sentiment Analysis with Pre-trained Language Models](https://arxiv.org/abs/1908.10063). The model will give softmax outputs for three labels: positive, negative or neutral.