pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 205 values) | text (stringlengths 0–18.3M) | metadata (stringlengths 2–1.07B) | id (stringlengths 5–122) | last_modified (null) | tags (listlengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) |
---|---|---|---|---|---|---|---|---|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4229
- Wer: 0.2386
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
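The hyperparameters above might be expressed with the `transformers` `TrainingArguments` API roughly as follows. This is a sketch only: the author's actual training script is not included in the card, `output_dir` is a placeholder, and the Adam betas/epsilon are left at their default values.
```python
# Rough sketch mapping the listed hyperparameters onto TrainingArguments.
# Not the author's actual script; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```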
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5486 | 4.0 | 500 | 2.1672 | 0.9876 |
| 0.6819 | 8.0 | 1000 | 0.4502 | 0.3301 |
| 0.2353 | 12.0 | 1500 | 0.4352 | 0.2841 |
| 0.1427 | 16.0 | 2000 | 0.4237 | 0.2584 |
| 0.0945 | 20.0 | 2500 | 0.4409 | 0.2545 |
| 0.0671 | 24.0 | 3000 | 0.4257 | 0.2413 |
| 0.0492 | 28.0 | 3500 | 0.4229 | 0.2386 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]} | Rafat/wav2vec2-base-timit-demo-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Raghdan/training | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | RahulKadam0909/Transformers-sentiment-analysis | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/Kannada-LM-DeBERTa | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/Kannada-LM-RoBERTa | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/Malayalam-LM-DeBERTa | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/Malayalam-LM-Electra | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/Malayalam-LM-RoBERTa | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/Tamil-LM-DeBERTa | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/Tamil-LM-Electra | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/Tamil-LM-RoBERTa | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/Telugu-LM-DeBERTa | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/Telugu-LM-Electra | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/Telugu-LM-RoBERTa | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahulRaman/kannada-LM-Electra | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RahuramThiagarajan/rass | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | Rai220/test1 | null | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {"license": "afl-3.0"} | Raid/Hh | null | [
"license:afl-3.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Rainiefantasy/GO1984_BERTUncased | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Rainiefantasy/GO1984_DistilBERT | null | [
"transformers",
"pytorch",
"distilbert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Rainiefantasy/HuggingFace_Model | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Raintree/Wav2Vec2_AMED16K_THIRD | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4526
- Wer: 0.3411
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.7503 | 4.0 | 500 | 2.4125 | 1.0006 |
| 0.9595 | 8.0 | 1000 | 0.4833 | 0.4776 |
| 0.3018 | 12.0 | 1500 | 0.4333 | 0.4062 |
| 0.1751 | 16.0 | 2000 | 0.4474 | 0.3697 |
| 0.1288 | 20.0 | 2500 | 0.4445 | 0.3558 |
| 0.1073 | 24.0 | 3000 | 0.4695 | 0.3464 |
| 0.0816 | 28.0 | 3500 | 0.4526 | 0.3411 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
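A minimal inference sketch (not part of the original card): load the fine-tuned checkpoint with the ASR pipeline and transcribe a local file. The base wav2vec2 model expects 16 kHz audio, and the file name below is a placeholder.
```python
# Sketch: transcribe a local audio file with the fine-tuned checkpoint.
# "speech_sample.wav" is a placeholder; input should be 16 kHz mono audio.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Raintree/wav2vec2-base-timit-demo-colab")
print(asr("speech_sample.wav")["text"])
```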
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]} | Raintree/wav2vec2-base-timit-demo-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Raintree/wav2vec2-data-16K | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-sports-titles
This model is a Pegasus model fine-tuned on **sports news articles scraped from the internet (for educational purposes only)**. It can generate titles for sports articles. Try it out using the inference API.
## Model description
A Pegasus model tuned for generating scientific titles has been further fine-tuned to generate titles for sports articles. During training, articles on **Tennis, Football (Soccer), Cricket, Athletics and Rugby** were used. I experimented with training the tokenizer from scratch, but it did not give good results compared to the pre-trained tokenizer.
## Usage
```python
from transformers import pipeline
#Feel free to play around with the generation parameters.
#Reduce the beam width for faster inference
#Note that the maximum length for the generated titles is 64
gen_kwargs = {"length_penalty": 0.6, "num_beams":4, "num_return_sequences": 4,"num_beam_groups":4,"diversity_penalty":2.0}
pipe = pipeline("summarization", model="RajSang/pegasus-sports-titles")
#Change the article according to your wish
article="""
Coutinho was just about to be introduced by Villa boss Gerrard midway through the second half when Bruno Fernandes slammed home
his second goal of the game off the underside of the bar. But the Brazilian proved the catalyst for a memorable response.
First he drove at the United defence, helping to create the space which Jacob Ramsey exploited to halve the deficit. Then Ramsey slid over an excellent
cross from the left which Raphael Varane was unable to intercept as he slid back, leaving Coutinho to finish into an empty net.
The goal brought celebrations at both ends of the pitch as Emiliano Martinez also went into the crowd in relief - it was the Argentine's horrible sixth-minute error that had gifted Fernandes the visitors' opener.
Given his background - with Liverpool, Barcelona and Bayern Munich - Coutinho is a bold loan signing by Villa, and underlines the pedigree of the man they appointed as manager in November.
Gerrard is not at Villa to learn how to avoid relegation.
His demands remain as high as they were as a player and Coutinho's arrival is an example of that.
Villa are a better team since Gerrard's arrival and, after a sluggish start against opponents they dominated but lost to in the FA Cup five days ago, they grew into the game.
The club's other newboy, Lucas Digne, was among those denied by United keeper David de Gea at the end of the first half - in unorthodox fashion, with his knees.
Ollie Watkins did not really test the Spain keeper when Villa broke after Edinson Cavani lost possession in his own half. However, Emi Buendia certainly did with a near-post header. Rooted to his line, De Gea's reactions were up to the job as he beat Buendia's effort away.
When De Gea produced more saves after half-time to deny Ramsey and Digne again, it appeared the image of the night for Villa would be midfielder Morgan Sanson kicking a drinks bottle in fury after his error in gifting Fred possession to set up Fernandes for the visitors' second had been followed immediately by his substitution.
However, as it was the prelude to Coutinho's arrival, it was the moment that changed the course of the game - and the acclaim for the Brazilian at the final whistle indicated Villa's fans are already firmly behind him.
"""
result=pipe(article, **gen_kwargs)[0]["summary_text"]
print(result)
''' Output
Title 1 :
Coutinho's arrival sparks Villa comeback
Title 2 :
Philippe Coutinho marked his debut for Aston Villa with a goal and an assist as Steven Gerrard's side came from two goals down to draw with Manchester United.
Title 3 :
Steven Gerrard's first game in charge of Aston Villa ended in a dramatic draw against Manchester United - but it was the arrival of Philippe Coutinho that marked the night.
Title 4 :
Liverpool loanee Philippe Coutinho marked his first appearance for Aston Villa with two goals as Steven Gerrard's side came from two goals down to draw 2-2.'''
```
## Training procedure
While training, **short titles were combined with the subtitles for the articles to improve the quality of the generated titles and the subtitles were removed from the main body of the articles.**
## Limitations
In rare cases, if the opening few lines of a passage/article are descriptive enough, the model often just copies those lines instead of looking for information further down the article, which may not be desirable in some cases.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 2
### Training results
**Rouge1: 38.2315**
**Rouge2: 18.6598**
**RougeL: 31.7393**
**RougeLsum: 31.7086**
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| {"language": "en", "tags": ["generated_from_trainer"], "widget": [{"text": "Coutinho was just about to be introduced by Villa boss Gerrard midway through the second half when Bruno Fernandes slammed home his second goal of the game off the underside of the bar. But the Brazilian proved the catalyst for a memorable response. First he drove at the United defence, helping to create the space which Jacob Ramsey exploited to halve the deficit. Then Ramsey slid over an excellent cross from the left which Raphael Varane was unable to intercept as he slid back, leaving Coutinho to finish into an empty net. The goal brought celebrations at both ends of the pitch as Emiliano Martinez also went into the crowd in relief - it was the Argentine's horrible sixth-minute error that had gifted Fernandes the visitors' opener. Given his background - with Liverpool, Barcelona and Bayern Munich - Coutinho is a bold loan signing by Villa, and underlines the pedigree of the man they appointed as manager in November. Gerrard is not at Villa to learn how to avoid relegation. His demands remain as high as they were as a player and Coutinho's arrival is an example of that. Villa are a better team since Gerrard's arrival and, after a sluggish start against opponents they dominated but lost to in the FA Cup five days ago, they grew into the game. The club's other newboy, Lucas Digne, was among those denied by United keeper David de Gea at the end of the first half - in unorthodox fashion, with his knees. Ollie Watkins did not really test the Spain keeper when Villa broke after Edinson Cavani lost possession in his own half. However, Emi Buendia certainly did with a near-post header. Rooted to his line, De Gea's reactions were up to the job as he beat Buendia's effort away. When De Gea produced more saves after half-time to deny Ramsey and Digne again, it appeared the image of the night for Villa would be midfielder Morgan Sanson kicking a drinks bottle in fury after his error in gifting Fred possession to set up Fernandes for the visitors' second had been followed immediately by his substitution. However, as it was the prelude to Coutinho's arrival, it was the moment that changed the course of the game - and the acclaim for the Brazilian at the final whistle indicated Villa's fans are already firmly behind him."}]} | RajSang/pegasus-sports-titles | null | [
"transformers",
"pytorch",
"tensorboard",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"en",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# NepaliBERT(Phase 1)
NEPALIBERT is a state-of-the-art language model for Nepali based on BERT. The model is trained using a masked language modeling (MLM) objective.
# Loading the model and tokenizer
1. Clone the model repo
```
git lfs install
git clone https://huggingface.co/Rajan/NepaliBERT
```
2. Loading the Tokenizer
```
from transformers import BertTokenizer
vocab_file_dir = './NepaliBERT/'
tokenizer = BertTokenizer.from_pretrained(vocab_file_dir,
strip_accents=False,
clean_text=False )
```
3. Loading the model:
```
from transformers import BertForMaskedLM
model = BertForMaskedLM.from_pretrained('./NepaliBERT')
```
The easiest way to check whether our language model is learning anything interesting is via the `FillMaskPipeline`.
Pipelines are simple wrappers around tokenizers and models, and the 'fill-mask' one lets you input a sequence containing a masked token (here, [MASK]) and returns a list of the most probable filled sequences, with their probabilities.
```
from transformers import pipeline
fill_mask = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
```
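For example, a hypothetical call (the sentence below is illustrative; any Nepali text containing the [MASK] token will do):
```
# Hypothetical example: replace the sentence with any Nepali text containing [MASK].
results = fill_mask("मेरो नाम [MASK] हो ।")
for prediction in results:
    print(prediction["sequence"], prediction["score"])
```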
For more info visit the [GITHUB🤗](https://github.com/R4j4n/NepaliBERT) | {} | Rajan/NepaliBERT | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {"license": "apache-2.0"} | Rajan/NepaliPos | null | [
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null |
https://github.com/R4j4n/Nepali-Word2Vec-from-scratch
How to clone:
```
git lfs install
git clone https://huggingface.co/Rajan/Nepali_Word2Vec
```
| {"license": "mit"} | Rajan/Nepali_Word2Vec | null | [
"license:mit",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Rajan1/awsdsadd | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
image-classification | transformers |
# FacialEmoRecog
Create your own image classifier for **anything** by running this repo.
Reported metric: accuracy = 0.9189583659172058 (image-classification task).
## Example Images | {"language": ["en"], "license": "mit", "tags": ["image CLassification", "pytorch"], "datasets": ["Jeneral/fer2013"], "metrics": ["accuracy"], "inference": true, "pipeline_tag": "image-classification"} | Rajaram1996/FacialEmoRecog | null | [
"transformers",
"pytorch",
"vit",
"image-classification",
"image CLassification",
"en",
"dataset:Jeneral/fer2013",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
audio-classification | transformers |
Working example of using the pretrained model to predict emotion in a local audio file:
```
def predict_emotion_hubert(audio_file):
    """ inspired by an example from https://github.com/m3hrdadfi/soxan """
    from audio_models import HubertForSpeechClassification
    from transformers import Wav2Vec2FeatureExtractor, AutoConfig
    import torch.nn.functional as F
    import torch
    import numpy as np
    from pydub import AudioSegment

    model = HubertForSpeechClassification.from_pretrained("Rajaram1996/Hubert_emotion")  # Downloading: 362M
    feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/hubert-base-ls960")
    sampling_rate = 16000  # defined by the model; must convert mp3 to this rate.
    config = AutoConfig.from_pretrained("Rajaram1996/Hubert_emotion")

    def speech_file_to_array(path, sampling_rate):
        # using torchaudio...
        # speech_array, _sampling_rate = torchaudio.load(path)
        # resampler = torchaudio.transforms.Resample(_sampling_rate, sampling_rate)
        # speech = resampler(speech_array).squeeze().numpy()
        sound = AudioSegment.from_file(path)
        sound = sound.set_frame_rate(sampling_rate)
        sound_array = np.array(sound.get_array_of_samples())
        return sound_array

    sound_array = speech_file_to_array(audio_file, sampling_rate)
    inputs = feature_extractor(sound_array, sampling_rate=sampling_rate, return_tensors="pt", padding=True)
    inputs = {key: inputs[key].to("cpu").float() for key in inputs}

    with torch.no_grad():
        logits = model(**inputs).logits

    scores = F.softmax(logits, dim=1).detach().cpu().numpy()[0]
    outputs = [
        {"emo": config.id2label[i], "score": round(score * 100, 1)}
        for i, score in enumerate(scores)
    ]
    # keep the two highest-scoring emotions, dropping any zero scores
    return [row for row in sorted(outputs, key=lambda x: x["score"], reverse=True) if row["score"] != 0.0][:2]
```
```
result = predict_emotion_hubert("male-crying.mp3")
>>> result
[{'emo': 'male_sad', 'score': 91.0}, {'emo': 'male_fear', 'score': 4.8}]
```
| {"tags": ["speech", "audio", "HUBert"], "inference": true, "pipeline_tag": "audio-classification"} | Rajaram1996/Hubert_emotion | null | [
"transformers",
"pytorch",
"hubert",
"speech",
"audio",
"HUBert",
"audio-classification",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
# Wav2Vec2-Large-XLSR-53-tamil
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Tamil using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "ta", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Tamil test data of Common Voice.
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "ta", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
model = Wav2Vec2ForCTC.from_pretrained("Rajaram1996/wav2vec2-large-xlsr-53-tamil")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 69.76 % | {"language": ["ta"], "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "Rajaram1996/wav2vec2-large-xlsr-53-tamil", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ta", "type": "common_voice", "args": "ta"}, "metrics": [{"type": "wer", "value": 69.76, "name": "Test WER"}]}]}]} | Rajaram1996/wav2vec2-large-xlsr-53-tamil | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"hf-asr-leaderboard",
"ta",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Rajeev064/bert2gpt2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Rajnish/summary | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Rajnish/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers | # Model Card for roberta-base-on-cuad
# Model Details
## Model Description
- **Developed by:** Mohammed Rakib
- **Shared by [Optional]:** More information needed
- **Model type:** Question Answering
- **Language(s) (NLP):** en
- **License:** MIT
- **Related Models:**
- **Parent Model:** RoBERTa
- **Resources for more information:**
- GitHub Repo: [defactolaw](https://github.com/afra-tech/defactolaw)
- Associated Paper: [An Open Source Contractual Language Understanding Application Using Machine Learning](https://aclanthology.org/2022.lateraisse-1.6/)
# Uses
## Direct Use
This model can be used for the task of Question Answering on Legal Documents.
# Training Details
Read: [An Open Source Contractual Language Understanding Application Using Machine Learning](https://aclanthology.org/2022.lateraisse-1.6/)
for detailed information on training procedure, dataset preprocessing and evaluation.
## Training Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
See [CUAD dataset card](https://huggingface.co/datasets/cuad) for more information.
### Factors
### Metrics
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
Used V100/P100 from Google Colab Pro
### Software
Python, Transformers
# Citation
**BibTeX:**
```
@inproceedings{nawar-etal-2022-open,
title = "An Open Source Contractual Language Understanding Application Using Machine Learning",
author = "Nawar, Afra and
Rakib, Mohammed and
Hai, Salma Abdul and
Haq, Sanaulla",
booktitle = "Proceedings of the First Workshop on Language Technology and Resources for a Fair, Inclusive, and Safe Society within the 13th Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lateraisse-1.6",
pages = "42--50",
abstract = "Legal field is characterized by its exclusivity and non-transparency. Despite the frequency and relevance of legal dealings, legal documents like contracts remains elusive to non-legal professionals for the copious usage of legal jargon. There has been little advancement in making legal contracts more comprehensible. This paper presents how Machine Learning and NLP can be applied to solve this problem, further considering the challenges of applying ML to the high length of contract documents and training in a low resource environment. The largest open-source contract dataset so far, the Contract Understanding Atticus Dataset (CUAD) is utilized. Various pre-processing experiments and hyperparameter tuning have been carried out and we successfully managed to eclipse SOTA results presented for models in the CUAD dataset trained on RoBERTa-base. Our model, A-type-RoBERTa-base achieved an AUPR score of 46.6{\%} compared to 42.6{\%} on the original RoBERT-base. This model is utilized in our end to end contract understanding application which is able to take a contract and highlight the clauses a user is looking to find along with it{'}s descriptions to aid due diligence before signing. Alongside digital, i.e. searchable, contracts the system is capable of processing scanned, i.e. non-searchable, contracts using tesseract OCR. This application is aimed to not only make contract review a comprehensible process to non-legal professionals, but also to help lawyers and attorneys more efficiently review contracts.",
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Mohammed Rakib in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
tokenizer = AutoTokenizer.from_pretrained("Rakib/roberta-base-on-cuad")
model = AutoModelForQuestionAnswering.from_pretrained("Rakib/roberta-base-on-cuad")
```
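For a quick end-to-end check, the same checkpoint can also be wrapped in the `question-answering` pipeline. This is a sketch only; the question and contract snippet below are illustrative and not taken from CUAD.
```python
from transformers import pipeline

# Sketch only: the question and context below are illustrative, not from CUAD.
qa = pipeline("question-answering", model="Rakib/roberta-base-on-cuad")
result = qa(
    question="What is the governing law of this agreement?",
    context=(
        "This Agreement shall be governed by and construed in accordance "
        "with the laws of the State of New York."
    ),
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```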
</details> | {"language": ["en"], "license": "mit", "library_name": "transformers", "tags": ["legal-contract-review", "roberta", "cuad"], "datasets": ["cuad"], "pipeline_tag": "question-answering"} | Rakib/roberta-base-on-cuad | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"legal-contract-review",
"cuad",
"en",
"dataset:cuad",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Raksha297/FirstRepo | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Rakshith/HuggingFace | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | RamadasK7/ramadas-t5-squad-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | RamadasK7/t5-small-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | RameshArvind/roberta_long_answer_nq | null | [
"transformers",
"pytorch",
"jax",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | RamiEbeid/wav2vec2-base-timit-demo-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Ramiz/lyrics-based-genre-classification | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Ramnathan/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | {} | Ramnathan/wav2vec2 | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Ramo828/ramo | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Random0-0/DialoGPT-large-Azomekern | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Random0-0/DialoGPT-med-Azomekern | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Random0-0/DialoGPT-medi-Azomekern | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Random0-0/DialoGPT-medium-Azomekern | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Random0-0/DialoGPT-small-Azomekern | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | {} | Ranger/Dial0GPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Ranjith/bert-based-japanese-sentiment | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
question-answering | transformers |
GreatModel does not solve any NLP problem; it exists for exercise purposes only.
| {} | RaphBL/great-model | null | [
"transformers",
"pytorch",
"camembert",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1323
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.8535 | 1.0 | 661 | 2.0684 |
| 1.5385 | 2.0 | 1322 | 2.0954 |
| 1.2312 | 3.0 | 1983 | 2.1323 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad_v2"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]} | Raphaelg9/distilbert-base-uncased-finetuned-squad | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Raphaelg9/distilbert-base-uncased-finetuned-squad2-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Rick Morty DialoGPT Model | {"tags": ["conversational"]} | Rashid11/DialoGPT-small-rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | Rathod/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-thai-ASR
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6108
- Wer: 0.5636
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.1123 | 2.65 | 400 | 3.3946 | 1.0002 |
| 1.5734 | 5.3 | 800 | 0.6881 | 0.7290 |
| 0.5934 | 7.94 | 1200 | 0.5789 | 0.6402 |
| 0.4059 | 10.59 | 1600 | 0.5496 | 0.5976 |
| 0.3136 | 13.24 | 2000 | 0.6109 | 0.5863 |
| 0.2546 | 15.89 | 2400 | 0.6113 | 0.5865 |
| 0.2184 | 18.54 | 2800 | 0.6108 | 0.5636 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-thai-ASR", "results": []}]} | Rattana/wav2vec2-thai-ASR | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-thai-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-thai-colab", "results": []}]} | Rattana/wav2vec2-thai-colab | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers | {} | Ratul/sci_ner | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | RaviKesanakurti/abc | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Ravika/roberta-base-finetuned | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Raviraj/Raviraj-bert | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Raviraj/bert-base-multilingual-cased-bert-mlm-eval | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Raviraj/bert_multlang_cased | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Raviraj/my-hindi-fraudbert-tokenizer | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Raviraj/my-new-shiny-tokenizer | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Raviraj/temp | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers |
This model is fine-tuned for masked language modeling.
I used the xlm-roberta-large model and continued pre-training it on over half a million tokens of
Hindi fraud-call transcripts.
You can load this model with the from_pretrained() method from the transformers library.
Please note that while it works well on general Hindi, its results on native-language dialogues are enhanced
compared to general-purpose models. | {} | Raviraj/xlm-roberta-large-MLMfintune-hi-fraudcall | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Raviraj/xlm-roberta-large-fintune-hi-fraudcall-tok | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Raychanan/bert-base-chinese-FineTuned-Binary-Best | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | DO NOT USE THIS | {} | Raychanan/chinese-roberta-wwm-ext-FineTuned-Binary | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | Raychanan/chinese-roberta-wwm-ext-FineTuned | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Raychanan/model_name | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Raychanan/your_model_name | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QAIDeptModel
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv2](https://huggingface.co/aubmindlab/bert-base-arabertv2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 105 | 2.6675 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "model-index": [{"name": "QAIDeptModel", "results": []}]} | Razan/QAIDeptModel | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Razor/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Razor/distilgpt2-finetuned-wikitext2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | RealPokecraft/Daisy | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | RealShreyas/ATSC | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Realdeo/indobert-base-p1-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
zero-shot-classification | transformers |
# bert-base-spanish-wwm-cased-xnli
**UPDATE, 15.10.2021: Check out our new zero-shot classifiers, much more lightweight and even outperforming this one: [zero-shot SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small) and [zero-shot SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium).**
## Model description
This model is a fine-tuned version of the [spanish BERT model](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) with the Spanish portion of the XNLI dataset. You can have a look at the [training script](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli/blob/main/zeroshot_training_script.py) for details of the training.
### How to use
You can use this model with Hugging Face's [zero-shot-classification pipeline](https://discuss.huggingface.co/t/new-pipeline-for-zero-shot-text-classification/681):
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/bert-base-spanish-wwm-cased-xnli")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['cultura', 'sociedad', 'economia', 'salud', 'deportes'],
'scores': [0.38897448778152466,
0.22997373342514038,
0.1658431738615036,
0.1205764189362526,
0.09463217109441757]}
"""
```
## Eval results
Accuracy for the test set:
| | XNLI-es |
|-----------------------------|---------|
|bert-base-spanish-wwm-cased-xnli | 79.9% | | {"language": "es", "license": "mit", "tags": ["zero-shot-classification", "nli", "pytorch"], "datasets": ["xnli"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "El autor se perfila, a los 50 a\u00f1os de su muerte, como uno de los grandes de su siglo", "candidate_labels": "cultura, sociedad, economia, salud, deportes"}]} | Recognai/bert-base-spanish-wwm-cased-xnli | null | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"es",
"dataset:xnli",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers |
# DistilBERT base multilingual model Spanish subset (cased)
This model is the Spanish extract of `distilbert-base-multilingual-cased` (https://huggingface.co/distilbert-base-multilingual-cased), a distilled version of the [BERT base multilingual model](bert-base-multilingual-cased). This model is cased: it does make a difference between english and English.
It uses the extraction method proposed by Geotrend described in https://github.com/Geotrend-research/smaller-transformers.
The resulting model has the same architecture as DistilmBERT: 6 layers, a hidden size of 768 and 12 attention heads, with a total of **63M parameters** (compared to 134M parameters for DistilmBERT).
The goal of this model is to reduce even further the size of the `distilbert-base-multilingual` multilingual model by selecting only most frequent tokens for Spanish, reducing the size of the embedding layer. For more details visit the paper from the Geotrend team: Load What You Need: Smaller Versions of Multilingual BERT. | {"language": "es", "license": "apache-2.0", "datasets": ["wikipedia"], "widget": [{"text": "Mi nombre es Juan y vivo en [MASK]."}]} | Recognai/distilbert-base-es-multilingual-cased | null | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"fill-mask",
"es",
"dataset:wikipedia",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | transformers |
# SELECTRA: A Spanish ELECTRA
SELECTRA is a Spanish pre-trained language model based on [ELECTRA](https://github.com/google-research/electra).
We release a `small` and `medium` version with the following configuration:
| Model | Layers | Embedding/Hidden Size | Params | Vocab Size | Max Sequence Length | Cased |
| --- | --- | --- | --- | --- | --- | --- |
| [SELECTRA small](https://huggingface.co/Recognai/selectra_small) | 12 | 256 | 22M | 50k | 512 | True |
| **SELECTRA medium** | **12** | **384** | **41M** | **50k** | **512** | **True** |
**SELECTRA small (medium) is about 5 (3) times smaller than BETO but achieves comparable results** (see Metrics section below).
## Usage
From the original [ELECTRA model card](https://huggingface.co/google/electra-small-discriminator): "ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN."
The discriminator should therefore activate the logit corresponding to the fake input token, as the following example demonstrates:
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
discriminator = ElectraForPreTraining.from_pretrained("Recognai/selectra_small")
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small")
sentence_with_fake_token = "Estamos desayunando pan rosa con tomate y aceite de oliva."
inputs = tokenizer.encode(sentence_with_fake_token, return_tensors="pt")
logits = discriminator(inputs).logits.tolist()[0]
print("\t".join(tokenizer.tokenize(sentence_with_fake_token)))
print("\t".join(map(lambda x: str(x)[:4], logits[1:-1])))
"""Output:
Estamos desayun ##ando pan rosa con tomate y aceite de oliva .
-3.1 -3.6 -6.9 -3.0 0.19 -4.5 -3.3 -5.1 -5.7 -7.7 -4.4 -4.2
"""
```
However, you probably want to use this model to fine-tune it on a downstream task.
We provide models fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which can be used together with the zero-shot classification pipeline:
- [Zero-shot SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small)
- [Zero-shot SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium)
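For example, the medium variant can be used like this (a sketch; the sentence, candidate labels and hypothesis template are illustrative):
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="Recognai/zeroshot_selectra_medium")
classifier(
    "El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
    candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
    hypothesis_template="Este ejemplo es {}.",
)
```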
## Metrics
We fine-tune our models on 3 different down-stream tasks:
- [XNLI](https://huggingface.co/datasets/xnli)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [CoNLL2002 - NER](https://huggingface.co/datasets/conll2002)
For each task, we conduct 5 trials and state the mean and standard deviation of the metrics in the table below.
To compare our results to other Spanish language models, we provide the same metrics taken from the [evaluation table](https://github.com/PlanTL-SANIDAD/lm-spanish#evaluation-) of the [Spanish Language Model](https://github.com/PlanTL-SANIDAD/lm-spanish) repo.
| Model | CoNLL2002 - NER (f1) | PAWS-X (acc) | XNLI (acc) | Params |
| --- | --- | --- | --- | --- |
| SELECTRA small | 0.865 +- 0.004 | 0.896 +- 0.002 | 0.784 +- 0.002 | 22M |
| SELECTRA medium | 0.873 +- 0.003 | 0.896 +- 0.002 | 0.804 +- 0.002 | 41M |
| | | | | |
| [mBERT](https://huggingface.co/bert-base-multilingual-cased) | 0.8691 | 0.8955 | 0.7876 | 178M |
| [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) | 0.8759 | 0.9000 | 0.8130 | 110M |
| [RoBERTa-b](https://huggingface.co/BSC-TeMU/roberta-base-bne) | 0.8851 | 0.9000 | 0.8016 | 125M |
| [RoBERTa-l](https://huggingface.co/BSC-TeMU/roberta-large-bne) | 0.8772 | 0.9060 | 0.7958 | 355M |
| [Bertin](https://huggingface.co/bertin-project/bertin-roberta-base-spanish/tree/v1-512) | 0.8835 | 0.8990 | 0.7890 | 125M |
| [ELECTRICIDAD](https://huggingface.co/mrm8488/electricidad-base-discriminator) | 0.7954 | 0.9025 | 0.7878 | 109M |
Some details of our fine-tuning runs:
- epochs: 5
- batch-size: 32
- learning rate: 1e-4
- warmup proportion: 0.1
- linear learning rate decay
- layerwise learning rate decay
For all the details, check out our [selectra repo](https://github.com/recognai/selectra).
## Training
We pre-trained our SELECTRA models on the Spanish portion of the [Oscar](https://huggingface.co/datasets/oscar) dataset, which is about 150GB in size.
Each model version is trained for 300k steps, with a warm restart of the learning rate after the first 150k steps.
Some details of the training:
- steps: 300k
- batch-size: 128
- learning rate: 5e-4
- warmup steps: 10k
- linear learning rate decay
- TPU cores: 8 (v2-8)
For all details, check out our [selectra repo](https://github.com/recognai/selectra).
**Note:** Due to a misconfiguration in the pre-training scripts, the embeddings of vocabulary tokens containing an accent were not optimized. If you fine-tune this model on a downstream task, you might consider using a tokenizer that does not strip the accents:
```python
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small", strip_accents=False)
```
## Motivation
Despite the abundance of excellent Spanish language models (BETO, BSC-BNE, Bertin, ELECTRICIDAD, etc.), we felt there was still a lack of distilled or compact Spanish language models, and a lack of comparisons between those and their bigger siblings.
## Acknowledgment
This research was supported by the Google TPU Research Cloud (TRC) program.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Javier Lopez ([GitHub](https://github.com/javispp))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon)) | {"language": ["es"], "license": "apache-2.0", "datasets": ["oscar"], "thumbnail": "url to a thumbnail used in social sharing"} | Recognai/selectra_medium | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"es",
"dataset:oscar",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | transformers |
# SELECTRA: A Spanish ELECTRA
SELECTRA is a Spanish pre-trained language model based on [ELECTRA](https://github.com/google-research/electra).
We release a `small` and `medium` version with the following configuration:
| Model | Layers | Embedding/Hidden Size | Params | Vocab Size | Max Sequence Length | Cased |
| --- | --- | --- | --- | --- | --- | --- |
| **SELECTRA small** | **12** | **256** | **22M** | **50k** | **512** | **True** |
| [SELECTRA medium](https://huggingface.co/Recognai/selectra_medium) | 12 | 384 | 41M | 50k | 512 | True |
**SELECTRA small (medium) is about 5 (3) times smaller than BETO but achieves comparable results** (see Metrics section below).
## Usage
From the original [ELECTRA model card](https://huggingface.co/google/electra-small-discriminator): "ELECTRA models are trained to distinguish "real" input tokens vs "fake" input tokens generated by another neural network, similar to the discriminator of a GAN."
The discriminator should therefore activate the logit corresponding to the fake input token, as the following example demonstrates:
```python
from transformers import ElectraForPreTraining, ElectraTokenizerFast
discriminator = ElectraForPreTraining.from_pretrained("Recognai/selectra_small")
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small")
sentence_with_fake_token = "Estamos desayunando pan rosa con tomate y aceite de oliva."
inputs = tokenizer.encode(sentence_with_fake_token, return_tensors="pt")
logits = discriminator(inputs).logits.tolist()[0]
print("\t".join(tokenizer.tokenize(sentence_with_fake_token)))
print("\t".join(map(lambda x: str(x)[:4], logits[1:-1])))
"""Output:
Estamos desayun ##ando pan rosa con tomate y aceite de oliva .
-3.1 -3.6 -6.9 -3.0 0.19 -4.5 -3.3 -5.1 -5.7 -7.7 -4.4 -4.2
"""
```
However, you probably want to use this model to fine-tune it on a downstream task.
We provide models fine-tuned on the [XNLI dataset](https://huggingface.co/datasets/xnli), which can be used together with the zero-shot classification pipeline:
- [Zero-shot SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small)
- [Zero-shot SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium)
## Metrics
We fine-tune our models on 3 different down-stream tasks:
- [XNLI](https://huggingface.co/datasets/xnli)
- [PAWS-X](https://huggingface.co/datasets/paws-x)
- [CoNLL2002 - NER](https://huggingface.co/datasets/conll2002)
For each task, we conduct 5 trials and state the mean and standard deviation of the metrics in the table below.
To compare our results to other Spanish language models, we provide the same metrics taken from the [evaluation table](https://github.com/PlanTL-SANIDAD/lm-spanish#evaluation-) of the [Spanish Language Model](https://github.com/PlanTL-SANIDAD/lm-spanish) repo.
| Model | CoNLL2002 - NER (f1) | PAWS-X (acc) | XNLI (acc) | Params |
| --- | --- | --- | --- | --- |
| SELECTRA small | 0.865 +- 0.004 | 0.896 +- 0.002 | 0.784 +- 0.002 | 22M |
| SELECTRA medium | 0.873 +- 0.003 | 0.896 +- 0.002 | 0.804 +- 0.002 | 41M |
| | | | | |
| [mBERT](https://huggingface.co/bert-base-multilingual-cased) | 0.8691 | 0.8955 | 0.7876 | 178M |
| [BETO](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) | 0.8759 | 0.9000 | 0.8130 | 110M |
| [RoBERTa-b](https://huggingface.co/BSC-TeMU/roberta-base-bne) | 0.8851 | 0.9000 | 0.8016 | 125M |
| [RoBERTa-l](https://huggingface.co/BSC-TeMU/roberta-large-bne) | 0.8772 | 0.9060 | 0.7958 | 355M |
| [Bertin](https://huggingface.co/bertin-project/bertin-roberta-base-spanish/tree/v1-512) | 0.8835 | 0.8990 | 0.7890 | 125M |
| [ELECTRICIDAD](https://huggingface.co/mrm8488/electricidad-base-discriminator) | 0.7954 | 0.9025 | 0.7878 | 109M |
Some details of our fine-tuning runs:
- epochs: 5
- batch-size: 32
- learning rate: 1e-4
- warmup proportion: 0.1
- linear learning rate decay
- layerwise learning rate decay
For all the details, check out our [selectra repo](https://github.com/recognai/selectra).
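As a rough illustration of that setup, the sketch below shows how a comparable fine-tuning run could be configured with the `Trainer` API for the XNLI task. It is an assumption based on the hyperparameters listed above, not the authors' actual script; the layerwise learning-rate decay is omitted because it requires custom optimizer parameter groups.
```python
# Illustrative fine-tuning sketch (assumed setup, not the authors' exact script).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("Recognai/selectra_small")
model = AutoModelForSequenceClassification.from_pretrained("Recognai/selectra_small", num_labels=3)

xnli = load_dataset("xnli", "es")
encoded = xnli.map(
    lambda batch: tokenizer(batch["premise"], batch["hypothesis"], truncation=True, max_length=512),
    batched=True,
)

args = TrainingArguments(
    output_dir="selectra_small_xnli",
    num_train_epochs=5,               # epochs: 5
    per_device_train_batch_size=32,   # batch-size: 32
    learning_rate=1e-4,               # learning rate: 1e-4
    warmup_ratio=0.1,                 # warmup proportion: 0.1
    lr_scheduler_type="linear",       # linear learning rate decay
)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"], eval_dataset=encoded["validation"],
                  tokenizer=tokenizer)
trainer.train()
```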
## Training
We pre-trained our SELECTRA models on the Spanish portion of the [Oscar](https://huggingface.co/datasets/oscar) dataset, which is about 150GB in size.
Each model version is trained for 300k steps, with a warm restart of the learning rate after the first 150k steps.
Some details of the training:
- steps: 300k
- batch-size: 128
- learning rate: 5e-4
- warmup steps: 10k
- linear learning rate decay
- TPU cores: 8 (v2-8)
For all details, check out our [selectra repo](https://github.com/recognai/selectra).
**Note:** Due to a misconfiguration in the pre-training scripts, the embeddings of vocabulary tokens containing accents were not optimized. If you fine-tune this model on a downstream task, you might consider using a tokenizer that does not strip the accents:
```python
tokenizer = ElectraTokenizerFast.from_pretrained("Recognai/selectra_small", strip_accents=False)
```
## Motivation
Despite the abundance of excellent Spanish language models (BETO, BSC-BNE, Bertin, ELECTRICIDAD, etc.), we felt there was still a lack of distilled or compact Spanish language models, as well as of systematic comparisons between such models and their larger siblings.
## Acknowledgment
This research was supported by the Google TPU Research Cloud (TRC) program.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Javier Lopez ([GitHub](https://github.com/javispp))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon)) | {"language": ["es"], "license": "apache-2.0", "datasets": ["oscar"], "thumbnail": "url to a thumbnail used in social sharing"} | Recognai/selectra_small | null | [
"transformers",
"pytorch",
"electra",
"pretraining",
"es",
"dataset:oscar",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers | {} | Recognai/veganuary_ner | null | [
"transformers",
"pytorch",
"safetensors",
"roberta",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
zero-shot-classification | transformers | # Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA
*Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline) to make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier.
## Usage
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/zeroshot_selectra_medium")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""Output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['sociedad', 'cultura', 'economia', 'salud', 'deportes'],
'scores': [0.6450043320655823,
0.16710571944713593,
0.08507631719112396,
0.0759836807847023,
0.026829993352293968]}
"""
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
## Demo and tutorial
If you want to see this model in action, we have created a basic tutorial using [Rubrix](https://www.rubrix.ml/), a free and open-source tool to *explore, annotate, and monitor data for NLP*.
The tutorial shows you how to evaluate this classifier for news categorization in Spanish, and how it could be used to build a training set for training a supervised classifier (which might be useful if you want to obtain more precise results or improve the model over time).
You can [find the tutorial here](https://rubrix.readthedocs.io/en/master/tutorials/zeroshot_data_annotation.html).
See the video below, which shows the predictions within the annotation process (note that the predictions are almost correct for every example).
<video width="100%" controls><source src="https://github.com/recognai/rubrix-materials/raw/main/tutorials/videos/zeroshot_selectra_news_data_annotation.mp4" type="video/mp4"></video>
## Metrics
| Model | Params | XNLI (acc) | \*MLSUM (acc) |
| --- | --- | --- | --- |
| [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 |
| zs SELECTRA medium | 41M | **0.807** | **0.589** |
| [zs SELECTRA small](https://huggingface.co/Recognai/zeroshot_selectra_small) | **22M** | 0.795 | 0.446 |
\*evaluated with zero-shot learning (ZSL)
- **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion.
- **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb); a rough sketch of the evaluation loop is shown below.
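The sketch below assumes the `mlsum` dataset's `topic` field as the gold label and uses placeholder candidate labels, since the card does not list the exact five labels used; see the linked evaluation notebook for the real setup.
```python
# Rough evaluation sketch (assumptions: placeholder labels, 'topic' as gold label).
from datasets import load_dataset
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="Recognai/zeroshot_selectra_medium")
candidate_labels = ["sociedad", "cultura", "economia", "salud", "deportes"]  # placeholders

mlsum_test = load_dataset("mlsum", "es", split="test").select(range(100))  # small sample
correct = 0
for example in mlsum_test:
    top_label = classifier(
        example["summary"],
        candidate_labels=candidate_labels,
        hypothesis_template="Este ejemplo es {}.",
    )["labels"][0]
    correct += int(top_label == example["topic"])  # only meaningful if labels match the topic strings
print("accuracy on sample:", correct / len(mlsum_test))
```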
## Training
Check out our [training notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/training.ipynb) for all the details.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon))
- Javier Lopez ([GitHub](https://github.com/javispp)) | {"language": "es", "license": "apache-2.0", "tags": ["zero-shot-classification", "nli", "pytorch"], "datasets": ["xnli"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "El autor se perfila, a los 50 a\u00f1os de su muerte, como uno de los grandes de su siglo", "candidate_labels": "cultura, sociedad, economia, salud, deportes"}]} | Recognai/zeroshot_selectra_medium | null | [
"transformers",
"pytorch",
"safetensors",
"electra",
"text-classification",
"zero-shot-classification",
"nli",
"es",
"dataset:xnli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
zero-shot-classification | transformers | # Zero-shot SELECTRA: A zero-shot classifier based on SELECTRA
*Zero-shot SELECTRA* is a [SELECTRA model](https://huggingface.co/Recognai/selectra_small) fine-tuned on the Spanish portion of the [XNLI dataset](https://huggingface.co/datasets/xnli). You can use it with Hugging Face's [Zero-shot pipeline](https://huggingface.co/transformers/master/main_classes/pipelines.html#transformers.ZeroShotClassificationPipeline) to make [zero-shot classifications](https://joeddav.github.io/blog/2020/05/29/ZSL.html).
In comparison to our previous zero-shot classifier [based on BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli), zero-shot SELECTRA is **much more lightweight**. As shown in the *Metrics* section, the *small* version (5 times fewer parameters) performs slightly worse, while the *medium* version (3 times fewer parameters) **outperforms** the BETO based zero-shot classifier.
## Usage
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification",
model="Recognai/zeroshot_selectra_medium")
classifier(
"El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo",
candidate_labels=["cultura", "sociedad", "economia", "salud", "deportes"],
hypothesis_template="Este ejemplo es {}."
)
"""Output
{'sequence': 'El autor se perfila, a los 50 años de su muerte, como uno de los grandes de su siglo',
'labels': ['sociedad', 'cultura', 'salud', 'economia', 'deportes'],
'scores': [0.3711881935596466,
0.25650349259376526,
0.17355826497077942,
0.1641489565372467,
0.03460107371211052]}
"""
```
The `hypothesis_template` parameter is important and should be in Spanish. **In the widget on the right, this parameter is set to its default value: "This example is {}.", so different results are expected.**
## Metrics
| Model | Params | XNLI (acc) | \*MLSUM (acc) |
| --- | --- | --- | --- |
| [zs BETO](https://huggingface.co/Recognai/bert-base-spanish-wwm-cased-xnli) | 110M | 0.799 | 0.530 |
| [zs SELECTRA medium](https://huggingface.co/Recognai/zeroshot_selectra_medium) | 41M | **0.807** | **0.589** |
| zs SELECTRA small | **22M** | 0.795 | 0.446 |
\*evaluated with zero-shot learning (ZSL)
- **XNLI**: The stated accuracy refers to the test portion of the [XNLI dataset](https://huggingface.co/datasets/xnli), after finetuning the model on the training portion.
- **MLSUM**: For this accuracy we take the test set of the [MLSUM dataset](https://huggingface.co/datasets/mlsum) and classify the summaries of 5 selected labels. For details, check out our [evaluation notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/evaluation.ipynb)
## Training
Check out our [training notebook](https://github.com/recognai/selectra/blob/main/zero-shot_classifier/training.ipynb) for all the details.
## Authors
- David Fidalgo ([GitHub](https://github.com/dcfidalgo))
- Daniel Vila ([GitHub](https://github.com/dvsrepo))
- Francisco Aranda ([GitHub](https://github.com/frascuchon))
- Javier Lopez ([GitHub](https://github.com/javispp)) | {"language": "es", "license": "apache-2.0", "tags": ["zero-shot-classification", "nli", "pytorch"], "datasets": ["xnli"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "El autor se perfila, a los 50 a\u00f1os de su muerte, como uno de los grandes de su siglo", "candidate_labels": "cultura, sociedad, economia, salud, deportes"}]} | Recognai/zeroshot_selectra_small | null | [
"transformers",
"pytorch",
"safetensors",
"electra",
"text-classification",
"zero-shot-classification",
"nli",
"es",
"dataset:xnli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
## Swedish BERT model for Named Entity Recognition (NER)
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases a Named Entity Recognition (NER) model for entity detection in Swedish. The model is based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) and fine-tuned on data collected from various internet sources and forums.
The model has been trained on Swedish data and only supports inference of Swedish input texts. The model's inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current model is supported with Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions has not been verified.
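A minimal usage sketch is shown below; it is an assumption about typical loading of the checkpoint, not an official snippet, and the entity labels come from the checkpoint's own `id2label` mapping.
```python
# Minimal NER inference sketch (assumed usage, not an official example).
from transformers import BertForTokenClassification, BertTokenizerFast, pipeline

tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-NER")
model = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-NER")

ner = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)
print(ner("Greta Thunberg talade i Stockholm i september."))  # illustrative Swedish sentence
```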
### Available tags
* Location
* Organization
* Person
* Religion
* Title
### Evaluation metrics
The model had the following metrics when evaluated on test data originating from the same domain as the training data.
#### F1-score
| Loc | Org | Per | Nat | Rel | Tit | Total |
|------|------|------|------|------|------|-------|
| 0.91 | 0.88 | 0.96 | 0.95 | 0.91 | 0.84 | 0.92 |
| {"language": "sv", "license": "mit"} | RecordedFuture/Swedish-NER | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"sv",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
## Swedish BERT models for sentiment analysis: sentiment targets
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for target/role assignment in Swedish. The two models are based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) and have been fine-tuned to solve a Named Entity Recognition (NER) token classification task.
This is a downstream model to be used in conjunction with the [Swedish violence sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Violence) or the [Swedish fear sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Fear). The models are trained to tag parts of sentences that have received a positive classification from the upstream sentiment classifier; they tag the parts of a sentence that contain the targets the upstream model has activated on.
The NER sentiment target models do work as standalone models, but their recommended application is downstream from a sentence classification model.
The models are trained only on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported with Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions has not been verified.
### Fear targets
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
classifier_fear_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
Once the model and tokenizer are initialized, the model can be used for inference.
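A minimal inference sketch, continuing from the snippet above (an assumed usage pattern, not an official example; the tag names depend on the checkpoint's `id2label` mapping):
```python
# Minimal token-classification inference sketch (assumed usage).
import torch

inputs = tokenizer("Jag är rädd för honom.", return_tensors="pt")  # illustrative Swedish input
with torch.no_grad():
    logits = classifier_fear_targets(**inputs).logits

predicted_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, classifier_fear_targets.config.id2label[label_id])
```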
#### Verification metrics
During training the Fear target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.8361 | 0.7903 | 0.8876 |
### Violence targets
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
classifier_violence_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
Once the model and tokenizer are initialized, the model can be used for inference.
#### Verification metrics
During training the Violence target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.7831| 0.9155| 0.8442 | | {"language": "sv", "license": "mit"} | RecordedFuture/Swedish-Sentiment-Fear-Targets | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"sv",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
## Swedish BERT models for sentiment analysis
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for sentiment analysis in Swedish. The two models are based on the [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) model and have been fine-tuned to solve a multi-label sentiment analysis task.
The models have been fine-tuned for the sentiments fear and violence. The models output three floats corresponding to the labels "Negative", "Weak sentiment", and "Strong Sentiment" at the respective indexes.
The models have been trained on Swedish data with a conversational focus, collected from various internet sources and forums.
The models are trained only on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported with Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions has not been verified.
### Swedish-Sentiment-Fear
The model can be imported from the transformers library by running
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
classifier_fear = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
Once the model and tokenizer are initialized, the model can be used for inference.
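A minimal inference sketch, continuing from the snippet above (an assumed usage pattern, not an official example; the label order follows the description of the three output floats above, and the softmax normalization is an assumption):
```python
# Minimal inference sketch (assumed usage, continuing from the snippet above).
import torch

labels = ["Negative", "Weak sentiment", "Strong Sentiment"]  # order as described above
inputs = tokenizer("Jag är rädd för att gå hem ensam.", return_tensors="pt")  # illustrative input
with torch.no_grad():
    logits = classifier_fear(**inputs).logits

scores = torch.softmax(logits, dim=-1)[0].tolist()  # assumed normalization
for label, score in zip(labels, scores):
    print(f"{label}: {score:.3f}")
```
The classification breakpoint reported below could then be applied to the combined weak/strong probability mass, although the card does not specify exactly how the breakpoint was used during evaluation.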
#### Sentiment definitions
#### The strong sentiment includes but is not limited to
Texts that:
- Hold an expressive emphasis on fear and/or anxiety
#### The weak sentiment includes but is not limited to
Texts that:
- Express fear and/or anxiety in a neutral way
#### Verification metrics
During training, the model's validation metrics were maximized at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.45 | 0.8754 | 0.8618 | 0.8895 |
### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
classifier_violence = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
Once the model and tokenizer are initialized, the model can be used for inference.
#### Sentiment definitions
#### The strong sentiment includes but is not limited to
Texts that:
- Reference highly violent acts
- Hold an aggressive tone
#### The weak sentiment includes but is not limited to
Texts that:
- Include general violent statements that do not fall under the strong sentiment
#### Verification metrics
During training, the model's validation metrics were maximized at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.35 | 0.7677 | 0.7456 | 0.791 | | {"language": "sv", "license": "mit"} | RecordedFuture/Swedish-Sentiment-Fear | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"sv",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
## Swedish BERT models for sentiment analysis: sentiment targets
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for target/role assignment in Swedish. The two models are based on [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) and have been fine-tuned to solve a Named Entity Recognition (NER) token classification task.
This is a downstream model to be used in conjunction with the [Swedish violence sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Violence) or the [Swedish fear sentiment classifier](https://huggingface.co/RecordedFuture/Swedish-Sentiment-Fear). The models are trained to tag parts of sentences that have received a positive classification from the upstream sentiment classifier; they tag the parts of a sentence that contain the targets the upstream model has activated on.
The NER sentiment target models do work as standalone models, but their recommended application is downstream from a sentence classification model.
The models are trained only on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported with Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions has not been verified.
### Fear targets
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
classifier_fear_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear-Targets")
Once the model and tokenizer are initialized, the model can be used for inference.
#### Verification metrics
During training the Fear target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.8361 | 0.7903 | 0.8876 |
### Violence targets
The model can be imported from the transformers library by running
from transformers import BertForTokenClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
classifier_violence_targets = BertForTokenClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence-Targets")
Once the model and tokenizer are initialized, the model can be used for inference.
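Alternatively, the token-classification pipeline can group the tagged tokens into target spans; this is a sketch under the assumption that the standard `ner` pipeline works directly with this checkpoint, not an official example:
```python
# Pipeline-based sketch for extracting tagged target spans (assumed usage).
from transformers import pipeline

target_tagger = pipeline(
    "ner",
    model=classifier_violence_targets,
    tokenizer=tokenizer,
    grouped_entities=True,  # merge consecutive tokens with the same tag into spans
)
print(target_tagger("Han hotade att skada sin granne."))  # illustrative Swedish input
```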
#### Verification metrics
During training the Violence target model had the following verification metrics when using "any overlap" as the evaluation metric.
| F-score | Precision | Recall |
|:-------:|:---------:|:------:|
| 0.7831| 0.9155| 0.8442 | | {"language": "sv", "license": "mit"} | RecordedFuture/Swedish-Sentiment-Violence-Targets | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"token-classification",
"sv",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
## Swedish BERT models for sentiment analysis
[Recorded Future](https://www.recordedfuture.com/) together with [AI Sweden](https://www.ai.se/en) releases two language models for sentiment analysis in Swedish. The two models are based on the [KB/bert-base-swedish-cased](https://huggingface.co/KB/bert-base-swedish-cased) model and have been fine-tuned to solve a multi-label sentiment analysis task.
The models have been fine-tuned for the sentiments fear and violence. The models output three floats corresponding to the labels "Negative", "Weak sentiment", and "Strong Sentiment" at the respective indexes.
The models have been trained on Swedish data with a conversational focus, collected from various internet sources and forums.
The models are trained only on Swedish data and only support inference of Swedish input texts. The models' inference metrics for non-Swedish inputs are not defined; such inputs are considered out-of-domain data.
The current models are supported with Transformers version >= 4.3.3 and Torch version 1.8.0; compatibility with older versions has not been verified.
### Swedish-Sentiment-Fear
The model can be imported from the transformers library by running
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
classifier_fear = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Fear")
Once the model and tokenizer are initialized, the model can be used for inference.
#### Sentiment definitions
#### The strong sentiment includes but is not limited to
Texts that:
- Hold an expressive emphasis on fear and/or anxiety
#### The weak sentiment includes but is not limited to
Texts that:
- Express fear and/or anxiety in a neutral way
#### Verification metrics
During training, the model's validation metrics were maximized at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.45 | 0.8754 | 0.8618 | 0.8895 |
### Swedish-Sentiment-Violence
The model can be imported from the transformers library by running
from transformers import BertForSequenceClassification, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
classifier_violence = BertForSequenceClassification.from_pretrained("RecordedFuture/Swedish-Sentiment-Violence")
Once the model and tokenizer are initialized, the model can be used for inference.
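A pipeline-based sketch, assuming the standard text-classification pipeline works with this checkpoint; the breakpoint value comes from the table below, and since the card does not specify exactly how it was applied, the thresholding here is purely illustrative:
```python
# Pipeline-based sketch (assumed usage, not an official example).
from transformers import pipeline

violence_pipeline = pipeline(
    "text-classification",
    model=classifier_violence,
    tokenizer=tokenizer,
    return_all_scores=True,  # return scores for all three output indexes
)
scores = violence_pipeline("Han hotade att slå sönder allt.")[0]  # illustrative Swedish input
print(scores)

# Illustrative use of the reported classification breakpoint (0.35): treat the text as
# carrying violence sentiment if the non-"Negative" probability mass exceeds it.
positive_mass = sum(entry["score"] for entry in scores[1:])
print("violence sentiment" if positive_mass > 0.35 else "no violence sentiment")
```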
#### Sentiment definitions
#### The strong sentiment includes but is not limited to
Texts that:
- Reference highly violent acts
- Hold an aggressive tone
#### The weak sentiment includes but is not limited to
Texts that:
- Include general violent statements that do not fall under the strong sentiment
#### Verification metrics
During training, the model's validation metrics were maximized at the following classification breakpoint.
| Classification Breakpoint | F-score | Precision | Recall |
|:-------------------------:|:-------:|:---------:|:------:|
| 0.35 | 0.7677 | 0.7456 | 0.791 | | {"language": "sv", "license": "mit"} | RecordedFuture/Swedish-Sentiment-Violence | null | [
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"text-classification",
"sv",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | RedPandaAINLP/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers | # Rick DialoGPT Model.
>Following https://github.com/RuolinZheng08/twewy-discord-chatbot Tutorial. | {"tags": ["conversational"]} | Redolid/DialoGPT-small-Rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {"license": "cc"} | Rehash/1stmodel | null | [
"license:cc",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Steins Gate DialoGPT Model | {"tags": ["conversational"]} | Rei/DialoGPT-medium-kurisu | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Reifuku/KK-CB | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | RemiCj/DialoGPT-small-RickSanchez | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |