Dataset columns:
- modelId: string (length 4 to 81)
- tags: list
- pipeline_tag: string (17 distinct values)
- config: dict
- downloads: int64 (0 to 59.7M)
- first_commit: timestamp[ns, tz=UTC]
- card: string (length 51 to 438k)
Blerrrry/Kkk
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-17T10:31:08Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion-ch02 results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.934 - name: F1 type: f1 value: 0.9341801255709286 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion-ch02 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1703 - Accuracy: 0.934 - F1: 0.9342 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2923 | 1.0 | 250 | 0.2001 | 0.9275 | 0.9263 | | 0.1485 | 2.0 | 500 | 0.1703 | 0.934 | 0.9342 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
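The card above reports metrics and hyperparameters but no inference snippet. A minimal sketch of how such an emotion classifier is typically loaded with `transformers`, assuming a hypothetical repo id (the card does not state where the checkpoint is published):

```python
from transformers import pipeline

# Hypothetical repo id -- substitute the namespace the checkpoint is actually published under.
model_id = "your-username/distilbert-base-uncased-finetuned-emotion-ch02"

# The card describes a sequence-classification fine-tune on the emotion dataset,
# so the generic text-classification pipeline applies.
classifier = pipeline("text-classification", model=model_id)

print(classifier("I am thrilled the experiment finally worked!"))
# -> [{'label': ..., 'score': ...}]
```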
BlightZz/DialoGPT-medium-Kurisu
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
19
null
--- inference: false license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zu pipeline_tag: text-generation --- ### Quantized bigscience/bloom 1B3 with 8-bit weights Heavily inspired by [Hivemind's GPT-J-6B with 8-bit weights](https://huggingface.co/hivemind/gpt-j-6B-8bit), this is a version of [bigscience/bloom](https://huggingface.co/bigscience/bloom-1b3), a ~1-billion-parameter language model that you can run and fine-tune with less memory. Here, we also apply [LoRA (Low Rank Adaptation)](https://arxiv.org/abs/2106.09685) to reduce model size. ### How to fine-tune TBA ### How to use This model can be used by adapting Bloom's original implementation. This is an adaptation from [Hivemind's GPT-J 8-bit](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb): ```python import transformers from transformers import BloomTokenizerFast import torch import torch.nn as nn import torch.nn.functional as F from bitsandbytes.functional import quantize_blockwise, dequantize_blockwise from typing import Tuple from torch.cuda.amp import custom_fwd, custom_bwd class FrozenBNBLinear(nn.Module): def __init__(self, weight, absmax, code, bias=None): assert isinstance(bias, nn.Parameter) or bias is None super().__init__() self.out_features, self.in_features = weight.shape self.register_buffer("weight", weight.requires_grad_(False)) self.register_buffer("absmax", absmax.requires_grad_(False)) self.register_buffer("code", code.requires_grad_(False)) self.adapter = None self.bias = bias def forward(self, input): output = DequantizeAndLinear.apply(input, self.weight, self.absmax, self.code, self.bias) if self.adapter: output += self.adapter(input) return output @classmethod def from_linear(cls, linear: nn.Linear) -> "FrozenBNBLinear": weights_int8, state = quantize_blockise_lowmemory(linear.weight) return cls(weights_int8, *state, linear.bias) def __repr__(self): return f"{self.__class__.__name__}({self.in_features}, {self.out_features})" class DequantizeAndLinear(torch.autograd.Function): @staticmethod @custom_fwd def forward(ctx, input: torch.Tensor, weights_quantized: torch.ByteTensor, absmax: torch.FloatTensor, code: torch.FloatTensor, bias: torch.FloatTensor): weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code) ctx.save_for_backward(input, weights_quantized, absmax, code) ctx._has_bias = bias is not None return F.linear(input, weights_deq, bias) @staticmethod @custom_bwd def backward(ctx, grad_output: torch.Tensor): assert not ctx.needs_input_grad[1] and not ctx.needs_input_grad[2] and not ctx.needs_input_grad[3] input, weights_quantized, absmax, code = ctx.saved_tensors # grad_output: [*batch, out_features] weights_deq = dequantize_blockwise(weights_quantized, absmax=absmax, code=code) grad_input = grad_output @ weights_deq grad_bias = grad_output.flatten(0, -2).sum(dim=0) if ctx._has_bias else None return grad_input, None, None, None, grad_bias class FrozenBNBEmbedding(nn.Module): def __init__(self, weight, absmax, code): super().__init__() self.num_embeddings, self.embedding_dim = weight.shape self.register_buffer("weight", weight.requires_grad_(False)) self.register_buffer("absmax", absmax.requires_grad_(False)) self.register_buffer("code", code.requires_grad_(False)) self.adapter = None def forward(self, input, **kwargs): with torch.no_grad(): # note: both quantized weights and input indices are *not* differentiable weight_deq = dequantize_blockwise(self.weight, absmax=self.absmax, code=self.code) output = F.embedding(input, weight_deq, **kwargs) if self.adapter: output += self.adapter(input) return output @classmethod def from_embedding(cls, embedding: nn.Embedding) -> "FrozenBNBEmbedding": weights_int8, state = quantize_blockise_lowmemory(embedding.weight) return cls(weights_int8, *state) def __repr__(self): return f"{self.__class__.__name__}({self.num_embeddings}, {self.embedding_dim})" def quantize_blockise_lowmemory(matrix: torch.Tensor, chunk_size: int = 2 ** 20): assert chunk_size % 4096 == 0 code = None chunks = [] absmaxes = [] flat_tensor = matrix.view(-1) for i in range((matrix.numel() - 1) // chunk_size + 1): input_chunk = flat_tensor[i * chunk_size: (i + 1) * chunk_size].clone() quantized_chunk, (absmax_chunk, code) = quantize_blockwise(input_chunk, code=code) chunks.append(quantized_chunk) absmaxes.append(absmax_chunk) matrix_i8 = torch.cat(chunks).reshape_as(matrix) absmax = torch.cat(absmaxes) return matrix_i8, (absmax, code) def convert_to_int8(model): """Convert linear and embedding modules to 8-bit with optional adapters""" for module in list(model.modules()): for name, child in module.named_children(): if isinstance(child, nn.Linear): print(name, child) setattr( module, name, FrozenBNBLinear( weight=torch.zeros(child.out_features, child.in_features, dtype=torch.uint8), absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1), code=torch.zeros(256), bias=child.bias, ), ) elif isinstance(child, nn.Embedding): setattr( module, name, FrozenBNBEmbedding( weight=torch.zeros(child.num_embeddings, child.embedding_dim, dtype=torch.uint8), absmax=torch.zeros((child.weight.numel() - 1) // 4096 + 1), code=torch.zeros(256), ) ) class BloomBlock(transformers.models.bloom.modeling_bloom.BloomBlock): def __init__(self, config, layer_number=None): super().__init__(config, layer_number) convert_to_int8(self.self_attention) convert_to_int8(self.mlp) class BloomModel(transformers.models.bloom.modeling_bloom.BloomModel): def __init__(self, config): super().__init__(config) convert_to_int8(self) class BloomForCausalLM(transformers.models.bloom.modeling_bloom.BloomForCausalLM): def __init__(self, config): super().__init__(config) convert_to_int8(self) transformers.models.bloom.modeling_bloom.BloomBlock = BloomBlock model_name = 'mrm8488/bloom-1b3-8bit' model = BloomForCausalLM.from_pretrained(model_name, low_cpu_mem_usage=True) tokenizer = BloomTokenizerFast.from_pretrained(model_name) prompt = tokenizer("Given a table named salaries and columns id, created_at, salary, age. Creates a SQL to answer What is the average salary for 22 years old:", return_tensors='pt') out = model.generate(**prompt, min_length=10, do_sample=True) tokenizer.decode(out[0]) ```
BlueGamerBeast/DialoGPT-small-Morgana
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery-v01 results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="roykoand/q-FrozenLake-v1-4x4-noSlippery-v01", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
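The snippet in the card above relies on `load_from_hub` and `evaluate_agent` helpers that come from the accompanying course notebook and are not defined in the card. A minimal sketch of what `load_from_hub` could look like, assuming the pickle file stores the model dictionary used above:

```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning model (qtable, env_id, eval settings) from the Hub."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```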
BobBraico/distilbert-base-uncased-finetuned-imdb-accelerate
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9245 - name: F1 type: f1 value: 0.9246080819022496 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2158 - Accuracy: 0.9245 - F1: 0.9246 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8152 | 1.0 | 250 | 0.2994 | 0.9095 | 0.9072 | | 0.2424 | 2.0 | 500 | 0.2158 | 0.9245 | 0.9246 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
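This card also omits usage code. A sketch that loads the classifier directly and returns per-class probabilities, again assuming a hypothetical repo id:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical repo id -- replace with the actual location of this checkpoint.
model_id = "your-username/distilbert-base-uncased-finetuned-emotion"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't believe how well this turned out!", return_tensors="pt")
with torch.inference_mode():
    probs = model(**inputs).logits.softmax(dim=-1)

# id2label is populated by the fine-tuning run; for the emotion dataset it maps to six emotion classes.
print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs[0])})
```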
BobBraico/distilbert-base-uncased-finetuned-imdb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-17T12:51:23Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="reachrkr/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Brona/model1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-17T16:38:40Z
--- language: - en tags: - pytorch - ner - text generation - seq2seq inference: false license: mit datasets: - conll2003 metrics: - f1 --- # t5-base-qa-ner-conll Unofficial implementation of [InstructionNER](https://arxiv.org/pdf/2203.03903v1.pdf). t5-base model tuned on conll2003 dataset. https://github.com/ovbystrova/InstructionNER ## Inference ```shell git clone https://github.com/ovbystrova/InstructionNER cd InstructionNER ``` ```python from instruction_ner.model import Model model = Model( model_path_or_name="olgaduchovny/t5-base-ner-conll", tokenizer_path_or_name="olgaduchovny/t5-base-ner-conll" ) options = ["LOC", "PER", "ORG", "MISC"] instruction = "please extract entities and their types from the input sentence, " \ "all entity types are in options" text = "The protest , which attracted several thousand supporters , coincided with the 18th anniversary of Spain 's constitution ." generation_kwargs = { "num_beams": 2, "max_length": 128 } pred_text, pred_spans = model.predict( text=text, generation_kwargs=generation_kwargs, instruction=instruction, options=options ) >>> ('Spain is a Loc.', [(99, 104, 'LOC')]) ``` ## Prediction Sample ``` Sentence: The protest , which attracted several thousand supporters , coincided with the 18th anniversary of Spain 's constitution . Instruction: please extract entities and their types from the input sentence, all entity types are in options Options: ORG, PER, LOC Prediction (raw text): Spain is a LOC. Prediction (span): [(99, 104, 'LOC')] ```
Brona/poc_de
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - paraphrasing - generated_from_trainer datasets: - paws metrics: - rouge model-index: - name: pegasus-xsum-finetuned-paws results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: paws type: paws args: labeled_final metrics: - name: Rouge1 type: rouge value: 92.4371 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-xsum-finetuned-paws This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on the paws dataset. It achieves the following results on the evaluation set: - Loss: 2.1199 - Rouge1: 92.4371 - Rouge2: 75.4061 - Rougel: 84.1519 - Rougelsum: 84.1958 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | 2.1481 | 1.46 | 1000 | 2.0112 | 93.7727 | 73.3021 | 84.2963 | 84.2506 | | 2.0113 | 2.93 | 2000 | 2.0579 | 93.813 | 73.4119 | 84.3674 | 84.2693 | | 2.054 | 4.39 | 3000 | 2.0890 | 93.3926 | 73.3727 | 84.2814 | 84.1649 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
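The pegasus-xsum-finetuned-paws card above lists ROUGE scores but no generation example. A minimal paraphrasing sketch, assuming a hypothetical repo id for the checkpoint:

```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

# Hypothetical repo id -- the card does not say where the fine-tuned weights are hosted.
model_id = "your-username/pegasus-xsum-finetuned-paws"

tokenizer = PegasusTokenizer.from_pretrained(model_id)
model = PegasusForConditionalGeneration.from_pretrained(model_id)

text = "The quick brown fox jumps over the lazy dog."
batch = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")

# Beam search gives more conservative paraphrases than sampling.
outputs = model.generate(**batch, num_beams=4, max_length=60)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```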
BrunoNogueira/DialoGPT-kungfupanda
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2022-07-17T16:58:50Z
--- language: ru datasets: - Common Voice metrics: - wer tags: - audio - speech - wav2vec2 - Russian-speech-corpus - automatic-speech-recognition - speech - PyTorch license: apache-2.0 model-index: - name: Edresson Casanova Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset in Russian results: - task: name: Speech Recognition type: automatic-speech-recognition metrics: - name: Test Common Voice 7.0 WER type: wer value: 74.02 --- # Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset in Russian [Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Russian using a single-speaker dataset. # Use this model ```python from transformers import AutoTokenizer, Wav2Vec2ForCTC tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-russian") model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-russian") ``` # Results For the results check the [paper](https://arxiv.org/abs/2204.00618) # Example test with Common Voice Dataset ```python dataset = load_dataset("common_voice", "ru", split="test", data_dir="./cv-corpus-7.0-2021-07-21") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) ```
Brunomezenga/NN
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-17T17:18:45Z
--- language: pt datasets: - Common Voice metrics: - wer tags: - audio - speech - wav2vec2 - pt - portuguese-speech-corpus - automatic-speech-recognition - speech - PyTorch license: apache-2.0 model-index: - name: Edresson Casanova Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset in Portuguese results: - task: name: Speech Recognition type: automatic-speech-recognition metrics: - name: Test Common Voice 7.0 WER type: wer value: 63.90 --- # Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Portuguese [Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using a single-speaker dataset (TTS-Portuguese Corpus). # Use this model ```python from transformers import AutoTokenizer, Wav2Vec2ForCTC tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-portuguese") model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-portuguese") ``` # Results For the results check the [paper](https://arxiv.org/abs/2204.00618) # Example test with Common Voice Dataset ```python dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-7.0-2021-07-21") resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000) def map_to_array(batch): speech, _ = torchaudio.load(batch["path"]) batch["speech"] = resampler.forward(speech.squeeze(0)).numpy() batch["sampling_rate"] = resampler.new_freq batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'") return batch ``` ```python ds = dataset.map(map_to_array) result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys())) print(wer.compute(predictions=result["predicted"], references=result["target"])) ```
Bryan190/Aguy190
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-17T17:29:32Z
--- tags: - paraphrasing - generated_from_trainer datasets: - paws metrics: - rouge model-index: - name: pegasus-pubmed-finetuned-paws results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: paws type: paws args: labeled_final metrics: - name: Rouge1 type: rouge value: 56.8108 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-pubmed-finetuned-paws This model is a fine-tuned version of [google/pegasus-pubmed](https://huggingface.co/google/pegasus-pubmed) on the paws dataset. It achieves the following results on the evaluation set: - Loss: 3.5012 - Rouge1: 56.8108 - Rouge2: 36.2576 - Rougel: 51.1666 - Rougelsum: 51.2193 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | No log | 0.73 | 1000 | 3.8839 | 51.2731 | 29.8072 | 45.767 | 45.5732 | | 4.071 | 1.47 | 2000 | 3.6459 | 52.756 | 31.9185 | 48.0092 | 48.0544 | | 3.5467 | 2.2 | 3000 | 3.5849 | 54.8127 | 33.1959 | 49.326 | 49.4971 | | 3.5467 | 2.93 | 4000 | 3.5267 | 55.387 | 33.9516 | 50.683 | 50.6313 | | 3.3654 | 3.66 | 5000 | 3.5031 | 57.5279 | 35.2664 | 51.9903 | 52.258 | | 3.2844 | 4.4 | 6000 | 3.5296 | 56.0536 | 33.395 | 50.9909 | 51.244 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
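Like the pegasus-xsum variant earlier, this pegasus-pubmed-finetuned-paws card gives no generation example. A sketch that returns several paraphrase candidates, assuming a hypothetical repo id:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical repo id -- substitute the actual checkpoint location.
model_id = "your-username/pegasus-pubmed-finetuned-paws"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

sentence = "Regular exercise substantially lowers the risk of cardiovascular disease."
batch = tokenizer(sentence, truncation=True, return_tensors="pt")

# Return several beams so downstream code can pick the paraphrase it prefers.
outputs = model.generate(**batch, num_beams=8, num_return_sequences=3, max_length=60)
for candidate in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(candidate)
```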
Bryanwong/wangchanberta-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="crdealme/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Brykee/BrykeeBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-17T19:32:21Z
--- license: cc-by-nc-sa-4.0 tags: - grammar - spelling - punctuation - error-correction - grammar synthesis datasets: - jfleg widget: - text: "i can has cheezburger" example_title: "cheezburger" - text: "There car broke down so their hitching a ride to they're class." example_title: "compound-1" - text: "so em if we have an now so with fito ringina know how to estimate the tren given the ereafte mylite trend we can also em an estimate is nod s i again tort watfettering an we have estimated the trend an called wot to be called sthat of exty right now we can and look at wy this should not hare a trend i becan we just remove the trend an and we can we now estimate tesees ona effect of them exty" example_title: "Transcribed Audio Example 2" - text: "My coworker said he used a financial planner to help choose his stocks so he wouldn't loose money." example_title: "incorrect word choice (context)" - text: "good so hve on an tadley i'm not able to make it to the exla session on monday this week e which is why i am e recording pre recording an this excelleision and so to day i want e to talk about two things and first of all em i wont em wene give a summary er about ta ohow to remove trents in these nalitives from time series" example_title: "lowercased audio transcription output" - text: "Semo eaxmeslp of bda gmaramr ttah occru deu to nounprnooun ageremten errrso inlceud Anan adn Pat aer mairred he has bnee togethre fro 20 yaesr Anna and Pta aer plraul wheil he is sniurgla Teh sentecne suhold rdea Aann adn Pat are mraried tyhe heav" example_title: "descramble unintelligible text" - text: "Most of the course is about semantic or content of language but there are also interesting topics to be learned from the servicefeatures except statistics in characters in documents. At this point, Elvthos introduces himself as his native English speaker and goes on to say that if you continue to work on social scnce," example_title: "social science ASR summary output" parameters: max_length: 128 min_length: 2 num_beams: 8 repetition_penalty: 1.5 length_penalty: 0.95 early_stopping: True --- # grammar-synthesis-base (beta) a fine-tuned version of [google/t5-base-lm-adapt](https://huggingface.co/google/t5-base-lm-adapt) for grammar correction on an expanded version of the [JFLEG](https://paperswithcode.com/dataset/jfleg) dataset. Check out a [demo notebook on Colab here](https://colab.research.google.com/gist/pszemraj/91abb08aa99a14d9fdc59e851e8aed66/demo-for-grammar-synthesis-base.ipynb). Usage in Python (after `pip install transformers`): ```python from transformers import pipeline corrector = pipeline( 'text2text-generation', 'pszemraj/grammar-synthesis-base', ) raw_text = 'i can has cheezburger' results = corrector(raw_text) print(results) ``` ## Model description The intent is to create a text2text language model that successfully completes "single-shot grammar correction" on a potentially grammatically incorrect text **that could have a lot of mistakes** with the important qualifier of **it does not semantically change text/information that IS grammatically correct.** Compare some of the heavier-error examples on [other grammar correction models](https://huggingface.co/models?dataset=dataset:jfleg) to see the difference :) ## Limitations - dataset: `cc-by-nc-sa-4.0` - model: `apache-2.0` - this is **still a work-in-progress** and while probably useful for "single-shot grammar correction" in a lot of cases, **give the outputs a glance for correctness ok?** ## Use Cases Obviously, this section is quite general as there are many things one can use "general single-shot grammar correction" for. Some ideas or use cases: 1. Correcting highly error-prone LM outputs. Some examples would be audio transcription (ASR) (this is literally some of the examples) or something like handwriting OCR. - To be investigated further, depending on what model/system is used it _might_ be worth it to apply this after OCR on typed characters. 2. Correcting/infilling text generated by text generation models to be cohesive/remove obvious errors that break the conversation immersion. I use this on the outputs of [this OPT 2.7B chatbot-esque model of myself](https://huggingface.co/pszemraj/opt-peter-2.7B). > An example of this model running on CPU with beam search: ``` original response: ive heard it attributed to a bunch of different philosophical schools, including stoicism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to synthesizing took 306.12 seconds Final response in 1294.857 s: I've heard it attributed to a bunch of different philosophical schools, including solipsism, pragmatism, existentialism and even some forms of post-structuralism. i think one of the most interesting (and most difficult) philosophical problems is trying to let dogs (or other animals) out of cages. the reason why this is a difficult problem is because it seems to go against our grain (so to speak) ``` _Note: I have some other logic that removes any periods at the end of the final sentence in this chatbot setting [to avoid coming off as passive aggressive](https://www.npr.org/2020/09/05/909969004/before-texting-your-kid-make-sure-to-double-check-your-punctuation)_ 3. Somewhat related to #2 above, fixing/correcting so-called [tortured-phrases](https://arxiv.org/abs/2107.06751) that are dead giveaways text was generated by a language model. _Note that _SOME_ of these are not fixed, especially as they venture into domain-specific terminology (i.e. irregular timberland instead of Random Forest)._ ## Training and evaluation data More information needed 😉 ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 64 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.02 - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
Bryson575x/riceboi
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-17T20:29:58Z
--- language: - es library_name: pysentimiento tags: - twitter - named-entity-recognition - ner datasets: - lince --- # Named Entity Recognition model for Spanish/English ## robertuito-ner Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with the Spanish/English split of the [LinCE NER corpus](https://ritual.uh.edu/lince/), a code-switched benchmark . Base model is [RoBERTuito](https://github.com/pysentimiento/robertuito), a RoBERTa model trained in Spanish tweets. ## Usage If you want to use this model, we suggest you use it directly from the `pysentimiento` library as it is not working properly with the pipeline due to tokenization issues ```python from pysentimiento import create_analyzer ner_analyzer = create_analyzer("ner", lang="es") ner_analyzer.predict( "rindanse ante el mejor, leonel andres messi cuccitini. serresiete no existis, segui en al-nassr" ) # [{'type': 'PER', # 'text': 'leonel andres messi cuccitini', # 'start': 24, # 'end': 53}, # {'type': 'PER', 'text': 'serresiete', 'start': 55, 'end': 65}, # {'type': 'LOC', 'text': 'al-nassr', 'start': 108, 'end': 116}] ``` ## Results Results are taken from the LinCE leaderboard | Model | Sentiment | NER | POS | |:-----------------------|:----------------|:-------------------|:--------| | RoBERTuito | **60.6** | 68.5 | 97.2 | | XLM Large | -- | **69.5** | **97.2** | | XLM Base | -- | 64.9 | 97.0 | | C2S mBERT | 59.1 | 64.6 | 96.9 | | mBERT | 56.4 | 64.0 | 97.1 | | BERT | 58.4 | 61.1 | 96.9 | | BETO | 56.5 | -- | -- | ## Citation If you use this model in your research, please cite pysentimiento, RoBERTuito and LinCE papers: ``` @misc{perez2021pysentimiento, title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks}, author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque}, year={2021}, eprint={2106.09462}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{perez2022robertuito, title={RoBERTuito: a pre-trained language model for social media text in Spanish}, author={P{\'e}rez, Juan Manuel and Furman, Dami{\'a}n Ariel and Alemany, Laura Alonso and Luque, Franco M}, booktitle={Proceedings of the Thirteenth Language Resources and Evaluation Conference}, pages={7235--7243}, year={2022} } @inproceedings{aguilar2020lince, title={LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation}, author={Aguilar, Gustavo and Kar, Sudipta and Solorio, Thamar}, booktitle={Proceedings of the 12th Language Resources and Evaluation Conference}, pages={1803--1813}, year={2020} } ```
Bubb-les/DisloGPT-medium-HarryPotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - image-classification - pytorch - onnx datasets: - pyronear/openfire --- # ReXNet-1.3x model Pretrained on a dataset for wildfire binary classification (soon to be shared). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf). ## Model description The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy. ## Installation ### Prerequisites Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision. ### Latest stable release You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows: ```shell pip install pyrovision ``` or using [conda](https://anaconda.org/pyronear/pyrovision): ```shell conda install -c pyronear pyrovision ``` ### Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/pyronear/pyro-vision.git pip install -e pyro-vision/. ``` ## Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from pyrovision.models import model_from_hf_hub model = model_from_hf_hub("pyronear/rexnet1_3x").eval() img = Image.open(path_to_an_image).convert("RGB") # Preprocessing config = model.default_cfg transform = Compose([ Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize(config['mean'], config['std']) ]) input_tensor = transform(img).unsqueeze(0) # Inference with torch.inference_mode(): output = model(input_tensor) probs = output.squeeze(0).softmax(dim=0) ``` ## Citation Original paper ```bibtex @article{DBLP:journals/corr/abs-2007-00992, author = {Dongyoon Han and Sangdoo Yun and Byeongho Heo and Young Joon Yoo}, title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network}, journal = {CoRR}, volume = {abs/2007.00992}, year = {2020}, url = {https://arxiv.org/abs/2007.00992}, eprinttype = {arXiv}, eprint = {2007.00992}, timestamp = {Mon, 06 Jul 2020 15:26:01 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{Fernandez_Holocron_2020, author = {Fernandez, François-Guillaume}, month = {5}, title = {{Holocron}}, url = {https://github.com/frgfm/Holocron}, year = {2020} } ```
BumBelDumBel/TRUMP
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-07-17T20:30:58Z
--- license: apache-2.0 tags: - image-classification - pytorch - onnx datasets: - pyronear/openfire --- # ReXNet-1.5x model Pretrained on a dataset for wildfire binary classification (soon to be shared). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf). ## Model description The core idea of the author is to add a customized Squeeze-Excitation layer in the residual blocks that will prevent channel redundancy. ## Installation ### Prerequisites Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision. ### Latest stable release You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows: ```shell pip install pyrovision ``` or using [conda](https://anaconda.org/pyronear/pyrovision): ```shell conda install -c pyronear pyrovision ``` ### Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/pyronear/pyro-vision.git pip install -e pyro-vision/. ``` ## Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from pyrovision.models import model_from_hf_hub model = model_from_hf_hub("pyronear/rexnet1_5x").eval() img = Image.open(path_to_an_image).convert("RGB") # Preprocessing config = model.default_cfg transform = Compose([ Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize(config['mean'], config['std']) ]) input_tensor = transform(img).unsqueeze(0) # Inference with torch.inference_mode(): output = model(input_tensor) probs = output.squeeze(0).softmax(dim=0) ``` ## Citation Original paper ```bibtex @article{DBLP:journals/corr/abs-2007-00992, author = {Dongyoon Han and Sangdoo Yun and Byeongho Heo and Young Joon Yoo}, title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural Network}, journal = {CoRR}, volume = {abs/2007.00992}, year = {2020}, url = {https://arxiv.org/abs/2007.00992}, eprinttype = {arXiv}, eprint = {2007.00992}, timestamp = {Mon, 06 Jul 2020 15:26:01 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{Fernandez_Holocron_2020, author = {Fernandez, François-Guillaume}, month = {5}, title = {{Holocron}}, url = {https://github.com/frgfm/Holocron}, year = {2020} } ```
BumBelDumBel/ZORK-AI-TEST
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - image-classification - pytorch - onnx datasets: - pyronear/openfire --- # MobileNet V3 - Large model Pretrained on a dataset for wildfire binary classification (soon to be shared). The MobileNet V3 architecture was introduced in [this paper](https://arxiv.org/pdf/1905.02244.pdf). ## Model description The core idea of the author is to simplify the final stage, while using SiLU as activations and making Squeeze-and-Excite blocks larger. ## Installation ### Prerequisites Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision. ### Latest stable release You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows: ```shell pip install pyrovision ``` or using [conda](https://anaconda.org/pyronear/pyrovision): ```shell conda install -c pyronear pyrovision ``` ### Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/pyronear/pyro-vision.git pip install -e pyro-vision/. ``` ## Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from pyrovision.models import model_from_hf_hub model = model_from_hf_hub("pyronear/mobilenet_v3_large").eval() img = Image.open(path_to_an_image).convert("RGB") # Preprocessing config = model.default_cfg transform = Compose([ Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize(config['mean'], config['std']) ]) input_tensor = transform(img).unsqueeze(0) # Inference with torch.inference_mode(): output = model(input_tensor) probs = output.squeeze(0).softmax(dim=0) ``` ## Citation Original paper ```bibtex @article{DBLP:journals/corr/abs-1905-02244, author = {Andrew Howard and Mark Sandler and Grace Chu and Liang{-}Chieh Chen and Bo Chen and Mingxing Tan and Weijun Wang and Yukun Zhu and Ruoming Pang and Vijay Vasudevan and Quoc V. Le and Hartwig Adam}, title = {Searching for MobileNetV3}, journal = {CoRR}, volume = {abs/1905.02244}, year = {2019}, url = {http://arxiv.org/abs/1905.02244}, eprinttype = {arXiv}, eprint = {1905.02244}, timestamp = {Thu, 27 May 2021 16:20:51 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-1905-02244.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{chintala_torchvision_2017, author = {Chintala, Soumith}, month = {4}, title = {{Torchvision}}, url = {https://github.com/pytorch/vision}, year = {2017} } ```
BumBelDumBel/ZORK_AI_SCIFI
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- license: apache-2.0 tags: - image-classification - pytorch - onnx datasets: - pyronear/openfire --- # ResNet-18 model Pretrained on a dataset for wildfire binary classification (soon to be shared). ## Model description The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection. ## Installation ### Prerequisites Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision. ### Latest stable release You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows: ```shell pip install pyrovision ``` or using [conda](https://anaconda.org/pyronear/pyrovision): ```shell conda install -c pyronear pyrovision ``` ### Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/pyronear/pyro-vision.git pip install -e pyro-vision/. ``` ## Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from pyrovision.models import model_from_hf_hub model = model_from_hf_hub("pyronear/resnet18").eval() img = Image.open(path_to_an_image).convert("RGB") # Preprocessing config = model.default_cfg transform = Compose([ Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize(config['mean'], config['std']) ]) input_tensor = transform(img).unsqueeze(0) # Inference with torch.inference_mode(): output = model(input_tensor) probs = output.squeeze(0).softmax(dim=0) ``` ## Citation Original paper ```bibtex @article{DBLP:journals/corr/HeZRS15, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {CoRR}, volume = {abs/1512.03385}, year = {2015}, url = {http://arxiv.org/abs/1512.03385}, eprinttype = {arXiv}, eprint = {1512.03385}, timestamp = {Wed, 17 Apr 2019 17:23:45 +0200}, biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{chintala_torchvision_2017, author = {Chintala, Soumith}, month = {4}, title = {{Torchvision}}, url = {https://github.com/pytorch/vision}, year = {2017} } ```
BunakovD/sd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-17T21:07:12Z
--- license: apache-2.0 tags: - image-classification - pytorch - onnx datasets: - pyronear/openfire --- # ResNet-34 model Pretrained on a dataset for wildfire binary classification (soon to be shared). ## Model description The core idea of the author is to help the gradient propagation through numerous layers by adding a skip connection. ## Installation ### Prerequisites Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision. ### Latest stable release You can install the last stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows: ```shell pip install pyrovision ``` or using [conda](https://anaconda.org/pyronear/pyrovision): ```shell conda install -c pyronear pyrovision ``` ### Developer mode Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*: ```shell git clone https://github.com/pyronear/pyro-vision.git pip install -e pyro-vision/. ``` ## Usage instructions ```python from PIL import Image from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize from torchvision.transforms.functional import InterpolationMode from pyrovision.models import model_from_hf_hub model = model_from_hf_hub("pyronear/resnet34").eval() img = Image.open(path_to_an_image).convert("RGB") # Preprocessing config = model.default_cfg transform = Compose([ Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR), PILToTensor(), ConvertImageDtype(torch.float32), Normalize(config['mean'], config['std']) ]) input_tensor = transform(img).unsqueeze(0) # Inference with torch.inference_mode(): output = model(input_tensor) probs = output.squeeze(0).softmax(dim=0) ``` ## Citation Original paper ```bibtex @article{DBLP:journals/corr/HeZRS15, author = {Kaiming He and Xiangyu Zhang and Shaoqing Ren and Jian Sun}, title = {Deep Residual Learning for Image Recognition}, journal = {CoRR}, volume = {abs/1512.03385}, year = {2015}, url = {http://arxiv.org/abs/1512.03385}, eprinttype = {arXiv}, eprint = {1512.03385}, timestamp = {Wed, 17 Apr 2019 17:23:45 +0200}, biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` Source of this implementation ```bibtex @software{chintala_torchvision_2017, author = {Chintala, Soumith}, month = {4}, title = {{Torchvision}}, url = {https://github.com/pytorch/vision}, year = {2017} } ```
Buntan/xlm-roberta-base-finetuned-marc-en
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - es tags: - twitter - pos-tagging --- # POS Tagging model for Spanish/English ## robertuito-pos Repository: [https://github.com/pysentimiento/pysentimiento/](https://github.com/finiteautomata/pysentimiento/) Model trained with the Spanish/English split of the [LinCE NER corpus](https://ritual.uh.edu/lince/), a code-switched benchmark . Base model is [RoBERTuito](https://github.com/pysentimiento/robertuito), a RoBERTa model trained in Spanish tweets. ## Usage If you want to use this model, we suggest you use it directly from the `pysentimiento` library as it is not working properly with the pipeline due to tokenization issues ```python from pysentimiento import create_analyzer pos_analyzer = create_analyzer("pos", lang="es") pos_analyzer.predict("Quiero que esto funcione correctamente! @perezjotaeme") >[{'type': 'PROPN', 'text': 'Quiero', 'start': 0, 'end': 6}, > {'type': 'SCONJ', 'text': 'que', 'start': 7, 'end': 10}, > {'type': 'PRON', 'text': 'esto', 'start': 11, 'end': 15}, > {'type': 'VERB', 'text': 'funcione', 'start': 16, 'end': 24}, > {'type': 'ADV', 'text': 'correctamente', 'start': 25, 'end': 38}, > {'type': 'PUNCT', 'text': '!', 'start': 38, 'end': 39}, > {'type': 'NOUN', 'text': '@perezjotaeme', 'start': 40, 'end': 53}] ``` ## Results Results are taken from the LinCE leaderboard | Model | Sentiment | NER | POS | |:-----------------------|:----------------|:-------------------|:--------| | RoBERTuito | **60.6** | 68.5 | 97.2 | | XLM Large | -- | **69.5** | **97.2** | | XLM Base | -- | 64.9 | 97.0 | | C2S mBERT | 59.1 | 64.6 | 96.9 | | mBERT | 56.4 | 64.0 | 97.1 | | BERT | 58.4 | 61.1 | 96.9 | | BETO | 56.5 | -- | -- | ## Citation If you use this model in your research, please cite pysentimiento, RoBERTuito and LinCE papers: ``` @misc{perez2021pysentimiento, title={pysentimiento: A Python Toolkit for Sentiment Analysis and SocialNLP tasks}, author={Juan Manuel Pérez and Juan Carlos Giudici and Franco Luque}, year={2021}, eprint={2106.09462}, archivePrefix={arXiv}, primaryClass={cs.CL} } @inproceedings{ortega2019overview, title={Overview of the task on irony detection in Spanish variants}, author={Ortega-Bueno, Reynier and Rangel, Francisco and Hern{\'a}ndez Far{\i}as, D and Rosso, Paolo and Montes-y-G{\'o}mez, Manuel and Medina Pagola, Jos{\'e} E}, booktitle={Proceedings of the Iberian languages evaluation forum (IberLEF 2019), co-located with 34th conference of the Spanish Society for natural language processing (SEPLN 2019). CEUR-WS. org}, volume={2421}, pages={229--256}, year={2019} } @inproceedings{aguilar2020lince, title={LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation}, author={Aguilar, Gustavo and Kar, Sudipta and Solorio, Thamar}, booktitle={Proceedings of the 12th Language Resources and Evaluation Conference}, pages={1803--1813}, year={2020} } ```
CALM/CALM
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-PixelCopter
  results:
  - metrics:
    - type: mean_reward
      value: 9.30 +/- 8.80
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.

To learn how to use this model and train your own, check out Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
CAMeL-Lab/bert-base-arabic-camelbert-ca-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
85
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 args: conll2003 metrics: - name: Precision type: precision value: 0.934260639178672 - name: Recall type: recall value: 0.9495119488387749 - name: F1 type: f1 value: 0.9418245555462816 - name: Accuracy type: accuracy value: 0.9868281627126626 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0573 - Precision: 0.9343 - Recall: 0.9495 - F1: 0.9418 - Accuracy: 0.9868 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0854 | 1.0 | 1756 | 0.0639 | 0.9148 | 0.9329 | 0.9238 | 0.9822 | | 0.0403 | 2.0 | 3512 | 0.0542 | 0.9370 | 0.9512 | 0.9440 | 0.9866 | | 0.0204 | 3.0 | 5268 | 0.0573 | 0.9343 | 0.9495 | 0.9418 | 0.9868 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
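The card above documents only the training setup, so a short inference example may help. The sketch below uses the standard `transformers` pipeline API; the repository id is a placeholder, since the card does not state where this checkpoint is hosted, and the example sentence is illustrative.

```python
from transformers import pipeline

# "your-username/bert-finetuned-ner" is a placeholder -- substitute the actual repository id.
ner = pipeline(
    "token-classification",
    model="your-username/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("My name is Clara and I live in Berkeley, California."))
```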
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
2022-07-17T23:12:15Z
--- tags: - generated_from_trainer datasets: - becasv2 model-index: - name: roberta-base-spanish-squades-modelo-robertav1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-spanish-squades-modelo-robertav1 This model is a fine-tuned version of [IIC/roberta-base-spanish-squades](https://huggingface.co/IIC/roberta-base-spanish-squades) on the becasv2 dataset. It achieves the following results on the evaluation set: - Loss: 2.7840 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 5 | 1.9356 | | No log | 2.0 | 10 | 1.7489 | | No log | 3.0 | 15 | 2.0573 | | No log | 4.0 | 20 | 2.3975 | | No log | 5.0 | 25 | 2.6796 | | No log | 6.0 | 30 | 2.7238 | | No log | 7.0 | 35 | 2.7616 | | No log | 8.0 | 40 | 2.7840 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
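Since the card above only reports training losses, a hedged usage sketch follows. It relies on the standard `question-answering` pipeline; the repository id is a placeholder and the question/context pair is purely illustrative.

```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual one for this fine-tuned checkpoint.
qa = pipeline("question-answering", model="your-username/roberta-base-spanish-squades-modelo-robertav1")

result = qa(
    question="¿Quién convoca las becas?",
    context="El Ministerio de Educación convoca cada año becas para estudiantes universitarios.",
)
print(result["answer"], round(result["score"], 3))
```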
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16,451
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: fr datasets: - lmqg/qg_frquad pipeline_tag: text2text-generation tags: - question generation widget: - text: "Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc." example_title: "Question Generation Example 1" - text: "Ce black dog peut être lié à des évènements traumatisants issus du monde extérieur, tels que son renvoi de l'Amirauté après la catastrophe des Dardanelles, lors de la <hl> Grande Guerre <hl> de 14-18, ou son rejet par l'électorat en juillet 1945." example_title: "Question Generation Example 2" - text: "contre <hl> Normie Smith <hl> et 15 000 dollars le 28 novembre 1938." example_title: "Question Generation Example 3" model-index: - name: lmqg/mt5-base-frquad-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_frquad type: default args: default metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 6.14 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 25.88 - name: METEOR (Question Generation) type: meteor_question_generation value: 15.55 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 77.81 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 54.58 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 86.41 - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 86.4 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer value: 86.42 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 60.19 - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 60.18 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer value: 60.19 - name: QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer value: 68.59 - name: QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_recall_bertscore_question_answer_generation_gold_answer value: 69.69 - name: QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_bertscore_question_answer_generation_gold_answer value: 67.59 - name: QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer value: 47.87 - name: QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer] type: 
qa_aligned_recall_moverscore_question_answer_generation_gold_answer value: 48.36 - name: QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer] type: qa_aligned_precision_moverscore_question_answer_generation_gold_answer value: 47.42 --- # Model Card of `lmqg/mt5-base-frquad-qg` This model is fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) for question generation task on the [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [google/mt5-base](https://huggingface.co/google/mt5-base) - **Language:** fr - **Training data:** [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="fr", model="lmqg/mt5-base-frquad-qg") # model prediction questions = model.generate_q(list_context="Créateur » (Maker), lui aussi au singulier, « le Suprême Berger » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.", list_answer="le Suprême Berger") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/mt5-base-frquad-qg") output = pipe("Créateur » (Maker), lui aussi au singulier, « <hl> le Suprême Berger <hl> » (The Great Shepherd) ; de l'autre, des réminiscences de la théologie de l'Antiquité : le tonnerre, voix de Jupiter, « Et souvent ta voix gronde en un tonnerre terrifiant », etc.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-base-frquad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 77.81 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_1 | 25.06 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_2 | 13.73 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_3 | 8.93 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | Bleu_4 | 6.14 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | METEOR | 15.55 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | MoverScore | 54.58 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | ROUGE_L | 25.88 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | - ***Metric (Question & Answer Generation, Reference Answer)***: Each question is generated from *the gold answer*. 
[raw metric file](https://huggingface.co/lmqg/mt5-base-frquad-qg/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 86.41 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedF1Score (MoverScore) | 60.19 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedPrecision (BERTScore) | 86.42 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedPrecision (MoverScore) | 60.19 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedRecall (BERTScore) | 86.4 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedRecall (MoverScore) | 60.18 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/mt5-base-frquad-ae`](https://huggingface.co/lmqg/mt5-base-frquad-ae). [raw metric file](https://huggingface.co/lmqg/mt5-base-frquad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_frquad.default.lmqg_mt5-base-frquad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:-----------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 68.59 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedF1Score (MoverScore) | 47.87 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedPrecision (BERTScore) | 67.59 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedPrecision (MoverScore) | 47.42 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedRecall (BERTScore) | 69.69 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | | QAAlignedRecall (MoverScore) | 48.36 | default | [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_frquad - dataset_name: default - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: google/mt5-base - max_length: 512 - max_length_output: 32 - epoch: 24 - batch: 4 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 16 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-base-frquad-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
2022-07-18T00:13:36Z
---
language:
- py
metrics:
- f1
---

To use our fine-tuned BioBERT model to predict whether a sentence from a radiology report makes reference to priors, load it as follows:

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

modelname = "rajpurkarlab/filbert"
tokenizer = AutoTokenizer.from_pretrained(modelname)
model = AutoModelForTokenClassification.from_pretrained(modelname)
```
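The snippet above only loads the checkpoint; the sketch below shows one way to actually run it on a report sentence. It is an assumption-laden example: the label names come from the checkpoint's config (not documented here) and the sentence is illustrative.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

modelname = "rajpurkarlab/filbert"
tokenizer = AutoTokenizer.from_pretrained(modelname)
model = AutoModelForTokenClassification.from_pretrained(modelname)

# aggregation_strategy="simple" merges word pieces into spans tagged with the model's labels.
tagger = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(tagger("Lungs are clear, unchanged from the prior radiograph."))
```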
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
71
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: namwoo/distilbert-base-uncased-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # namwoo/distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0339 - Validation Loss: 0.0623 - Train Precision: 0.9239 - Train Recall: 0.9335 - Train F1: 0.9287 - Train Accuracy: 0.9829 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.1982 | 0.0715 | 0.9040 | 0.9218 | 0.9128 | 0.9799 | 0 | | 0.0537 | 0.0618 | 0.9202 | 0.9305 | 0.9254 | 0.9827 | 1 | | 0.0339 | 0.0623 | 0.9239 | 0.9335 | 0.9287 | 0.9829 | 2 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
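Because this is a Keras/TensorFlow fine-tune with no usage section, a minimal TensorFlow inference sketch follows. The repository id is taken from the card's model-index entry and may need adjusting; everything else uses the standard `transformers` TF API.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

repo = "namwoo/distilbert-base-uncased-finetuned-ner"  # from the model-index; adjust if needed
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForTokenClassification.from_pretrained(repo)

inputs = tokenizer("My name is Wolfgang and I live in Berlin.", return_tensors="tf")
logits = model(**inputs).logits
pred_ids = tf.math.argmax(logits, axis=-1)[0].numpy()

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].numpy().tolist())
print([(tok, model.config.id2label[int(i)]) for tok, i in zip(tokens, pred_ids)])
```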
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
2022-07-18T02:51:58Z
## GPT Neo 125m fine-tuned

#### Pushing model to repo

1. Log in to Hugging Face:

```python
from huggingface_hub import notebook_login
notebook_login()
```

2. Then push the model and tokenizer to the repo:

```python
model.push_to_hub("gpt-neo-125m-finetuned", use_temp_dir=True)
tokenizer.push_to_hub("gpt-neo-125m-finetuned", use_temp_dir=True)
```

---

#### Using the Model

Load the model along with the tokenizer:

```python
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("ytling/gpt-neo-125m-finetuned",
                                          bos_token='<|startoftext|>',
                                          eos_token='<|endoftext|>',
                                          pad_token='<|pad|>')
model = GPTNeoForCausalLM.from_pretrained("ytling/gpt-neo-125m-finetuned").cuda()
model.resize_token_embeddings(len(tokenizer))
```

To use the model, pass a block of text, the loaded model, and the tokenizer into the `gpt_model()` function:

```python
import re

def gpt_model(block_text, model, tokenizer):
    block_dict = {
        # add labels here
        "Use Case": None
    }
    for label in block_dict:
        prompt = f"<|startoftext|>Text: {block_text}\n{label}: "
        token_prompt = tokenizer(f"{prompt}", return_tensors='pt', padding=True).input_ids.cuda()
        output = model.generate(token_prompt,
                                do_sample=False,
                                top_k=50,
                                max_length=512,
                                top_p=0.80,
                                temperature=1.08,
                                num_return_sequences=1,
                                pad_token_id=tokenizer.pad_token_id)
        decode_output = tokenizer.decode(output[0], skip_special_tokens=True)
        try:
            block_dict[label] = re.findall(f"\n{label}: (.*)", decode_output)[-1]
        except IndexError:
            pass
    return block_dict
```

This returns a dict containing the predicted entities within the text.

```python
# Eg.
{'Use Case': "'unify contact centre, unified communications, and real-time communications API capabilities within a single software solution.'"}
```
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus6
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
2022-07-18T03:46:21Z
--- language: zh datasets: CLUECorpusSmall widget: - text: "北京是[MASK]国的首都。" --- # Chinese Whole Word Masking RoBERTa Miniatures ## Model description This is the set of 6 Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658). [Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 6 Chinese Whole Word Masking RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and word segmentation tool, and provided all training details. You can download the 6 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below: | | Link | | -------- | :-----------------------: | | **Tiny** | [**2/128 (Tiny)**][2_128] | | **Mini** | [**4/256 (Mini)**][4_256] | | **Small** | [**4/512 (Small)**][4_512] | | **Medium** | [**8/512 (Medium)**][8_512] | | **Base** | [**12/768 (Base)**][12_768] | | **Large** | [**24/1024 (Large)**][24_1024] | Here are scores on the devlopment set of six Chinese tasks: | Model | Score | book_review | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) | | ------------------ | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: | | RoBERTa-Tiny-WWM | 72.2 | 83.6 | 91.8 | 81.8 | 62.1 | 55.4 | 58.6 | | RoBERTa-Mini-WWM | 76.3 | 86.2 | 93.0 | 86.8 | 64.4 | 58.7 | 68.8 | | RoBERTa-Small-WWM | 77.6 | 88.1 | 93.8 | 87.2 | 65.2 | 59.6 | 71.4 | | RoBERTa-Medium-WWM | 78.6 | 89.5 | 94.4 | 88.8 | 66.0 | 59.9 | 73.2 | | RoBERTa-Base-WWM | 80.2 | 90.3 | 95.8 | 89.4 | 67.5 | 61.8 | 76.2 | | RoBERTa-Large-WWM | 81.1 | 91.3 | 95.8 | 90.0 | 68.5 | 62.1 | 79.1 | For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128: - epochs: 3, 5, 8 - batch sizes: 32, 64 - learning rates: 3e-5, 1e-4, 3e-4 ## How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='uer/roberta-tiny-wwm-chinese-cluecorpussmall') >>> unmasker("北京是[MASK]国的首都。") [ {'score': 0.294228732585907, 'token': 704, 'token_str': '中', 'sequence': '北 京 是 中 国 的 首 都 。'}, {'score': 0.19691626727581024, 'token': 1266, 'token_str': '北', 'sequence': '北 京 是 北 国 的 首 都 。'}, {'score': 0.1070084273815155, 'token': 7506, 'token_str': '韩', 'sequence': '北 京 是 韩 国 的 首 都 。'}, {'score': 0.031527262181043625, 'token': 2769, 'token_str': '我', 'sequence': '北 京 是 我 国 的 首 都 。'}, {'score': 0.023054633289575577, 'token': 1298, 'token_str': '南', 'sequence': '北 京 是 南 国 的 首 都 。'} ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall') model = BertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall') model = TFBertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='tf') output = 
model(encoded_input) ``` ## Training data [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. ## Training procedure Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes. [jieba](https://github.com/fxsjy/jieba) is used as word segmentation tool. Taking the case of Whole Word Masking RoBERTa-Medium Stage1: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq128_dataset.pt \ --processes_num 32 --seq_length 128 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \ --learning_rate 1e-4 --batch_size 64 \ --whole_word_masking \ --data_processor mlm --target mlm ``` Stage2: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq512_dataset.pt \ --processes_num 32 --seq_length 512 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --pretrained_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin-1000000 \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \ --learning_rate 5e-5 --batch_size 16 \ --whole_word_masking \ --data_processor mlm --target mlm ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \ --output_model_path pytorch_model.bin \ --layers_num 8 --type mlm ``` ### BibTeX entry and citation info ``` @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } ``` [2_128]:https://huggingface.co/uer/roberta-tiny-wwm-chinese-cluecorpussmall [4_256]:https://huggingface.co/uer/roberta-mini-wwm-chinese-cluecorpussmall [4_512]:https://huggingface.co/uer/roberta-small-wwm-chinese-cluecorpussmall [8_512]:https://huggingface.co/uer/roberta-medium-wwm-chinese-cluecorpussmall [12_768]:https://huggingface.co/uer/roberta-base-wwm-chinese-cluecorpussmall [24_1024]:https://huggingface.co/uer/roberta-large-wwm-chinese-cluecorpussmall
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
63
null
--- tags: - generated_from_trainer model-index: - name: phobert-base-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # phobert-base-finetuned-imdb This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2510 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7146 | 1.0 | 2500 | 1.4245 | | 1.3821 | 2.0 | 5000 | 1.2666 | | 1.3308 | 3.0 | 7500 | 1.2564 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
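The card above gives training details only, so a hedged fill-mask sketch follows. The repository id is a placeholder, and the Vietnamese example sentence is illustrative; note that PhoBERT expects word-segmented input.

```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual one for this fine-tuned checkpoint.
fill = pipeline("fill-mask", model="your-username/phobert-base-finetuned-imdb")

masked = f"Bộ phim này thật {fill.tokenizer.mask_token} ."  # "This movie is really <mask>."
for pred in fill(masked):
    print(pred["token_str"], round(pred["score"], 3))
```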
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
132
null
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
  results:
  - metrics:
    - type: mean_reward
      value: 7.56 +/- 2.71
      name: mean_reward
    task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Taxi-v3
      type: Taxi-v3
---

# **Q-Learning** Agent playing **Taxi-v3**

This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.

## Usage

```python
# `load_from_hub` and `evaluate_agent` are assumed to be the helper functions
# defined in the Deep RL Course notebook that this card format comes from.
import gym

model = load_from_hub(repo_id="spacestar1705/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])

evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,862
2022-07-18T05:36:20Z
--- language: zh datasets: CLUECorpusSmall widget: - text: "北京是[MASK]国的首都。" --- # Chinese Whole Word Masking RoBERTa Miniatures ## Model description This is the set of 6 Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658). [Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 6 Chinese Whole Word Masking RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and word segmentation tool, and provided all training details. You can download the 6 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below: | | Link | | -------- | :-----------------------: | | **Tiny** | [**2/128 (Tiny)**][2_128] | | **Mini** | [**4/256 (Mini)**][4_256] | | **Small** | [**4/512 (Small)**][4_512] | | **Medium** | [**8/512 (Medium)**][8_512] | | **Base** | [**12/768 (Base)**][12_768] | | **Large** | [**24/1024 (Large)**][24_1024] | Here are scores on the devlopment set of six Chinese tasks: | Model | Score | book_review | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) | | ------------------ | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: | | RoBERTa-Tiny-WWM | 72.2 | 83.6 | 91.8 | 81.8 | 62.1 | 55.4 | 58.6 | | RoBERTa-Mini-WWM | 76.3 | 86.2 | 93.0 | 86.8 | 64.4 | 58.7 | 68.8 | | RoBERTa-Small-WWM | 77.6 | 88.1 | 93.8 | 87.2 | 65.2 | 59.6 | 71.4 | | RoBERTa-Medium-WWM | 78.6 | 89.5 | 94.4 | 88.8 | 66.0 | 59.9 | 73.2 | | RoBERTa-Base-WWM | 80.2 | 90.3 | 95.8 | 89.4 | 67.5 | 61.8 | 76.2 | | RoBERTa-Large-WWM | 81.1 | 91.3 | 95.8 | 90.0 | 68.5 | 62.1 | 79.1 | For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128: - epochs: 3, 5, 8 - batch sizes: 32, 64 - learning rates: 3e-5, 1e-4, 3e-4 ## How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='uer/roberta-tiny-wwm-chinese-cluecorpussmall') >>> unmasker("北京是[MASK]国的首都。") [ {'score': 0.294228732585907, 'token': 704, 'token_str': '中', 'sequence': '北 京 是 中 国 的 首 都 。'}, {'score': 0.19691626727581024, 'token': 1266, 'token_str': '北', 'sequence': '北 京 是 北 国 的 首 都 。'}, {'score': 0.1070084273815155, 'token': 7506, 'token_str': '韩', 'sequence': '北 京 是 韩 国 的 首 都 。'}, {'score': 0.031527262181043625, 'token': 2769, 'token_str': '我', 'sequence': '北 京 是 我 国 的 首 都 。'}, {'score': 0.023054633289575577, 'token': 1298, 'token_str': '南', 'sequence': '北 京 是 南 国 的 首 都 。'} ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall') model = BertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall') model = TFBertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='tf') output = 
model(encoded_input) ``` ## Training data [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. ## Training procedure Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes. [jieba](https://github.com/fxsjy/jieba) is used as word segmentation tool. Taking the case of Whole Word Masking RoBERTa-Medium Stage1: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq128_dataset.pt \ --processes_num 32 --seq_length 128 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \ --learning_rate 1e-4 --batch_size 64 \ --whole_word_masking \ --data_processor mlm --target mlm ``` Stage2: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq512_dataset.pt \ --processes_num 32 --seq_length 512 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --pretrained_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin-1000000 \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \ --learning_rate 5e-5 --batch_size 16 \ --whole_word_masking \ --data_processor mlm --target mlm ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \ --output_model_path pytorch_model.bin \ --layers_num 8 --type mlm ``` ### BibTeX entry and citation info ``` @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } ``` [2_128]:https://huggingface.co/uer/roberta-tiny-wwm-chinese-cluecorpussmall [4_256]:https://huggingface.co/uer/roberta-mini-wwm-chinese-cluecorpussmall [4_512]:https://huggingface.co/uer/roberta-small-wwm-chinese-cluecorpussmall [8_512]:https://huggingface.co/uer/roberta-medium-wwm-chinese-cluecorpussmall [12_768]:https://huggingface.co/uer/roberta-base-wwm-chinese-cluecorpussmall [24_1024]:https://huggingface.co/uer/roberta-large-wwm-chinese-cluecorpussmall
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
75
null
--- language: zh datasets: CLUECorpusSmall widget: - text: "北京是[MASK]国的首都。" --- # Chinese Whole Word Masking RoBERTa Miniatures ## Model description This is the set of 6 Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658). [Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 6 Chinese Whole Word Masking RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and word segmentation tool, and provided all training details. You can download the 6 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below: | | Link | | -------- | :-----------------------: | | **Tiny** | [**2/128 (Tiny)**][2_128] | | **Mini** | [**4/256 (Mini)**][4_256] | | **Small** | [**4/512 (Small)**][4_512] | | **Medium** | [**8/512 (Medium)**][8_512] | | **Base** | [**12/768 (Base)**][12_768] | | **Large** | [**24/1024 (Large)**][24_1024] | Here are scores on the devlopment set of six Chinese tasks: | Model | Score | book_review | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) | | ------------------ | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: | | RoBERTa-Tiny-WWM | 72.2 | 83.6 | 91.8 | 81.8 | 62.1 | 55.4 | 58.6 | | RoBERTa-Mini-WWM | 76.3 | 86.2 | 93.0 | 86.8 | 64.4 | 58.7 | 68.8 | | RoBERTa-Small-WWM | 77.6 | 88.1 | 93.8 | 87.2 | 65.2 | 59.6 | 71.4 | | RoBERTa-Medium-WWM | 78.6 | 89.5 | 94.4 | 88.8 | 66.0 | 59.9 | 73.2 | | RoBERTa-Base-WWM | 80.2 | 90.3 | 95.8 | 89.4 | 67.5 | 61.8 | 76.2 | | RoBERTa-Large-WWM | 81.1 | 91.3 | 95.8 | 90.0 | 68.5 | 62.1 | 79.1 | For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128: - epochs: 3, 5, 8 - batch sizes: 32, 64 - learning rates: 3e-5, 1e-4, 3e-4 ## How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='uer/roberta-tiny-wwm-chinese-cluecorpussmall') >>> unmasker("北京是[MASK]国的首都。") [ {'score': 0.294228732585907, 'token': 704, 'token_str': '中', 'sequence': '北 京 是 中 国 的 首 都 。'}, {'score': 0.19691626727581024, 'token': 1266, 'token_str': '北', 'sequence': '北 京 是 北 国 的 首 都 。'}, {'score': 0.1070084273815155, 'token': 7506, 'token_str': '韩', 'sequence': '北 京 是 韩 国 的 首 都 。'}, {'score': 0.031527262181043625, 'token': 2769, 'token_str': '我', 'sequence': '北 京 是 我 国 的 首 都 。'}, {'score': 0.023054633289575577, 'token': 1298, 'token_str': '南', 'sequence': '北 京 是 南 国 的 首 都 。'} ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall') model = BertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall') model = TFBertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='tf') output = 
model(encoded_input) ``` ## Training data [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. ## Training procedure Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes. [jieba](https://github.com/fxsjy/jieba) is used as word segmentation tool. Taking the case of Whole Word Masking RoBERTa-Medium Stage1: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq128_dataset.pt \ --processes_num 32 --seq_length 128 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_word_seq128_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \ --learning_rate 1e-4 --batch_size 64 \ --whole_word_masking \ --data_processor mlm --target mlm ``` Stage2: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq512_dataset.pt \ --processes_num 32 --seq_length 512 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --pretrained_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin-1000000 \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \ --learning_rate 5e-5 --batch_size 16 \ --whole_word_masking \ --data_processor mlm --target mlm ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \ --output_model_path pytorch_model.bin \ --layers_num 8 --type mlm ``` ### BibTeX entry and citation info ``` @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } ``` [2_128]:https://huggingface.co/uer/roberta-tiny-wwm-chinese-cluecorpussmall [4_256]:https://huggingface.co/uer/roberta-mini-wwm-chinese-cluecorpussmall [4_512]:https://huggingface.co/uer/roberta-small-wwm-chinese-cluecorpussmall [8_512]:https://huggingface.co/uer/roberta-medium-wwm-chinese-cluecorpussmall [12_768]:https://huggingface.co/uer/roberta-base-wwm-chinese-cluecorpussmall [24_1024]:https://huggingface.co/uer/roberta-large-wwm-chinese-cluecorpussmall
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
71
null
--- language: zh datasets: CLUECorpusSmall widget: - text: "北京是[MASK]国的首都。" --- # Chinese Whole Word Masking RoBERTa Miniatures ## Model description This is the set of 6 Chinese Whole Word Masking RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658). [Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 6 Chinese Whole Word Masking RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and word segmentation tool, and provided all training details. You can download the 6 Chinese RoBERTa miniatures either from the [UER-py Github page](https://github.com/dbiir/UER-py/), or via HuggingFace from the links below: | | Link | | -------- | :-----------------------: | | **Tiny** | [**2/128 (Tiny)**][2_128] | | **Mini** | [**4/256 (Mini)**][4_256] | | **Small** | [**4/512 (Small)**][4_512] | | **Medium** | [**8/512 (Medium)**][8_512] | | **Base** | [**12/768 (Base)**][12_768] | | **Large** | [**24/1024 (Large)**][24_1024] | Here are scores on the devlopment set of six Chinese tasks: | Model | Score | book_review | chnsenticorp | lcqmc | tnews(CLUE) | iflytek(CLUE) | ocnli(CLUE) | | ------------------ | :---: | :----: | :----------: | :---: | :---------: | :-----------: | :---------: | | RoBERTa-Tiny-WWM | 72.2 | 83.6 | 91.8 | 81.8 | 62.1 | 55.4 | 58.6 | | RoBERTa-Mini-WWM | 76.3 | 86.2 | 93.0 | 86.8 | 64.4 | 58.7 | 68.8 | | RoBERTa-Small-WWM | 77.6 | 88.1 | 93.8 | 87.2 | 65.2 | 59.6 | 71.4 | | RoBERTa-Medium-WWM | 78.6 | 89.5 | 94.4 | 88.8 | 66.0 | 59.9 | 73.2 | | RoBERTa-Base-WWM | 80.2 | 90.3 | 95.8 | 89.4 | 67.5 | 61.8 | 76.2 | | RoBERTa-Large-WWM | 81.1 | 91.3 | 95.8 | 90.0 | 68.5 | 62.1 | 79.1 | For each task, we selected the best fine-tuning hyperparameters from the lists below, and trained with the sequence length of 128: - epochs: 3, 5, 8 - batch sizes: 32, 64 - learning rates: 3e-5, 1e-4, 3e-4 ## How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='uer/roberta-tiny-wwm-chinese-cluecorpussmall') >>> unmasker("北京是[MASK]国的首都。") [ {'score': 0.294228732585907, 'token': 704, 'token_str': '中', 'sequence': '北 京 是 中 国 的 首 都 。'}, {'score': 0.19691626727581024, 'token': 1266, 'token_str': '北', 'sequence': '北 京 是 北 国 的 首 都 。'}, {'score': 0.1070084273815155, 'token': 7506, 'token_str': '韩', 'sequence': '北 京 是 韩 国 的 首 都 。'}, {'score': 0.031527262181043625, 'token': 2769, 'token_str': '我', 'sequence': '北 京 是 我 国 的 首 都 。'}, {'score': 0.023054633289575577, 'token': 1298, 'token_str': '南', 'sequence': '北 京 是 南 国 的 首 都 。'} ] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall') model = BertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('uer/roberta-base-wwm-chinese-cluecorpussmall') model = TFBertModel.from_pretrained("uer/roberta-base-wwm-chinese-cluecorpussmall") text = "用你喜欢的任何文本替换我。" encoded_input = tokenizer(text, return_tensors='tf') output = 
model(encoded_input) ``` ## Training data [CLUECorpusSmall](https://github.com/CLUEbenchmark/CLUECorpus2020/) is used as training data. ## Training procedure Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes. [jieba](https://github.com/fxsjy/jieba) is used as the word segmentation tool. Taking Whole Word Masking RoBERTa-Medium as an example: Stage1: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq128_dataset.pt \ --processes_num 32 --seq_length 128 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_seq128_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 1000000 --save_checkpoint_steps 100000 --report_steps 50000 \ --learning_rate 1e-4 --batch_size 64 \ --whole_word_masking \ --data_processor mlm --target mlm ``` Stage2: ``` python3 preprocess.py --corpus_path corpora/cluecorpussmall.txt \ --vocab_path models/google_zh_vocab.txt \ --dataset_path cluecorpussmall_seq512_dataset.pt \ --processes_num 32 --seq_length 512 \ --dynamic_masking --data_processor mlm ``` ``` python3 pretrain.py --dataset_path cluecorpussmall_seq512_dataset.pt \ --vocab_path models/google_zh_vocab.txt \ --pretrained_model_path models/cluecorpussmall_wwm_roberta_medium_seq128_model.bin-1000000 \ --config_path models/bert/medium_config.json \ --output_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \ --world_size 8 --gpu_ranks 0 1 2 3 4 5 6 7 \ --total_steps 250000 --save_checkpoint_steps 50000 --report_steps 10000 \ --learning_rate 5e-5 --batch_size 16 \ --whole_word_masking \ --data_processor mlm --target mlm ``` Finally, we convert the pre-trained model into Huggingface's format: ``` python3 scripts/convert_bert_from_uer_to_huggingface.py --input_model_path models/cluecorpussmall_wwm_roberta_medium_seq512_model.bin \ --output_model_path pytorch_model.bin \ --layers_num 8 --type mlm ``` ### BibTeX entry and citation info ``` @article{zhao2019uer, title={UER: An Open-Source Toolkit for Pre-training Models}, author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong}, journal={EMNLP-IJCNLP 2019}, pages={241}, year={2019} } ``` [2_128]:https://huggingface.co/uer/roberta-tiny-wwm-chinese-cluecorpussmall [4_256]:https://huggingface.co/uer/roberta-mini-wwm-chinese-cluecorpussmall [4_512]:https://huggingface.co/uer/roberta-small-wwm-chinese-cluecorpussmall [8_512]:https://huggingface.co/uer/roberta-medium-wwm-chinese-cluecorpussmall [12_768]:https://huggingface.co/uer/roberta-base-wwm-chinese-cluecorpussmall [24_1024]:https://huggingface.co/uer/roberta-large-wwm-chinese-cluecorpussmall
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-glf
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
2022-07-18T07:21:23Z
--- license: cc-by-sa-4.0 --- WORK IN PROGRESS AND NOT YET RELEASED! I'm still working out the optimum settings :)
CAMeL-Lab/bert-base-arabic-camelbert-msa-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
133
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.834 - name: F1 type: f1 value: 0.8171742650957551 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.5401 - Accuracy: 0.834 - F1: 0.8172 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 192 - eval_batch_size: 192 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 84 | 0.7993 | 0.74 | 0.6827 | | No log | 2.0 | 168 | 0.5401 | 0.834 | 0.8172 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
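As a usage illustration (the card leaves usage unspecified), below is a minimal inference sketch with the `text-classification` pipeline; the repository id is a placeholder, since the card does not state where this checkpoint is hosted.

```python
from transformers import pipeline

# "your-username/..." is a placeholder id; the card does not say where the
# fine-tuned checkpoint is published.
classifier = pipeline(
    "text-classification",
    model="your-username/distilbert-base-uncased-finetuned-emotion",
)

# The emotion dataset uses six labels (sadness, joy, love, anger, fear, surprise).
print(classifier("I am so happy you came to visit!"))
```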
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2022-07-18T07:29:28Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.923 - name: F1 type: f1 value: 0.9232089605669606 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2185 - Accuracy: 0.923 - F1: 0.9232 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8274 | 1.0 | 250 | 0.3172 | 0.907 | 0.9036 | | 0.2501 | 2.0 | 500 | 0.2185 | 0.923 | 0.9232 | ### Framework versions - Transformers 4.13.0 - Pytorch 1.12.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
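The card lists the training hyperparameters but not the script that used them. The sketch below shows one way they could map onto the `Trainer` API; the dataset preprocessing and output directory are assumptions, not part of the original card.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("emotion")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

def tokenize(batch):
    # Pad/truncate the texts so they can be batched.
    return tokenizer(batch["text"], padding=True, truncation=True)

encoded = dataset.map(tokenize, batched=True)

# Values taken from the hyperparameter list above.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-emotion",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=2,
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```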
CAMeL-Lab/bert-base-arabic-camelbert-msa
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,967
null
--- license: bigscience-bloom-rail-1.0 language: - ak - ar - as - bm - bn - ca - code - en - es - eu - fon - fr - gu - hi - id - ig - ki - kn - lg - ln - ml - mr - ne - nso - ny - or - pa - pt - rn - rw - sn - st - sw - ta - te - tn - ts - tum - tw - ur - vi - wo - xh - yo - zh - zhs - zht - zu pipeline_tag: text-generation --- # <span style="color:red"><b>WARNING:</b> This is an <b>intermediary checkpoint</b> and WIP project. It is not fully trained yet. You might want to use [Bloom-1B3](https://huggingface.co/bigscience/bloom-1b3) if you want a model that has completed training. This model is a distilled version of [Bloom-1B3](https://huggingface.co/bigscience/bloom-1b3) </span> <h1 style='text-align: center '>BLOOM LM</h1> <h2 style='text-align: center '><em>BigScience Large Open-science Open-access Multilingual Language Model</em> </h2> <h3 style='text-align: center '>Model Card</h3> <img src="https://s3.amazonaws.com/moonup/production/uploads/1657124309515-5f17f0a0925b9863e28ad517.png" alt="BigScience Logo" width="800" style="margin-left:'auto' margin-right:'auto' display:'block'"/> Version 1.0 / 18.Jul.2022 ## Table of Contents 1. [Model Details](#model-details) 2. [Uses](#uses) 3. [Training Data](#training-data) 4. [Risks and Limitations](#risks-and-limitations) 5. [Evaluation](#evaluation) 6. [Recommendations](#recommendations) 7. [Glossary and Calculations](#glossary-and-calculations) 8. [More Information](#more-information) 9. [Model Card Authors](#model-card-authors) ## Model Details ### Basics *This section provides information for anyone who wants to know about the model.* <details> <summary>Click to expand</summary> <br/> **Developed by:** BigScience ([website](https://bigscience.huggingface.co)) * All collaborators are either volunteers or have an agreement with their employer. *(Further breakdown of participants forthcoming.)* **Model Type:** Transformer-based Language Model **Version:** 1.0.0 **Languages:** Multiple; see [training data](#training-data) **License:** RAIL License v1.0 ([link](https://huggingface.co/spaces/bigscience/license)) **Release Date Estimate:** Monday, 11.July.2022 **Send Questions to:** [email protected] **Cite as:** BigScience, _BigScience Language Open-science Open-access Multilingual (BLOOM) Language Model_. International, May 2021-May 2022 **Funded by:** * The French government. * Hugging Face ([website](https://huggingface.co)). * Organizations of contributors. *(Further breakdown of organizations forthcoming.)* </details> ### Technical Specifications *This section provides information for people who work on model development.* <details> <summary>Click to expand</summary><br/> Please see [the BLOOM training README](https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml#readme) for full details on replicating training. 
**Model Architecture:** Modified from Megatron-LM GPT2 (see [paper](https://arxiv.org/abs/1909.08053), [BLOOM Megatron code](https://github.com/bigscience-workshop/Megatron-DeepSpeed)): * Decoder-only architecture * Layer normalization applied to word embeddings layer (`StableEmbedding`; see [code](https://github.com/facebookresearch/bitsandbytes), [paper](https://arxiv.org/pdf/2110.02861.pdf)) * ALiBI positional encodings (see [paper](https://arxiv.org/pdf/2108.12409.pdf)), with GeLU activation functions * 288 million parameters: * 12 layers, 8 attention heads * Hidden layers are 1024-dimensional * Sequence length of 2048 tokens used (see [BLOOM tokenizer](https://huggingface.co/bigscience/tokenizer), [tokenizer description](#tokenization)) **Objective Function:** Cross Entropy with mean reduction (see [API documentation](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss)). **Compute infrastructure:** Jean Zay Public Supercomputer, provided by the French government (see [announcement](https://www.enseignementsup-recherche.gouv.fr/fr/signature-du-marche-d-acquisition-de-l-un-des-supercalculateurs-les-plus-puissants-d-europe-46733)). * Hardware: 384 A100 80GB GPUs (48 nodes): * Additional 32 A100 80GB GPUs (4 nodes) in reserve * 8 GPUs per node Using NVLink 4 inter-gpu connects, 4 OmniPath links * CPU: AMD * CPU memory: 512GB per node * GPU memory: 640GB per node * Inter-node connect: Omni-Path Architecture (OPA) * NCCL-communications network: a fully dedicated subnet * Disc IO network: shared network with other types of nodes * Software: * Megatron-DeepSpeed ([Github link](https://github.com/bigscience-workshop/Megatron-DeepSpeed)) * DeepSpeed ([Github link](https://github.com/microsoft/DeepSpeed)) * PyTorch (pytorch-1.11 w/ CUDA-11.5; see [Github link](https://github.com/pytorch/pytorch)) * apex ([Github link](https://github.com/NVIDIA/apex)) #### **Training** _In progress._ Current training logs: [Tensorboard link](https://huggingface.co/tensorboard/bigscience/tr11-176B-ml-logs/) - Checkpoint size: - Bf16 weights: 329GB - Full checkpoint with optimizer states: 2.3TB - Training throughput: About 150 TFLOP per GPU per second - Number of epochs: 1 (*current target*) - Dates: - Started 11th March, 2022 11:42am PST - Estimated end: 5th July, 2022 - Estimated cost of training: Equivalent of $2-5M in cloud computing (including preliminary experiments) - Server training location: Île-de-France, France #### **Tokenization** The BLOOM tokenizer ([link](https://huggingface.co/bigscience/tokenizer)) is a learned subword tokenizer trained using: - A byte-level Byte Pair Encoding (BPE) algorithm - A simple pre-tokenization rule, no normalization - A vocabulary size of 250,680 It was trained on a subset of a preliminary version of the corpus using alpha-weighting per language. </details> ### Environmental Impact <details> <summary>Click to expand</summary><br/> The training supercomputer, Jean Zay ([website](http://www.idris.fr/eng/jean-zay/jean-zay-presentation-eng.html)), uses mostly nuclear energy. The heat generated by it is reused for heating campus housing. 
**Estimated carbon emissions:** *(Forthcoming upon completion of training.)* **Estimated electricity usage:** *(Forthcoming upon completion of training.)* </details> <p>&nbsp;</p> ## Uses *This section addresses questions around how the model is intended to be used, discusses the foreseeable users of the model (including those affected by the model), and describes uses that are considered out of scope or misuse of the model. It provides information for anyone considering using the model or who is affected by the model.* <details> <summary>Click to expand</summary><br/> ### Intended Use This model is being created in order to enable public research on large language models (LLMs). LLMs are intended to be used for language generation or as a pretrained base model that can be further fine-tuned for specific tasks. Use cases below are not exhaustive. #### **Direct Use** - Text generation - Exploring characteristics of language generated by a language model - Examples: Cloze tests, counterfactuals, generations with reframings #### **Downstream Use** - Tasks that leverage language models include: Information Extraction, Question Answering, Summarization ### Misuse and Out-of-scope Use *This section addresses what users ought not do with the model.* See the [BLOOM License](https://huggingface.co/spaces/bigscience/license), Attachment A, for detailed usage restrictions. The below list is non-exhaustive, but lists some easily foreseeable problematic use cases. #### **Out-of-scope Uses** Using the model in [high-stakes](#high-stakes) settings is out of scope for this model.  The model is not designed for [critical decisions](#critical-decisions) nor uses with any material consequences on an individual's livelihood or wellbeing. The model outputs content that appears factual but is not correct. ##### Out-of-scope Uses Include: - Usage in biomedical domains, political and legal domains, or finance domains - Usage for evaluating or scoring individuals, such as for employment, education, or credit - Applying the model for critical automatic decisions, generating factual content, creating reliable summaries, or generating predictions that must be correct #### **Misuse** Intentionally using the model for harm, violating [human rights](#human-rights), or other kinds of malicious activities, is a misuse of this model. This includes: - Spam generation - Disinformation and influence operations - Disparagement and defamation - Harassment and abuse - [Deception](#deception) - Unconsented impersonation and imitation - Unconsented surveillance - Generating content without attribution to the model, as specified in the [RAIL License, Use Restrictions](https://huggingface.co/spaces/bigscience/license) ### Intended Users #### **Direct Users** - General Public - Researchers - Students - Educators - Engineers/developers - Non-commercial entities - Community advocates, including human and civil rights groups #### Indirect Users - Users of derivatives created by Direct Users, such as those using software with an [intended use](#intended-use) - Users of [Derivatives of the Model, as described in the License](https://huggingface.co/spaces/bigscience/license) #### Others Affected (Parties Prenantes) - People and groups referred to by the LLM - People and groups exposed to outputs of, or decisions based on, the LLM - People and groups whose original work is included in the LLM </details> <p>&nbsp;</p> ## Training Data *This section provides a high-level overview of the training data. 
It is relevant for anyone who wants to know the basics of what the model is learning.* <details> <summary>Click to expand</summary><br/> Details for each dataset are provided in individual [Data Cards](https://huggingface.co/spaces/bigscience/BigScienceCorpus). Training data includes: - 45 natural languages - 12 programming languages - In 1.5TB of pre-processed text, converted into 350B unique tokens (see [the tokenizer section](#tokenization) for more.) #### **Languages** The pie chart shows the distribution of languages in training data. ![pie chart showing the distribution of languages in training data](https://github.com/bigscience-workshop/model_card/blob/main/assets/data/pie_chart.svg?raw=true) The following table shows the further distribution of Niger-Congo and Indic languages in the training data. <details> <summary>Click to expand</summary><br/> | Niger Congo | Percentage | | Indic | Percentage | |----------------|------------ |------ |-----------|------------| | Chi Tumbuka | 0.00002 | | Assamese | 0.01 | | Kikuyu | 0.00004 | | Odia | 0.04 | | Bambara | 0.00004 | | Gujarati | 0.04 | | Akan | 0.00007 | | Marathi | 0.05 | | Xitsonga | 0.00007 | | Punjabi | 0.05 | | Sesotho | 0.00007 | | Kannada | 0.06 | | Chi Chewa | 0.0001 | | Nepali | 0.07 | | Setswana | 0.0002 | | Telugu | 0.09 | | Northern Sotho | 0.0002 | | Malayalam | 0.10 | | Fon | 0.0002 | | Urdu | 0.10 | | Kirundi | 0.0003 | | Tamil | 0.20 | | Wolof | 0.0004 | | Bengali | 0.50 | | Kuganda | 0.0004 | | Hindi | 0.70 | | Chi Shona | 0.001 | | Isi Zulu | 0.001 | | Igbo | 0.001 | | Xhosa | 0.001 | | Kinyarwanda | 0.003 | | Yoruba | 0.006 | | Swahili | 0.02 | </details> The following table shows the distribution of programming languages. <details> <summary>Click to expand</summary><br/> | Extension | Language | Number of files | |----------------|------------|-----------------| | java | Java | 5,407,724 | | php | PHP | 4,942,186 | | cpp | C++ | 2,503,930 | | py | Python | 2,435,072 | | js | JavaScript | 1,905,518 | | cs | C# | 1,577,347 | | rb | Ruby | 6,78,413 | | cc | C++ | 443,054 | | hpp | C++ | 391,048 | | lua | Lua | 352,317 | | go | GO | 227,763 | | ts | TypeScript | 195,254 | | C | C | 134,537 | | scala | Scala | 92,052 | | hh | C++ | 67,161 | | H | C++ | 55,899 | | tsx | TypeScript | 33,107 | | rs | Rust | 29,693 | | phpt | PHP | 9,702 | | c++ | C++ | 1,342 | | h++ | C++ | 791 | | php3 | PHP | 540 | | phps | PHP | 270 | | php5 | PHP | 166 | | php4 | PHP | 29 | </details> </details> <p>&nbsp;</p> ## Risks and Limitations *This section identifies foreseeable harms and misunderstandings.* <details> <summary>Click to expand</summary><br/> Model may: - Overrepresent some viewpoints and underrepresent others - Contain stereotypes - Contain [personal information](#personal-data-and-information) - Generate: - Hateful, abusive, or violent language - Discriminatory or prejudicial language - Content that may not be appropriate for all settings, including sexual content - Make errors, including producing incorrect information as if it were factual - Generate irrelevant or repetitive outputs </details> <p>&nbsp;</p> ## Evaluation *This section describes the evaluation protocols and provides the results.* <details> <summary>Click to expand</summary><br/> ### Metrics *This section describes the different ways performance is calculated and why.* Includes: | Metric | Why chosen | |--------------------|--------------------------------------------------------------------| | [Perplexity](#perplexity) | Standard metric for quantifying 
model improvements during training | | Cross Entropy [Loss](#loss) | Standard objective for language models. | And multiple different metrics for specific tasks. _(More evaluation metrics forthcoming upon completion of evaluation protocol.)_ ### Factors *This section lists some different aspects of what BLOOM models. Its focus is on those aspects that are likely to give rise to high variance in model behavior.* - Language, such as English or Yoruba - Domain, such as newswire or stories - Demographic characteristics, such as gender or nationality ### Results *Results are based on the [Factors](#factors) and [Metrics](#metrics).* **Train-time Evaluation:** As of 25.May.2022, 15:00 PST: - Training Loss: 2.0 - Validation Loss: 2.2 - Perplexity: 8.9 (More evaluation scores forthcoming at the end of model training.) </details> <p>&nbsp;</p> ## Recommendations *This section provides information on warnings and potential mitigations.* <details> <summary>Click to expand</summary><br/> - Indirect users should be made aware when the content they're working with is created by the LLM. - Users should be aware of [Risks and Limitations](#risks-and-limitations), and include an appropriate age disclaimer or blocking interface as necessary. - Models pretrained with the LLM should include an updated Model Card. - Users of the model should provide mechanisms for those affected to provide feedback, such as an email address for comments. </details> <p>&nbsp;</p> ## Glossary and Calculations *This section defines common terms and how metrics are calculated.* <details> <summary>Click to expand</summary><br/> - <a name="loss">**Loss:**</a> A calculation of the difference between what the model has learned and what the data shows ("groundtruth"). The lower the loss, the better. The training process aims to minimize the loss. - <a name="perplexity">**Perplexity:**</a> This is based on what the model estimates the probability of new data is. The lower the perplexity, the better. If the model is 100% correct at predicting the next token it will see, then the perplexity is 1. Mathematically this is calculated using entropy. - <a name="high-stakes">**High-stakes settings:**</a> Such as those identified as "high-risk AI systems" and "unacceptable risk AI systems" in the European Union's proposed [Artificial Intelligence (AI) Act](https://artificialintelligenceact.eu/annexes/). - <a name="critical-decisions">**Critical decisions:**</a> Such as those defined in [the United States' proposed Algorithmic Accountability Act](https://www.congress.gov/117/bills/s3572/BILLS-117s3572is.pdf). - <a name="human-rights">**Human rights:**</a> Includes those rights defined in the [Universal Declaration of Human Rights](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf). - <a name="personal-data-and-information">**Personal Data and Personal Information:**</a> Personal data and information is defined in multiple data protection regulations, such as "[personal data](https://gdpr-info.eu/issues/personal-data/)" in the [European Union's General Data Protection Regulation](https://gdpr-info.eu); and "personal information" in the Republic of South Africa's [Protection of Personal Information Act](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf), The People's Republic of China's [Personal information protection law](http://en.npc.gov.cn.cdurl.cn/2021-12/29/c_694559.htm). 
- <a name="sensitive-characteristics">**Sensitive characteristics:**</a> This includes specifically protected categories in human rights (see [UHDR, Article 2](https://www.un.org/sites/un2.un.org/files/2021/03/udhr.pdf)) and personal information regulation (see GDPR, [Article 9; Protection of Personal Information Act, Chapter 1](https://www.gov.za/sites/default/files/gcis_document/201409/3706726-11act4of2013popi.pdf)) - <a name="deception">**Deception:**</a> Doing something to intentionally mislead individuals to believe something that is false, such as by creating deadbots or chatbots on social media posing as real people, or generating text documents without making consumers aware that the text is machine generated. </details> <p>&nbsp;</p> ## More Information <details> <summary>Click to expand</summary><br/> ### Dataset Creation Blog post detailing the design choices during the dataset creation: https://bigscience.huggingface.co/blog/building-a-tb-scale-multilingual-dataset-for-language-modeling ### Technical Specifications Blog post summarizing how the architecture, size, shape, and pre-training duration where selected: https://bigscience.huggingface.co/blog/what-language-model-to-train-if-you-have-two-million-gpu-hours More details on the architecture/optimizer: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Blog post on the hardware/engineering side: https://bigscience.huggingface.co/blog/which-hardware-to-train-a-176b-parameters-model Details on the distributed setup used for the training: https://github.com/bigscience-workshop/bigscience/tree/master/train/tr11-176B-ml Tensorboard updated during the training: https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard#scalars&tagFilter=loss Insights on how to approach training, negative results: https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md Details on the obstacles overcome during the preparation on the engineering side (instabilities, optimization of training throughput, so many technical tricks and questions): https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md ### Initial Results Initial prompting experiments using interim checkpoints: https://huggingface.co/spaces/bigscience/bloom-book </details> <p>&nbsp;</p> ## Model Card Authors *Ordered roughly chronologically and by amount of time spent.* Margaret Mitchell, Giada Pistilli, Yacine Jernite, Ezinwanne Ozoani, Marissa Gerchick, Nazneen Rajani, Sasha Luccioni, Irene Solaiman, Maraim Masoud, Somaieh Nikpoor, Carlos Muñoz Ferrandis, Stas Bekman, Christopher Akiki, Danish Contractor, David Lansky, Angelina McMillan-Major, Tristan Thrush, Suzana Ilić, Gérard Dupont, Shayne Longpre, Manan Dey, Stella Biderman, Douwe Kiela, Emi Baylor, Teven Le Scao, Aaron Gokaslan, Julien Launay
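Because this is an intermediate checkpoint, the card does not yet include a usage section; the snippet below is a minimal generation sketch with the standard `transformers` causal-LM API. The repository id is a placeholder to be replaced with this model's actual name once released.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "<this-model-repo-id>"  # placeholder; substitute the actual repository name

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

inputs = tokenizer("The BigScience workshop is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```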
CAUKiel/JavaBERT-uncased
[ "pytorch", "safetensors", "bert", "fill-mask", "java", "code", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - generated_from_trainer metrics: - sacrebleu model-index: - name: t5-pegasus-finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-pegasus-finetuned This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 4.8469 - Sacrebleu: 1.6544 - Rouge 1: 0.0646 - Rouge 2: 0.0076 - Rouge L: 0.0629 - Bleu 1: 0.2040 - Bleu 2: 0.0959 - Bleu 3: 0.0484 - Bleu 4: 0.0287 - Meteor: 0.0872 - Gen Len: 16.0997 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 512 - eval_batch_size: 256 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
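A note on the batch-size settings above: the effective batch of 1024 comes from a per-device batch of 512 accumulated over 2 steps. A sketch of how that could be expressed with `Seq2SeqTrainingArguments` (the output directory is an assumption):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="t5-pegasus-finetuned",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=512,
    per_device_eval_batch_size=256,
    gradient_accumulation_steps=2,      # 512 * 2 = 1024 effective train batch size
    num_train_epochs=100,
    fp16=True,                          # "Native AMP" mixed precision
    seed=42,
)
```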
CBreit00/DialoGPT_small_Rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - consumer-finance-complaints model-index: - name: distilbert-base-uncased-wandb-week-3-complaints-classifier-1500 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-wandb-week-3-complaints-classifier-1500 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the consumer-finance-complaints dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1500 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
CLAck/indo-mixed
[ "pytorch", "marian", "text2text-generation", "en", "id", "dataset:ALT", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
2022-07-18T08:57:44Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: housing-categories results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.875 --- # housing-categories Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### caravan ![caravan](images/caravan.jpg) #### castle ![castle](images/castle.jpg) #### farm ![farm](images/farm.jpg) #### tree house ![tree house](images/tree_house.jpg) #### yurt ![yurt](images/yurt.jpg)
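To query the classifier, a minimal sketch with the `image-classification` pipeline; the repository id is a placeholder for wherever HuggingPics pushed your model, and the image path is one of the example images above.

```python
from transformers import pipeline

# "your-username/housing-categories" is a placeholder Hub id.
classifier = pipeline("image-classification", model="your-username/housing-categories")

# Accepts a local path or URL to an image.
print(classifier("images/yurt.jpg"))
```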
CLAck/indo-pure
[ "pytorch", "marian", "text2text-generation", "en", "id", "dataset:ALT", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 --- This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/). The model identifies the toponyms' spans in text and predicts their location types. The location type can be coarse-grained (e.g., country, city, etc.) and fine-grained (e.g., street, POI, etc.) The model is trained using the training splits of all events from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the `Type-based` LMR mode and using the `Random` version of the data. You can download this data in `BILOU` format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/). More details are available [here](https://github.com/rsuwaileh/IDRISI/tree/main/models). * Different variants of the model are available through HuggingFace: - [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/) - [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/) - [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/) * Arabic models are also available: - [rsuwaileh/IDRISI-LMR-AR-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typeless/) - [rsuwaileh/IDRISI-LMR-AR-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typebased/) - [rsuwaileh/IDRISI-LMR-AR-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typeless/) - [rsuwaileh/IDRISI-LMR-AR-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typebased/) To cite the models: ``` @article{suwaileh2022tlLMR4disaster, title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets}, author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan}, journal={International Journal of Disaster Risk Reduction}, year={2022} } @inproceedings{suwaileh2020tlLMR4disaster, title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets}, author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan}, booktitle={Proceedings of the 28th International Conference on Computational Linguistics}, pages={6252--6263}, year={2020} } ``` To cite the IDRISI-R dataset: ``` @article{rsuwaileh2022Idrisi-r, title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter}, author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad}, journal={...}, volume={...}, pages={...}, year={2022}, publisher={...} } ```
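For illustration, the sketch below tags locations with the `token-classification` pipeline, using one of the variant ids listed above (the example sentence is made up):

```python
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="rsuwaileh/IDRISI-LMR-EN-random-typeless",
    aggregation_strategy="simple",  # merge word pieces into full location spans
)

print(tagger("Flooding reported in Houston and parts of Harris County, Texas."))
```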
CLS/WubiBERT_models
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-18T09:03:47Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: TestZee/t5-small-finetuned-kaggle-data-t5-v2.0 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # TestZee/t5-small-finetuned-kaggle-data-t5-v2.0 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.8648 - Validation Loss: 1.7172 - Train Rouge1: 24.1639 - Train Rouge2: 13.1314 - Train Rougel: 20.8170 - Train Rougelsum: 22.3549 - Train Gen Len: 19.0 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Train Gen Len | Epoch | |:----------:|:---------------:|:------------:|:------------:|:------------:|:---------------:|:-------------:|:-----:| | 2.1107 | 1.8251 | 23.3472 | 12.6240 | 20.0726 | 21.6297 | 19.0 | 0 | | 1.9759 | 1.7755 | 23.8048 | 13.0368 | 20.4876 | 22.1022 | 19.0 | 1 | | 1.9275 | 1.7466 | 23.9713 | 13.1351 | 20.5833 | 22.1610 | 19.0 | 2 | | 1.8931 | 1.7291 | 24.0628 | 13.1243 | 20.7427 | 22.2890 | 19.0 | 3 | | 1.8648 | 1.7172 | 24.1639 | 13.1314 | 20.8170 | 22.3549 | 19.0 | 4 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
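The card does not describe the downstream task, but since this is a Keras seq2seq checkpoint, a minimal generation sketch with the TensorFlow classes might look like the following (the repository id is taken from the card title and assumed to be public):

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

checkpoint = "TestZee/t5-small-finetuned-kaggle-data-t5-v2.0"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = TFAutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer("Replace me with the text you want to condense.", return_tensors="tf")
outputs = model.generate(inputs["input_ids"], max_length=20)  # Gen Len above is ~19
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```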
CLTL/gm-ner-xlmrbase
[ "pytorch", "tf", "xlm-roberta", "token-classification", "nl", "transformers", "dighum", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-07-18T09:06:05Z
--- tags: - generated_from_trainer datasets: - common_voice model-index: - name: CV_bn_trained_on_Validation results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # CV_bn_trained_on_Validation This model is a fine-tuned version of [./content/CV_bn_trained_on_Validation/checkpoint-6000](https://huggingface.co/./content/CV_bn_trained_on_Validation/checkpoint-6000) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.2181 - Wer: 0.3385 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1104 | 1.25 | 2000 | 0.2535 | 0.3705 | | 1.1069 | 2.49 | 4000 | 0.2369 | 0.3545 | | 1.0344 | 3.74 | 6000 | 0.2181 | 0.3385 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
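Once the checkpoint is published, transcription could be run with the `automatic-speech-recognition` pipeline; the repository id below is a placeholder, since the card only references a local checkpoint path.

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="<this-model-repo-id>")  # placeholder id

# Pass a path to a local audio clip; the pipeline handles decoding and resampling.
print(asr("bengali_sample.wav"))
```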
CLTL/icf-levels-att
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- license: apache-2.0 --- This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/). The model identifies the toponyms' spans in the text without predicting their location types. The model is trained using the training splits of all events from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the `Type-less` LMR mode and using the `Random` version of the data. You can download this data in `BILOU` format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/). All Location types in the data were normalized to the `LOC` tag. More details about the models are available [here](https://github.com/rsuwaileh/IDRISI/tree/main/models). * Different variants of the model are available through HuggingFace: - [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/) - [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/) - [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/) * Arabic models are also available: - [rsuwaileh/IDRISI-LMR-AR-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typeless/) - [rsuwaileh/IDRISI-LMR-AR-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typebased/) - [rsuwaileh/IDRISI-LMR-AR-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typeless/) - [rsuwaileh/IDRISI-LMR-AR-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typebased/) To cite the models: ``` @article{suwaileh2022tlLMR4disaster, title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets}, author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan}, journal={International Journal of Disaster Risk Reduction}, year={2022} } @inproceedings{suwaileh2020tlLMR4disaster, title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets}, author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan}, booktitle={Proceedings of the 28th International Conference on Computational Linguistics}, pages={6252--6263}, year={2020} } ``` To cite the IDRISI-R dataset: ``` @article{rsuwaileh2022Idrisi-r, title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter}, author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad}, journal={...}, volume={...}, pages={...}, year={2022}, publisher={...} } ```
CLTL/icf-levels-ber
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- license: apache-2.0 --- This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/). The model identifies the toponyms' spans in the text without predicting their location types. The model is trained using the training splits of all events from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the `Type-less` LMR mode and using the `Time-based` version of the data. You can download this data in `BILOU` format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-timebased-bilou/). All Location types in the data were normalized to the `LOC` tag. More details about the models are available [here](https://github.com/rsuwaileh/IDRISI/tree/main/models). * Different variants of the model are available through HuggingFace: - [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/) - [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/) - [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/) * Arabic models are also available: - [rsuwaileh/IDRISI-LMR-AR-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typeless/) - [rsuwaileh/IDRISI-LMR-AR-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typebased/) - [rsuwaileh/IDRISI-LMR-AR-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typeless/) - [rsuwaileh/IDRISI-LMR-AR-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typebased/) To cite the models: ``` @article{suwaileh2022tlLMR4disaster, title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets}, author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan}, journal={International Journal of Disaster Risk Reduction}, year={2022} } @inproceedings{suwaileh2020tlLMR4disaster, title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets}, author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan}, booktitle={Proceedings of the 28th International Conference on Computational Linguistics}, pages={6252--6263}, year={2020} } ``` To cite the IDRISI-R dataset: ``` @article{rsuwaileh2022Idrisi-r, title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter}, author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad}, journal={...}, volume={...}, pages={...}, year={2022}, publisher={...} } ```
CLTL/icf-levels-enr
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
--- license: apache-2.0 --- This model is a BERT-based Location Mention Recognition model that is adopted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/). The model identifies the toponyms' spans in the text and predicts their location types. The location type can be coarse-grained (e.g., country, city, etc.) and fine-grained (e.g., street, POI, etc.). The model is trained using the training splits of all events from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the `Type-based` LMR mode and using the `Time-based` version of the data. You can download this data in `BILOU` format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-timebased-bilou/). More details about the models are available [here](https://github.com/rsuwaileh/IDRISI/tree/main/models). * Different variants of the model are available through HuggingFace: - [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/) - [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/) - [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/) * Arabic models are also available: - [rsuwaileh/IDRISI-LMR-AR-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typeless/) - [rsuwaileh/IDRISI-LMR-AR-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typebased/) - [rsuwaileh/IDRISI-LMR-AR-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typeless/) - [rsuwaileh/IDRISI-LMR-AR-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typebased/) To cite the models: ``` @article{suwaileh2022tlLMR4disaster, title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets}, author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan}, journal={International Journal of Disaster Risk Reduction}, year={2022} } @inproceedings{suwaileh2020tlLMR4disaster, title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets}, author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan}, booktitle={Proceedings of the 28th International Conference on Computational Linguistics}, pages={6252--6263}, year={2020} } ``` To cite the IDRISI-R dataset: ``` @article{rsuwaileh2022Idrisi-r, title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter}, author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad}, journal={...}, volume={...}, pages={...}, year={2022}, publisher={...} } ```
CLTL/icf-levels-etn
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
# Neural Language Models for Nineteenth-Century English: bert_1760_1900 ## Introduction BERT model trained on a large historical dataset of books in English, published between 1760-1900 and comprised of ~5.1 billion tokens. - Data paper: http://doi.org/10.5334/johd.48 - Github repository: https://github.com/Living-with-machines/histLM ## License The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode. ## Funding Statement This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1). ## Dataset creators Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
CLTL/icf-levels-fac
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
2022-07-18T09:31:40Z
# Neural Language Models for Nineteenth-Century English: bert_1850_1875 ## Introduction BERT model trained on a large historical dataset of books in English, published between 1850-1875 and comprised of ~1.3 billion tokens. - Data paper: http://doi.org/10.5334/johd.48 - Github repository: https://github.com/Living-with-machines/histLM ## License The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode. ## Funding Statement This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1). ## Dataset creators Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
CLTL/icf-levels-mbw
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
null
# Neural Language Models for Nineteenth-Century English: bert_1875_1890 ## Introduction BERT model trained on a large historical dataset of books in English, published between 1875-1890 and comprised of ~1.3 billion tokens. - Data paper: http://doi.org/10.5334/johd.48 - Github repository: https://github.com/Living-with-machines/histLM ## License The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode. ## Funding Statement This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1). ## Dataset creators Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
CLTL/icf-levels-stm
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
# Neural Language Models for Nineteenth-Century English: bert_1890_1900 ## Introduction BERT model trained on a large historical dataset of books in English, published between 1890-1900 and comprised of ~1.1 billion tokens. - Data paper: http://doi.org/10.5334/johd.48 - Github repository: https://github.com/Living-with-machines/histLM ## License The models are released under open license CC BY 4.0, available at https://creativecommons.org/licenses/by/4.0/legalcode. ## Funding Statement This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1). ## Dataset creators Kasra Hosseini, Kaspar Beelen and Mariona Coll Ardanuy (The Alan Turing Institute) preprocessed the text, created a database, trained and fine-tuned language models as described in the accompanying paper. Giovanni Colavizza (University of Amsterdam), David Beavan (The Alan Turing Institute) and James Hetherington (University College London) helped with planning, accessing the datasets and designing the experiments.
CM-CA/Cartman
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: afl-3.0 --- DistilBERT Binary Text Classifier This DistilBERT-based text classification model, trained on the IMDB dataset, performs binary sentiment classification on any given sentence. The model has been fine-tuned for better results in a manageable time frame. LABEL0 - Negative LABEL1 - Positive
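A usage sketch, assuming the standard `transformers` text-classification pipeline; the repository id is not stated in the card, so `MODEL_ID` below is a placeholder and the commented output is only illustrative:

```python
from transformers import pipeline

# MODEL_ID is a placeholder for this repository's actual name.
classifier = pipeline("text-classification", model="MODEL_ID")
print(classifier("The plot was thin, but the performances saved the film."))
# e.g. [{'label': 'LABEL1', 'score': 0.9}]  -> LABEL1 = Positive (illustrative)
```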
CM-CA/DialoGPT-small-cartman
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4214 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.1814 | 1.0 | 8235 | 1.2488 | | 0.9078 | 2.0 | 16470 | 1.3127 | | 0.7439 | 3.0 | 24705 | 1.4214 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
CNT-UPenn/RoBERTa_for_seizureFrequency_QA
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: dpovedano/distilbert-base-uncased-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dpovedano/distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0285 - Validation Loss: 0.0612 - Train Precision: 0.9222 - Train Recall: 0.9358 - Train F1: 0.9289 - Train Accuracy: 0.9834 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2631, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.0289 | 0.0612 | 0.9222 | 0.9358 | 0.9289 | 0.9834 | 0 | | 0.0284 | 0.0612 | 0.9222 | 0.9358 | 0.9289 | 0.9834 | 1 | | 0.0285 | 0.0612 | 0.9222 | 0.9358 | 0.9289 | 0.9834 | 2 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
Caddy/UD
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-cased_conll2003-CRF-first-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-cased_conll2003-CRF-first-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0546 - Precision: 0.6483 - Recall: 0.3940 - F1: 0.4902 - Accuracy: 0.9225 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.089 | 1.0 | 7021 | 0.0522 | 0.6315 | 0.3781 | 0.4730 | 0.9193 | | 0.0203 | 2.0 | 14042 | 0.0481 | 0.6587 | 0.4044 | 0.5011 | 0.9233 | | 0.0166 | 3.0 | 21063 | 0.0546 | 0.6483 | 0.3940 | 0.4902 | 0.9225 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.2.2 - Tokenizers 0.12.1
Calamarii/calamari
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: other tags: - generated_from_trainer model-index: - name: opt-350m-finetuned-stack results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opt-350m-finetuned-stack This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
CallumRai/HansardGPT2
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 246.73 +/- 23.48 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Cameron/BERT-Jigsaw
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
35
2022-07-18T11:20:24Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: hf-model-0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # hf-model-0 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7158 - Accuracy: 0.45 - F1: 0.45 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:----:| | 0.6107 | 1.0 | 12 | 0.7134 | 0.45 | 0.45 | | 0.5364 | 2.0 | 24 | 0.7158 | 0.45 | 0.45 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Cameron/BERT-jigsaw-severetoxic
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
2021-10-23T15:00:31Z
--- tags: - text-classification library_name: generic --- # Text Classification repository template This is a template repository for Text Classification to support generic inference with Hugging Face Hub generic Inference API. There are two required steps: 1. Specify the requirements by defining a `requirements.txt` file. 2. Implement the `pipeline.py` `__init__` and `__call__` methods. These methods are called by the Inference API. The `__init__` method should load the model and preload all the elements needed for inference (model, processors, tokenizers, etc.). This is only called once. The `__call__` method performs the actual inference. Make sure to follow the same input/output specifications defined in the template for the pipeline to work. ## How to start First create a repo in https://hf.co/new. Then clone this template and push it to your repo. ``` git clone https://huggingface.co/templates/text-classification cd text-classification git remote set-url origin https://huggingface.co/$YOUR_USER/$YOUR_REPO_NAME git push --force ```
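As a concrete illustration of the two methods described above, here is a sketch of what a `pipeline.py` backed by an ordinary `transformers` checkpoint might look like; the class name, signatures, and return format should be copied from the template repository itself and are assumptions here:

```python
# pipeline.py -- sketch under the assumptions stated above, not the canonical template.
from typing import Dict, List

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


class PreTrainedPipeline:
    def __init__(self, path: str = ""):
        # Called once at startup: preload the model, tokenizer, and anything else needed.
        self.tokenizer = AutoTokenizer.from_pretrained(path)
        self.model = AutoModelForSequenceClassification.from_pretrained(path)
        self.model.eval()

    def __call__(self, inputs: str) -> List[List[Dict[str, float]]]:
        # Called for every request: run inference and return label/score pairs.
        encoded = self.tokenizer(inputs, return_tensors="pt", truncation=True)
        with torch.no_grad():
            scores = torch.softmax(self.model(**encoded).logits, dim=-1)[0]
        id2label = self.model.config.id2label
        return [[{"label": id2label[i], "score": float(s)} for i, s in enumerate(scores)]]
```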
Cameron/BERT-mdgender-convai-binary
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
2022-07-18T12:24:37Z
--- tags: - paraphrasing - generated_from_trainer metrics: - rouge model-index: - name: pegasus-xsum-finetuned-paws-parasci results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-xsum-finetuned-paws-parasci This model is a fine-tuned version of [domenicrosati/pegasus-xsum-finetuned-paws](https://huggingface.co/domenicrosati/pegasus-xsum-finetuned-paws) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.2256 - Rouge1: 61.8854 - Rouge2: 43.1061 - Rougel: 57.421 - Rougelsum: 57.4417 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 4000 - mixed_precision_training: Native AMP - label_smoothing_factor: 0.1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:| | No log | 0.05 | 1000 | 3.8024 | 49.471 | 24.8024 | 43.4857 | 43.5552 | | No log | 0.09 | 2000 | 3.6533 | 49.1046 | 24.4038 | 43.0189 | 43.002 | | No log | 0.14 | 3000 | 3.5867 | 49.5026 | 24.748 | 43.3059 | 43.2923 | | No log | 0.19 | 4000 | 3.5613 | 49.4319 | 24.5444 | 43.2225 | 43.1965 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
dccuchile/albert-base-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Chakita/KROBERT
[ "pytorch", "roberta", "fill-mask", "transformers", "masked-lm", "fill-in-the-blanks", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-07-19T03:32:58Z
--- tags: - generated_from_trainer metrics: - sacrebleu model-index: - name: t5-pegasus-finetuned_test results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-pegasus-finetuned_test This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.0045 - Sacrebleu: 0.8737 - Rouge 1: 0.0237 - Rouge 2: 0.0 - Rouge L: 0.0232 - Bleu 1: 0.1444 - Bleu 2: 0.0447 - Bleu 3: 0.0175 - Bleu 4: 0.0083 - Meteor: 0.0609 - Gen Len: 15.098 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Rouge 1 | Rouge 2 | Rouge L | Bleu 1 | Bleu 2 | Bleu 3 | Bleu 4 | Meteor | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|:-------:|:-------:|:------:|:------:|:------:|:------:|:------:|:-------:| | No log | 52.5 | 210 | 5.9818 | 0.9114 | 0.0229 | 0.0 | 0.0225 | 0.1424 | 0.0436 | 0.0183 | 0.0091 | 0.06 | 15.126 | | No log | 70.0 | 280 | 6.0072 | 0.876 | 0.0233 | 0.0 | 0.0228 | 0.1437 | 0.0452 | 0.0177 | 0.0083 | 0.0607 | 15.088 | | No log | 87.5 | 350 | 6.0017 | 0.8695 | 0.0229 | 0.0 | 0.0225 | 0.1445 | 0.0443 | 0.0175 | 0.0082 | 0.0609 | 15.12 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
Chakita/Kalbert
[ "pytorch", "tensorboard", "albert", "fill-mask", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - automatic-speech-recognition - gary109/AI_Light_Dance - generated_from_trainer model-index: - name: ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1-5gram results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1-5gram This model is a fine-tuned version of [gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1-5gram](https://huggingface.co/gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1-5gram) on the GARY109/AI_LIGHT_DANCE - ONSET-SINGING3 dataset. It achieves the following results on the evaluation set: - Loss: 0.4505 - Wer: 0.2119 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 100.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 0.3355 | 1.0 | 144 | 0.4505 | 0.2119 | | 0.3069 | 2.0 | 288 | 0.4509 | 0.2124 | | 0.3049 | 3.0 | 432 | 0.4511 | 0.2119 | | 0.3028 | 4.0 | 576 | 0.4521 | 0.2114 | | 0.3092 | 5.0 | 720 | 0.4532 | 0.2112 | | 0.3043 | 6.0 | 864 | 0.4536 | 0.2117 | | 0.2903 | 7.0 | 1008 | 0.4543 | 0.2114 | | 0.3124 | 8.0 | 1152 | 0.4538 | 0.2118 | | 0.3079 | 9.0 | 1296 | 0.4541 | 0.2121 | | 0.3093 | 10.0 | 1440 | 0.4537 | 0.2117 | | 0.3093 | 11.0 | 1584 | 0.4544 | 0.2111 | | 0.3202 | 12.0 | 1728 | 0.4549 | 0.2110 | | 0.3086 | 13.0 | 1872 | 0.4546 | 0.2104 | | 0.2947 | 14.0 | 2016 | 0.4542 | 0.2119 | | 0.3145 | 15.0 | 2160 | 0.4539 | 0.2115 | | 0.3292 | 16.0 | 2304 | 0.4532 | 0.2115 | | 0.3049 | 17.0 | 2448 | 0.4547 | 0.2117 | | 0.3177 | 18.0 | 2592 | 0.4544 | 0.2111 | | 0.3108 | 19.0 | 2736 | 0.4547 | 0.2114 | | 0.2944 | 20.0 | 2880 | 0.4560 | 0.2105 | | 0.3232 | 21.0 | 3024 | 0.4560 | 0.2113 | | 0.3196 | 22.0 | 3168 | 0.4559 | 0.2107 | | 0.3207 | 23.0 | 3312 | 0.4563 | 0.2106 | | 0.3039 | 24.0 | 3456 | 0.4555 | 0.2110 | | 0.3157 | 25.0 | 3600 | 0.4560 | 0.2117 | | 0.3285 | 26.0 | 3744 | 0.4561 | 0.2102 | | 0.3125 | 27.0 | 3888 | 0.4553 | 0.2107 | | 0.3051 | 28.0 | 4032 | 0.4560 | 0.2103 | | 0.3166 | 29.0 | 4176 | 0.4560 | 0.2103 | | 0.321 | 30.0 | 4320 | 0.4551 | 0.2101 | | 0.3146 | 31.0 | 4464 | 0.4552 | 0.2100 | | 0.323 | 32.0 | 4608 | 0.4551 | 0.2105 | | 0.3223 | 33.0 | 4752 | 0.4554 | 0.2101 | | 0.3105 | 34.0 | 4896 | 0.4549 | 0.2102 | | 0.3134 | 35.0 | 5040 | 0.4552 | 0.2101 | | 0.3054 | 36.0 | 5184 | 0.4550 | 0.2103 | | 0.3162 | 37.0 | 5328 | 0.4554 | 0.2106 | | 0.3094 | 38.0 | 5472 | 0.4551 | 0.2099 | | 0.3174 | 39.0 | 5616 | 0.4553 | 0.2105 | | 0.3218 | 40.0 | 5760 | 0.4553 | 0.2106 | | 0.3134 | 41.0 | 5904 | 0.4552 | 0.2101 | | 0.3019 | 42.0 | 6048 | 0.4552 | 0.2101 | | 0.3169 | 43.0 | 6192 | 0.4552 | 0.2095 | | 0.3209 | 44.0 | 6336 | 0.4550 | 0.2090 | | 0.3035 | 45.0 | 6480 | 0.4550 | 0.2100 | | 0.3181 | 46.0 | 6624 | 0.4550 | 0.2104 | | 0.3133 | 47.0 | 6768 | 0.4546 | 0.2096 | | 0.3173 | 48.0 | 6912 | 0.4556 | 
0.2099 | | 0.3174 | 49.0 | 7056 | 0.4552 | 0.2101 | | 0.313 | 50.0 | 7200 | 0.4553 | 0.2100 | | 0.3139 | 51.0 | 7344 | 0.4555 | 0.2101 | | 0.3054 | 52.0 | 7488 | 0.4555 | 0.2100 | | 0.3212 | 53.0 | 7632 | 0.4554 | 0.2097 | | 0.3252 | 54.0 | 7776 | 0.4553 | 0.2097 | | 0.3063 | 55.0 | 7920 | 0.4554 | 0.2106 | | 0.3206 | 56.0 | 8064 | 0.4551 | 0.2097 | | 0.3176 | 57.0 | 8208 | 0.4552 | 0.2101 | | 0.3179 | 58.0 | 8352 | 0.4554 | 0.2099 | | 0.3064 | 59.0 | 8496 | 0.4559 | 0.2092 | | 0.301 | 60.0 | 8640 | 0.4559 | 0.2103 | | 0.3103 | 61.0 | 8784 | 0.4559 | 0.2102 | | 0.3169 | 62.0 | 8928 | 0.4559 | 0.2103 | | 0.3081 | 63.0 | 9072 | 0.4559 | 0.2101 | | 0.3249 | 64.0 | 9216 | 0.4555 | 0.2106 | | 0.3031 | 65.0 | 9360 | 0.4553 | 0.2105 | | 0.3017 | 66.0 | 9504 | 0.4556 | 0.2105 | | 0.3261 | 67.0 | 9648 | 0.4551 | 0.2100 | | 0.3196 | 68.0 | 9792 | 0.4553 | 0.2096 | | 0.3085 | 69.0 | 9936 | 0.4554 | 0.2095 | | 0.3235 | 70.0 | 10080 | 0.4552 | 0.2096 | | 0.3194 | 71.0 | 10224 | 0.4550 | 0.2102 | | 0.3243 | 72.0 | 10368 | 0.4546 | 0.2098 | | 0.3115 | 73.0 | 10512 | 0.4542 | 0.2101 | | 0.3307 | 74.0 | 10656 | 0.4545 | 0.2100 | | 0.3072 | 75.0 | 10800 | 0.4547 | 0.2100 | | 0.3218 | 76.0 | 10944 | 0.4545 | 0.2102 | | 0.3116 | 77.0 | 11088 | 0.4540 | 0.2103 | | 0.3021 | 78.0 | 11232 | 0.4542 | 0.2101 | | 0.3165 | 79.0 | 11376 | 0.4539 | 0.2109 | | 0.327 | 80.0 | 11520 | 0.4539 | 0.2090 | | 0.3268 | 81.0 | 11664 | 0.4540 | 0.2110 | | 0.304 | 82.0 | 11808 | 0.4537 | 0.2097 | | 0.3256 | 83.0 | 11952 | 0.4537 | 0.2102 | | 0.3208 | 84.0 | 12096 | 0.4544 | 0.2101 | | 0.3199 | 85.0 | 12240 | 0.4541 | 0.2094 | | 0.3104 | 86.0 | 12384 | 0.4543 | 0.2097 | | 0.3218 | 87.0 | 12528 | 0.4542 | 0.2106 | | 0.3301 | 88.0 | 12672 | 0.4538 | 0.2098 | | 0.3055 | 89.0 | 12816 | 0.4540 | 0.2101 | | 0.3154 | 90.0 | 12960 | 0.4533 | 0.2098 | | 0.3169 | 91.0 | 13104 | 0.4543 | 0.2098 | | 0.3122 | 92.0 | 13248 | 0.4541 | 0.2098 | | 0.319 | 93.0 | 13392 | 0.4536 | 0.2094 | | 0.307 | 94.0 | 13536 | 0.4538 | 0.2092 | | 0.3132 | 95.0 | 13680 | 0.4540 | 0.2094 | | 0.3185 | 96.0 | 13824 | 0.4536 | 0.2099 | | 0.2996 | 97.0 | 13968 | 0.4541 | 0.2100 | | 0.3193 | 98.0 | 14112 | 0.4539 | 0.2092 | | 0.3091 | 99.0 | 14256 | 0.4538 | 0.2096 | | 0.315 | 100.0 | 14400 | 0.4544 | 0.2100 | ### Framework versions - Transformers 4.21.0.dev0 - Pytorch 1.9.1+cu102 - Datasets 2.3.3.dev0 - Tokenizers 0.12.1
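A usage sketch for the checkpoint named above via the generic ASR pipeline; the audio file name is a placeholder, and the 5-gram language-model decoding implied by the model name may require extra dependencies (e.g. pyctcdecode/kenlm) that are not shown here:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="gary109/ai-light-dance_singing3_ft_wav2vec2-large-xlsr-53-v1-5gram",
)
print(asr("singing_clip.wav")["text"])  # path to a local audio file (placeholder)
```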
Chakita/KannadaBERT
[ "pytorch", "roberta", "fill-mask", "transformers", "masked-lm", "fill-in-the-blanks", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-07-19T03:40:55Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - consumer-finance-complaints metrics: - accuracy - f1 - recall - precision model-index: - name: distilroberta-base-wandb-week-3-complaints-classifier-512 results: - task: name: Text Classification type: text-classification dataset: name: consumer-finance-complaints type: consumer-finance-complaints args: default metrics: - name: Accuracy type: accuracy value: 0.8038326283064064 - name: F1 type: f1 value: 0.791857014338201 - name: Recall type: recall value: 0.8038326283064064 - name: Precision type: precision value: 0.7922430702228043 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilroberta-base-wandb-week-3-complaints-classifier-512 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the consumer-finance-complaints dataset. It achieves the following results on the evaluation set: - Loss: 0.6004 - Accuracy: 0.8038 - F1: 0.7919 - Recall: 0.8038 - Precision: 0.7922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1.7835312622444155e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 512 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:------:|:---------:| | 0.7559 | 0.61 | 1500 | 0.7307 | 0.7733 | 0.7411 | 0.7733 | 0.7286 | | 0.6361 | 1.22 | 3000 | 0.6559 | 0.7846 | 0.7699 | 0.7846 | 0.7718 | | 0.5774 | 1.83 | 4500 | 0.6004 | 0.8038 | 0.7919 | 0.8038 | 0.7922 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu102 - Datasets 2.3.2 - Tokenizers 0.12.1
Chakita/gpt2_mwp
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- widget: - text: "procedure: single ap view of the chest comparison: none findings: no surgical hardware nor tubes. lungs, pleura: low lung volumes, bilateral airspace opacities. no pneumothorax or pleural effusion. cardiovascular and mediastinum: the cardiomediastinal silhouette seems stable. impression: 1. patchy bilateral airspace opacities, stable, but concerning for multifocal pneumonia. 2. absence of other suspicions, the rest of the lungs seems fine." - text: "procedure: single ap view of the chest comparison: none findings: No surgical hardware nor tubes. lungs, pleura: low lung volumes, bilateral airspace opacities. no pneumothorax or pleural effusion. cardiovascular and mediastinum: the cardiomediastinal silhouette seems stable. impression: 1. patchy bilateral airspace opacities, stable. 2. some areas are suggestive that pneumonia can not be excluded. 3. recommended to follow-up shortly and check if there are additional symptoms" tags: - text-classification - pytorch - transformers - uncased - radiology - biomedical - covid-19 - covid19 language: - en license: mit --- COVID-RadBERT was trained to detect the presence or absence of COVID-19 within radiology reports, along with an "uncertain" diagnosis when further medical tests are required. ## Citation ```bibtex @article{chambon_cook_langlotz_2022, title={Improved fine-tuning of in-domain transformer model for inferring COVID-19 presence in multi-institutional radiology reports}, DOI={10.1007/s10278-022-00714-8}, journal={Journal of Digital Imaging}, author={Chambon, Pierre and Cook, Tessa S. and Langlotz, Curtis P.}, year={2022} } ```
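A usage sketch, assuming the standard `transformers` text-classification pipeline; the repository id is not given in the card, so `MODEL_ID` is a placeholder, and the exact label names are whatever the checkpoint's config defines:

```python
from transformers import pipeline

report = (
    "procedure: single ap view of the chest comparison: none findings: ... "
    "impression: 1. patchy bilateral airspace opacities, stable, but concerning "
    "for multifocal pneumonia."
)
classifier = pipeline("text-classification", model="MODEL_ID")  # placeholder id
print(classifier(report))  # presence / absence / uncertain, per the description above
```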
Chalponkey/DialoGPT-small-Barry
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
Field blue flowers and bright stars ethereal in holy lighting
Cheatham/xlm-roberta-base-finetuned
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids library_name: ml-agents --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Write your model_id: SimingSiming/testpyramidsrnd 3. Select your *.nn / *.onnx file 4. Click on Watch the agent play 👀
Cheatham/xlm-roberta-large-finetuned3
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
22
null
--- license: mit tags: - object-detection - object-tracking - video - video-object-segmentation inference: false --- # unicorn_track_tiny_rt_mask ## Table of Contents - [unicorn_track_tiny_rt_mask](#-model_id--defaultmymodelname-true) - [Table of Contents](#table-of-contents) - [Model Details](#model-details) - [Uses](#uses) - [Direct Use](#direct-use) - [Evaluation Results](#evaluation-results) <model_details> ## Model Details Unicorn accomplishes the great unification of the network architecture and the learning paradigm for four tracking tasks. Unicorn puts forwards new state-of-the-art performance on many challenging tracking benchmarks using the same model parameters. This model has an input size of 640x1024. - License: This model is licensed under the MIT license - Resources for more information: - [Research Paper](https://arxiv.org/abs/2111.12085) - [GitHub Repo](https://github.com/MasterBin-IIAU/Unicorn) </model_details> <uses> ## Uses #### Direct Use This model can be used for: * Single Object Tracking (SOT) * Multiple Object Tracking (MOT) * Video Object Segmentation (VOS) * Multi-Object Tracking and Segmentation (MOTS) <Eval_Results> ## Evaluation Results LaSOT AUC (%): 67.1 BDD100K mMOTA (%): 37.5 DAVIS17 J&F (%): 66.8 BDD100K MOTS mMOTSA (%): 26.2 </Eval_Results> <Cite> ## Citation Information ```bibtex @inproceedings{unicorn, title={Towards Grand Unification of Object Tracking}, author={Yan, Bin and Jiang, Yi and Sun, Peize and Wang, Dong and Yuan, Zehuan and Luo, Ping and Lu, Huchuan}, booktitle={ECCV}, year={2022} } ``` </Cite>
Cheatham/xlm-roberta-large-finetuned4
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="AliMMZ/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
CheonggyeMountain-Sherpa/kogpt-trinity-poem
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3A results: - metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="AliMMZ/q-Taxi-v3A", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Chikita1/www_stash_stock
[ "license:bsd-3-clause-clear" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - pytorch - diffusers - unconditional-image-generation --- # Score-Based Generative Modeling through Stochastic Differential Equations (SDE) **Paper**: [Score-Based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456) **Authors**: Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, Ben Poole **Abstract**: *Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.* ## Inference *SDE* models can use **continuous** noise schedulers such as: - [scheduling_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_sde_ve.py) for inference. See the following code: ```python # !pip install diffusers from diffusers import DiffusionPipeline model_id = "google/ncsnpp-ffhq-1024" # load model and scheduler sde_ve = DiffusionPipeline.from_pretrained(model_id) # run pipeline in inference (sample random noise and denoise) image = sde_ve()["sample"] # save image image[0].save("sde_ve_generated_image.png") ``` Please take a look at [pipeline_score_sde_ve](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py) for more details on how to write your own denoising loop. For more information generally on how to use `diffusers` for inference, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) ## Samples 1. <img src="https://huggingface.co/google/ncsnpp-ffhq-1024/resolve/main/images/generated_image_0.png" alt="drawing" width="512"/> 2. <img src="https://huggingface.co/google/ncsnpp-ffhq-1024/resolve/main/images/generated_image_1.png" alt="drawing" width="512"/> 3. 
<img src="https://huggingface.co/google/ncsnpp-ffhq-1024/resolve/main/images/generated_image_2.png" alt="drawing" width="512"/> 4. <img src="https://huggingface.co/google/ncsnpp-ffhq-1024/resolve/main/images/generated_image_3.png" alt="drawing" width="512"/>
Chinat/test-classifier
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - banking77 metrics: - accuracy model-index: - name: distilbert-base-uncased-banking77-classification results: - task: name: Text Classification type: text-classification dataset: name: banking77 type: banking77 args: default metrics: - name: Accuracy type: accuracy value: 0.924025974025974 - task: type: text-classification name: Text Classification dataset: name: banking77 type: banking77 config: default split: test metrics: - name: Accuracy type: accuracy value: 0.924025974025974 verified: true - name: Precision Macro type: precision value: 0.9278003086307286 verified: true - name: Precision Micro type: precision value: 0.924025974025974 verified: true - name: Precision Weighted type: precision value: 0.9278003086307287 verified: true - name: Recall Macro type: recall value: 0.9240259740259743 verified: true - name: Recall Micro type: recall value: 0.924025974025974 verified: true - name: Recall Weighted type: recall value: 0.924025974025974 verified: true - name: F1 Macro type: f1 value: 0.9243068139192414 verified: true - name: F1 Micro type: f1 value: 0.924025974025974 verified: true - name: F1 Weighted type: f1 value: 0.9243068139192416 verified: true - name: loss type: loss value: 0.31516405940055847 verified: true widget: - text: 'Can I track the card you sent to me? ' example_title: Card Arrival Example - text: Can you explain your exchange rate policy to me? example_title: Exchange Rate Example - text: I can't pay by my credit card example_title: Card Not Working Example --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-banking77-classification This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the banking77 dataset. It achieves the following results on the evaluation set: - Loss: 0.3152 - Accuracy: 0.9240 - F1 Score: 0.9243 ## Model description This is my first fine-tuning experiment using Hugging Face. Using distilBERT as a pretrained model, I trained a classifier for online banking queries. It could be useful for addressing tickets. ## Intended uses & limitations The model can be used on text classification. In particular is fine tuned on banking domain. 
## Training and evaluation data The dataset used is [banking77](https://huggingface.co/datasets/banking77) The 77 labels are: |label|intent| |:---:|:----:| |0|activate_my_card| |1|age_limit| |2|apple_pay_or_google_pay| |3|atm_support| |4|automatic_top_up| |5|balance_not_updated_after_bank_transfer| |6|balance_not_updated_after_cheque_or_cash_deposit| |7|beneficiary_not_allowed| |8|cancel_transfer| |9|card_about_to_expire| |10|card_acceptance| |11|card_arrival| |12|card_delivery_estimate| |13|card_linking| |14|card_not_working| |15|card_payment_fee_charged| |16|card_payment_not_recognised| |17|card_payment_wrong_exchange_rate| |18|card_swallowed| |19|cash_withdrawal_charge| |20|cash_withdrawal_not_recognised| |21|change_pin| |22|compromised_card| |23|contactless_not_working| |24|country_support| |25|declined_card_payment| |26|declined_cash_withdrawal| |27|declined_transfer| |28|direct_debit_payment_not_recognised| |29|disposable_card_limits| |30|edit_personal_details| |31|exchange_charge| |32|exchange_rate| |33|exchange_via_app| |34|extra_charge_on_statement| |35|failed_transfer| |36|fiat_currency_support| |37|get_disposable_virtual_card| |38|get_physical_card| |39|getting_spare_card| |40|getting_virtual_card| |41|lost_or_stolen_card| |42|lost_or_stolen_phone| |43|order_physical_card| |44|passcode_forgotten| |45|pending_card_payment| |46|pending_cash_withdrawal| |47|pending_top_up| |48|pending_transfer| |49|pin_blocked| |50|receiving_money| |51|Refund_not_showing_up| |52|request_refund| |53|reverted_card_payment?| |54|supported_cards_and_currencies| |55|terminate_account| |56|top_up_by_bank_transfer_charge| |57|top_up_by_card_charge| |58|top_up_by_cash_or_cheque| |59|top_up_failed| |60|top_up_limits| |61|top_up_reverted| |62|topping_up_by_card| |63|transaction_charged_twice| |64|transfer_fee_charged| |65|transfer_into_account| |66|transfer_not_received_by_recipient| |67|transfer_timing| |68|unable_to_verify_identity| |69|verify_my_identity| |70|verify_source_of_funds| |71|verify_top_up| |72|virtual_card_not_working| |73|visa_or_mastercard| |74|why_verify_identity| |75|wrong_amount_of_cash_received| |76|wrong_exchange_rate_for_cash_withdrawal| ## Training procedure ``` from transformers import pipeline pipe = pipeline("text-classification", model="nickprock/distilbert-base-uncased-banking77-classification") pipe("I can't pay by my credit card") ``` ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 3.8732 | 1.0 | 157 | 3.1476 | 0.5370 | 0.4881 | | 2.5598 | 2.0 | 314 | 1.9780 | 0.6916 | 0.6585 | | 1.5863 | 3.0 | 471 | 1.2239 | 0.8042 | 0.7864 | | 0.9829 | 4.0 | 628 | 0.8067 | 0.8565 | 0.8487 | | 0.6274 | 5.0 | 785 | 0.5837 | 0.8799 | 0.8752 | | 0.4304 | 6.0 | 942 | 0.4630 | 0.9042 | 0.9040 | | 0.3106 | 7.0 | 1099 | 0.3982 | 0.9088 | 0.9087 | | 0.2238 | 8.0 | 1256 | 0.3587 | 0.9110 | 0.9113 | | 0.1708 | 9.0 | 1413 | 0.3351 | 0.9208 | 0.9208 | | 0.1256 | 10.0 | 1570 | 0.3242 | 0.9179 | 0.9182 | | 0.0981 | 11.0 | 1727 | 0.3136 | 0.9211 | 0.9214 | | 0.0745 | 12.0 | 1884 | 0.3151 | 0.9211 | 0.9213 | | 0.0601 | 13.0 | 2041 | 0.3089 | 0.9218 | 0.9220 | | 0.0482 | 14.0 | 2198 | 0.3158 | 0.9214 | 0.9216 | | 0.0402 | 15.0 
| 2355 | 0.3126 | 0.9224 | 0.9226 | | 0.0344 | 16.0 | 2512 | 0.3143 | 0.9231 | 0.9233 | | 0.0298 | 17.0 | 2669 | 0.3156 | 0.9231 | 0.9233 | | 0.0272 | 18.0 | 2826 | 0.3134 | 0.9244 | 0.9247 | | 0.0237 | 19.0 | 2983 | 0.3156 | 0.9244 | 0.9246 | | 0.0229 | 20.0 | 3140 | 0.3152 | 0.9240 | 0.9243 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
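The hyperparameters listed above map directly onto a standard `Trainer` setup from `transformers`. A minimal sketch of how such a run could be reproduced is shown below; the base checkpoint (`distilbert-base-uncased`) is inferred from the fine-tuned model's name rather than stated outright in this card, and metric computation is omitted for brevity.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

base = "distilbert-base-uncased"  # assumption: the base checkpoint is not named in the card

dataset = load_dataset("banking77")            # splits: train / test
tokenizer = AutoTokenizer.from_pretrained(base)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=77)

args = TrainingArguments(
    output_dir="banking77-classifier",
    learning_rate=2e-5,                  # values taken from the card
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=20,
    seed=42,
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,                 # enables dynamic padding via the default collator
)
trainer.train()
```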
Chungu424/repodata
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-19T10:45:53Z
--- license: apache-2.0 tags: - pytorch - diffusers - unconditional-image-generation --- # Denoising Diffusion Probabilistic Models (DDPM) **Paper**: [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239) **Authors**: Jonathan Ho, Ajay Jain, Pieter Abbeel **Abstract**: *We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.* ## Inference **DDPM** models can use *discrete noise schedulers* such as: - [scheduling_ddpm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddpm.py) - [scheduling_ddim](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_ddim.py) - [scheduling_pndm](https://github.com/huggingface/diffusers/blob/main/src/diffusers/schedulers/scheduling_pndm.py) for inference. Note that while the *ddpm* scheduler yields the highest quality, it also takes the longest. For a good trade-off between quality and inference speed you might want to consider the *ddim* or *pndm* schedulers instead. See the following code: ```python # !pip install diffusers from diffusers import DDPMPipeline, DDIMPipeline, PNDMPipeline model_id = "google/ddpm-ema-cat-256" # load model and scheduler ddpm = DDPMPipeline.from_pretrained(model_id) # you can replace DDPMPipeline with DDIMPipeline or PNDMPipeline for faster inference # run pipeline in inference (sample random noise and denoise) image = ddpm().images[0] # save image image.save("ddpm_generated_image.png") ``` For more in-detail information, please have a look at the [official inference example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/diffusers_intro.ipynb) ## Training If you want to train your own model, please have a look at the [official training example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb) ## Samples 1. ![sample_1](https://huggingface.co/google/ddpm-ema-cat-256/resolve/main/images/generated_image_0.png) 2. ![sample_2](https://huggingface.co/google/ddpm-ema-cat-256/resolve/main/images/generated_image_1.png) 3. ![sample_3](https://huggingface.co/google/ddpm-ema-cat-256/resolve/main/images/generated_image_2.png) 4. ![sample_4](https://huggingface.co/google/ddpm-ema-cat-256/resolve/main/images/generated_image_3.png)
CleveGreen/FieldClassifier
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
null
English-Simile-Generation is a seq2seq paraphrase model which can transform sentence A into sentence B containing a figurative or simile expression. A: Now I feel sad to see your scientific research progress is so slow. B: Now I feel sad to see your scientific research progress is as slow as snail. **To our knowledge, our model has better performance, diversity and practicality compared to the EMNLP 2021 paper (Generating similes effortlessly like a Pro: A Style Transfer Approach for Simile Generation).** ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("figurative-nlp/English-Simile-Generation") model = AutoModelForSeq2SeqLM.from_pretrained("figurative-nlp/English-Simile-Generation") input_ids = tokenizer( "Adrenaline shot through him powerful", return_tensors="pt" ).input_ids outputs = model.generate(input_ids, num_beams=5, max_length=64) result = tokenizer.decode(outputs[0], skip_special_tokens=True) print(result) # result: Adrenaline shot through him like an electric current ```
CohleM/bert-nepali-tokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit pipeline_tag: document-question-answering tags: - donut - image-to-text - vision widget: - text: "What is the invoice number?" src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png" - text: "What is the purchase amount?" src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg" --- # Donut (base-sized model, fine-tuned on DocVQA) Donut model fine-tuned on DocVQA. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook et al. and first released in [this repository](https://github.com/clovaai/donut). Disclaimer: The team releasing Donut did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/donut_architecture.jpg) ## Intended uses & limitations This model is fine-tuned on DocVQA, a document visual question answering dataset. We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2111-15664, author = {Geewook Kim and Teakgyu Hong and Moonbin Yim and Jinyoung Park and Jinyeong Yim and Wonseok Hwang and Sangdoo Yun and Dongyoon Han and Seunghyun Park}, title = {Donut: Document Understanding Transformer without {OCR}}, journal = {CoRR}, volume = {abs/2111.15664}, year = {2021}, url = {https://arxiv.org/abs/2111.15664}, eprinttype = {arXiv}, eprint = {2111.15664}, timestamp = {Thu, 02 Dec 2021 10:50:44 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2111-15664.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
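The card defers to the Transformers documentation for code. For orientation, a minimal document-VQA sketch in the spirit of that documentation is given below; the checkpoint name (`naver-clova-ix/donut-base-finetuned-docvqa`) and the `<s_docvqa>` task-prompt format are assumptions taken from the public Donut release, not from this card.

```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

ckpt = "naver-clova-ix/donut-base-finetuned-docvqa"   # assumed checkpoint name
processor = DonutProcessor.from_pretrained(ckpt)
model = VisionEncoderDecoderModel.from_pretrained(ckpt)

image = Image.open("invoice.png").convert("RGB")      # any document image
pixel_values = processor(image, return_tensors="pt").pixel_values

# DocVQA checkpoints expect the question embedded in a task prompt.
question = "What is the invoice number?"
prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"
decoder_input_ids = processor.tokenizer(
    prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=model.decoder.config.max_position_embeddings,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
)

sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "")
sequence = sequence.replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task start token
print(processor.token2json(sequence))                       # e.g. {'question': ..., 'answer': ...}
```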
CohleM/mbert-nepali-tokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: aalogan/bert-ner-nsm1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # aalogan/bert-ner-nsm1 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0366 - Validation Loss: 0.1607 - Epoch: 5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2694, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.4732 | 0.1911 | 0 | | 0.1551 | 0.1756 | 1 | | 0.0931 | 0.1747 | 2 | | 0.0679 | 0.1732 | 3 | | 0.0477 | 0.1603 | 4 | | 0.0366 | 0.1607 | 5 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.0 - Datasets 2.1.0 - Tokenizers 0.12.1
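The card does not include a usage snippet; a minimal inference sketch with the `token-classification` pipeline follows. The entity label set is not documented in the card, and the TensorFlow framework flag is an assumption based on the Keras training setup reported above.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="aalogan/bert-ner-nsm1",
    framework="tf",                  # assumption: the run above is Keras/TensorFlow
    aggregation_strategy="simple",   # merge word pieces into whole entity spans
)
print(ner("Sundar Pichai works at Google in Mountain View."))
```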
Coldestadam/Breakout_Mentors_SpongeBob_Model
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: mit tags: - donut - image-to-text - vision --- # Donut (base-sized model, fine-tuned on CORD) Donut model fine-tuned on CORD. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook et al. and first released in [this repository](https://github.com/clovaai/donut). Disclaimer: The team releasing Donut did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/donut_architecture.jpg) ## Intended uses & limitations This model is fine-tuned on CORD, a document parsing dataset. We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2111-15664, author = {Geewook Kim and Teakgyu Hong and Moonbin Yim and Jinyoung Park and Jinyeong Yim and Wonseok Hwang and Sangdoo Yun and Dongyoon Han and Seunghyun Park}, title = {Donut: Document Understanding Transformer without {OCR}}, journal = {CoRR}, volume = {abs/2111.15664}, year = {2021}, url = {https://arxiv.org/abs/2111.15664}, eprinttype = {arXiv}, eprint = {2111.15664}, timestamp = {Thu, 02 Dec 2021 10:50:44 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2111-15664.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
ComCom/gpt2-medium
[ "pytorch", "gpt2", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "GPT2Model" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: mit tags: - donut - image-to-text - vision --- # Donut (base-sized model, fine-tuned on CORD, v1) Donut model fine-tuned on CORD. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook et al. and first released in [this repository](https://github.com/clovaai/donut). Disclaimer: The team releasing Donut did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/donut_architecture.jpg) ## Intended uses & limitations This model is fine-tuned on CORD, a document parsing dataset. We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2111-15664, author = {Geewook Kim and Teakgyu Hong and Moonbin Yim and Jinyoung Park and Jinyeong Yim and Wonseok Hwang and Sangdoo Yun and Dongyoon Han and Seunghyun Park}, title = {Donut: Document Understanding Transformer without {OCR}}, journal = {CoRR}, volume = {abs/2111.15664}, year = {2021}, url = {https://arxiv.org/abs/2111.15664}, eprinttype = {arXiv}, eprint = {2111.15664}, timestamp = {Thu, 02 Dec 2021 10:50:44 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2111-15664.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
ComCom/gpt2
[ "pytorch", "gpt2", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "GPT2Model" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
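Since the Exact Match and F1 numbers above were computed with the `squad` metric from `datasets`, a small sketch of that evaluation call is included here; the prediction and reference entries are illustrative placeholders, not outputs of this model.

```python
from datasets import load_metric

squad_metric = load_metric("squad")

# Placeholder prediction/reference pair in the format the metric expects.
predictions = [
    {"id": "56be4db0acb8001400a502ec", "prediction_text": "Denver Broncos"}
]
references = [
    {"id": "56be4db0acb8001400a502ec",
     "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}
]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```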
Cometasonmi451/Mine
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-19T14:09:36Z
--- license: mit tags: - donut - image-to-text - vision --- # Donut (base-sized model, fine-tuned on ZhTrainTicket) Donut model fine-tuned on ZhTrainTicket. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook et al. and first released in [this repository](https://github.com/clovaai/donut). Disclaimer: The team releasing Donut did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder. ![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/donut_architecture.jpg) ## Intended uses & limitations This model is fine-tuned on ZhTrainTicket, a document parsing dataset. We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples. ### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2111-15664, author = {Geewook Kim and Teakgyu Hong and Moonbin Yim and Jinyoung Park and Jinyeong Yim and Wonseok Hwang and Sangdoo Yun and Dongyoon Han and Seunghyun Park}, title = {Donut: Document Understanding Transformer without {OCR}}, journal = {CoRR}, volume = {abs/2111.15664}, year = {2021}, url = {https://arxiv.org/abs/2111.15664}, eprinttype = {arXiv}, eprint = {2111.15664}, timestamp = {Thu, 02 Dec 2021 10:50:44 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2111-15664.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
Connor-tech/bert_cn_finetuning
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: mit tags: - generated_from_keras_callback model-index: - name: jonaskoenig/topic_classification_02 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # jonaskoenig/topic_classification_02 This model is a fine-tuned version of [microsoft/xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0189 - Train Binary Crossentropy: 0.3299 - Epoch: 5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Binary Crossentropy | Epoch | |:----------:|:-------------------------:|:-----:| | 0.0250 | 0.4229 | 0 | | 0.0214 | 0.3684 | 1 | | 0.0204 | 0.3530 | 2 | | 0.0198 | 0.3433 | 3 | | 0.0193 | 0.3359 | 4 | | 0.0189 | 0.3299 | 5 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.9.1 - Datasets 2.3.2 - Tokenizers 0.12.1
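The card gives no usage snippet. A minimal inference sketch is shown below; the use of TensorFlow weights and the sigmoid over the logits (suggested by the binary cross-entropy training loss above, which points to a multi-label setup) are both assumptions.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

name = "jonaskoenig/topic_classification_02"
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("How do I start learning machine learning?", return_tensors="tf")
logits = model(**inputs).logits
probs = tf.sigmoid(logits).numpy()[0]   # assumption: multi-label head, hence sigmoid
print(probs)
```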
ConstellationBoi/Oop
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - harem metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-portuguese-cased_harem-selective-lowC-sm-first-ner results: - task: name: Token Classification type: token-classification dataset: name: harem type: harem args: selective metrics: - name: Precision type: precision value: 0.8 - name: Recall type: recall value: 0.8764044943820225 - name: F1 type: f1 value: 0.8364611260053619 - name: Accuracy type: accuracy value: 0.9764089121887287 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-portuguese-cased_harem-selective-lowC-sm-first-ner This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the harem dataset. It achieves the following results on the evaluation set: - Loss: 0.1160 - Precision: 0.8 - Recall: 0.8764 - F1: 0.8365 - Accuracy: 0.9764 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.055 | 1.0 | 2517 | 0.0934 | 0.81 | 0.9101 | 0.8571 | 0.9699 | | 0.0236 | 2.0 | 5034 | 0.0883 | 0.8307 | 0.8820 | 0.8556 | 0.9751 | | 0.0129 | 3.0 | 7551 | 0.1160 | 0.8 | 0.8764 | 0.8365 | 0.9764 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.2.2 - Tokenizers 0.12.1
Contrastive-Tension/BERT-Base-CT
[ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- license: mit tags: - generated_from_trainer datasets: - harem metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-portuguese-cased_harem-selective-lowC-CRF-first-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-portuguese-cased_harem-selective-lowC-CRF-first-ner This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the harem dataset. It achieves the following results on the evaluation set: - Loss: 0.0687 - Precision: 0.8030 - Recall: 0.8933 - F1: 0.8457 - Accuracy: 0.9748 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0646 | 1.0 | 2517 | 0.0924 | 0.7822 | 0.8876 | 0.8316 | 0.9670 | | 0.0263 | 2.0 | 5034 | 0.0644 | 0.7598 | 0.8708 | 0.8115 | 0.9685 | | 0.0234 | 3.0 | 7551 | 0.0687 | 0.8030 | 0.8933 | 0.8457 | 0.9748 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.2+cu102 - Datasets 2.2.2 - Tokenizers 0.12.1
Contrastive-Tension/RoBerta-Large-CT-STSb
[ "pytorch", "tf", "jax", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: en tags: - timelms - twitter license: mit datasets: - twitter-api --- # Twitter June 2022 (RoBERTa-base, 132M) This is a RoBERTa-base model trained on 132.26M tweets until the end of June 2022. More details and performance scores are available in the [TimeLMs paper](https://arxiv.org/abs/2202.03829). Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the [TimeLMs repository](https://github.com/cardiffnlp/timelms). For other models trained until different periods, check this [table](https://github.com/cardiffnlp/timelms#released-models). ## Preprocess Text Replace usernames and links for placeholders: "@user" and "http". If you're interested in retaining verified users which were also retained during training, you may keep the users listed [here](https://github.com/cardiffnlp/timelms/tree/main/data). ```python def preprocess(text): preprocessed_text = [] for t in text.split(): if len(t) > 1: t = '@user' if t[0] == '@' and t.count('@') == 1 else t t = 'http' if t.startswith('http') else t preprocessed_text.append(t) return ' '.join(preprocessed_text) ``` ## Example Masked Language Model ```python from transformers import pipeline, AutoTokenizer MODEL = "cardiffnlp/twitter-roberta-base-jun2022" fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL) tokenizer = AutoTokenizer.from_pretrained(MODEL) def pprint(candidates, n): for i in range(n): token = tokenizer.decode(candidates[i]['token']) score = candidates[i]['score'] print("%d) %.5f %s" % (i+1, score, token)) texts = [ "So glad I'm <mask> vaccinated.", "I keep forgetting to bring a <mask>.", "Looking forward to watching <mask> Game tonight!", ] for text in texts: t = preprocess(text) print(f"{'-'*30}\n{t}") candidates = fill_mask(t) pprint(candidates, 5) ``` Output: ``` ------------------------------ So glad I'm <mask> vaccinated. 1) 0.36928 not 2) 0.29651 fully 3) 0.15332 getting 4) 0.04144 still 5) 0.01805 all ------------------------------ I keep forgetting to bring a <mask>. 1) 0.06048 book 2) 0.03458 backpack 3) 0.03362 lighter 4) 0.03162 charger 5) 0.02832 pen ------------------------------ Looking forward to watching <mask> Game tonight! 
1) 0.65149 the 2) 0.14239 The 3) 0.02432 this 4) 0.00877 End 5) 0.00866 Big ``` ## Example Tweet Embeddings ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel import numpy as np from scipy.spatial.distance import cosine from collections import Counter def get_embedding(text): # naive approach for demonstration text = preprocess(text) encoded_input = tokenizer(text, return_tensors='pt') features = model(**encoded_input) features = features[0].detach().cpu().numpy() return np.mean(features[0], axis=0) MODEL = "cardiffnlp/twitter-roberta-base-jun2022" tokenizer = AutoTokenizer.from_pretrained(MODEL) model = AutoModel.from_pretrained(MODEL) query = "The book was awesome" tweets = ["I just ordered fried chicken 🐣", "The movie was great", "What time is the next game?", "Just finished reading 'Embeddings in NLP'"] sims = Counter() for tweet in tweets: sim = 1 - cosine(get_embedding(query), get_embedding(tweet)) sims[tweet] = sim print('Most similar to: ', query) print(f"{'-'*30}") for idx, (tweet, sim) in enumerate(sims.most_common()): print("%d) %.5f %s" % (idx+1, sim, tweet)) ``` Output: ``` Most similar to: The book was awesome ------------------------------ 1) 0.98882 The movie was great 2) 0.96087 Just finished reading 'Embeddings in NLP' 3) 0.95450 I just ordered fried chicken 🐣 4) 0.95300 What time is the next game? ``` ## Example Feature Extraction ```python from transformers import AutoTokenizer, AutoModel, TFAutoModel import numpy as np MODEL = "cardiffnlp/twitter-roberta-base-jun2022" tokenizer = AutoTokenizer.from_pretrained(MODEL) text = "Good night 😊" text = preprocess(text) # Pytorch model = AutoModel.from_pretrained(MODEL) encoded_input = tokenizer(text, return_tensors='pt') features = model(**encoded_input) features = features[0].detach().cpu().numpy() features_mean = np.mean(features[0], axis=0) #features_max = np.max(features[0], axis=0) # # Tensorflow # model = TFAutoModel.from_pretrained(MODEL) # encoded_input = tokenizer(text, return_tensors='tf') # features = model(encoded_input) # features = features[0].numpy() # features_mean = np.mean(features[0], axis=0) # #features_max = np.max(features[0], axis=0) ```
Crisblair/Wkwk
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - cuneiform - akkadian - sumerian --- # Sumerian and Akkadian Cuneiform Language Translator This is a translation network that understands Sumerian and Akkadian languages written in cuneiform. It was trained on cuneiform transcribed in the CDLI ATF format. For example: ```text translate Akkadian to English: 1(disz){d}szul3-ma-nu-_sag man gal?_-u2 _man_ dan-nu _man kisz_ ``` The network was trained to translate from the ancient languages: * Akkadian * Sumerian written in transcribed cuneiform to English. The network requires a prompt telling it the direction of translation. For example: * `translate Akkadian to English: ` * `translate English to Sumerian: ` ## Limitations It was *not* trained to translate between the ancient languages but it is capable of it (`translate Sumerian to Akkadian: `). Buyer beware. Its vocabulary does not include ā, ḫ, ī, ř, š, ṣ, or ū so those letters are replaced with the unadorned letter (or "sh" in the case of š and ṣ).
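No loading code is given in the card. Assuming the network is a standard seq2seq checkpoint that can be loaded with `AutoModelForSeq2SeqLM` (the prompt format suggests a T5-style model, but the card does not say), usage could look like the sketch below; the checkpoint path is a placeholder, not the real model id.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "path/to/cuneiform-translator"   # placeholder: substitute the real checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

prompt = ("translate Akkadian to English: "
          "1(disz){d}szul3-ma-nu-_sag man gal?_-u2 _man_ dan-nu _man kisz_")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```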
Culmenus/opus-mt-de-is-finetuned-de-to-is_35g65cc
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - metrics: - type: mean_reward value: 6262.56 +/- 67.29 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: mujoco_halfcheetah type: mujoco_halfcheetah --- An **APPO** model trained on the **mujoco_halfcheetah** environment. This model was trained using Sample Factory 2.0: https://github.com/alex-petrenko/sample-factory
CuongLD/wav2vec2-large-xlsr-vietnamese
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "vi", "dataset:common_voice, infore_25h", "arxiv:2006.11477", "arxiv:2006.13979", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: Check_Aligned_Teeth results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9473684430122375 --- # Check_Aligned_Teeth Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Aligned Teeth ![Aligned Teeth](images/Aligned_Teeth.jpg) #### Crooked Teeth ![Crooked Teeth](images/Crooked_Teeth.jpg)
CurtisASmith/GPT-JRT
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: eng datasets: - banking77 --- # GPT2 Fine-Tuned Banking 77 This is a fine-tuned version of the GPT2 model. It's best suited for text-generation. ## Model Description Kwaku/gpt2-finetuned-banking77 was fine-tuned on the [banking77](https://huggingface.co/datasets/banking77) dataset, which is "composed of online banking queries annotated with their corresponding intents." ## Intended Uses and Limitations Given the magnitude of the [Microsoft DialoGPT-large](https://huggingface.co/microsoft/DialoGPT-large) model, the author resorted to fine-tuning the gpt2 model for the creation of a chatbot. The intent was for the chatbot to emulate a banking customer service agent, hence the use of the banking77 dataset. However, when the fine-tuned model was deployed in the chatbot, the results were undesirable. Its responses were inappropriate and unnecessarily long, and the last word of a response was often repeated many times, a major glitch. The model performs better in text-generation but is prone to generating banking-related text because of the corpus it was trained on. ### How to use You can use this model directly with a pipeline for text generation: ```python >>> from transformers import pipeline >>> model_name = "Kwaku/gpt2-finetuned-banking77" >>> generator = pipeline("text-generation", model=model_name) >>> result = generator("My money is", max_length=15, num_return_sequences=2) >>> print(result) [{'generated_text': 'My money is stuck in ATM pending. Please cancel this transaction and refund it'}, {'generated_text': 'My money is missing. How do I get a second card, and how'}] ``` ### Limitations and bias For users who want a diverse text-generator, this model's tendency to generate mostly bank-related text will be a drawback. It also inherits [the biases of its parent model, the GPT2](https://huggingface.co/gpt2#limitations-and-bias).
CurtisBowser/DialoGPT-medium-sora
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: distilbert_oscarth_0040 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_oscarth_0040 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.2890 - Validation Loss: 1.2296 - Epoch: 39 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.1327 | 2.9983 | 0 | | 2.7813 | 2.4562 | 1 | | 2.4194 | 2.2066 | 2 | | 2.2231 | 2.0562 | 3 | | 2.0894 | 1.9450 | 4 | | 1.9905 | 1.8621 | 5 | | 1.9148 | 1.7941 | 6 | | 1.8508 | 1.7363 | 7 | | 1.7976 | 1.6909 | 8 | | 1.7509 | 1.6488 | 9 | | 1.7126 | 1.6124 | 10 | | 1.6764 | 1.5835 | 11 | | 1.6450 | 1.5521 | 12 | | 1.6175 | 1.5282 | 13 | | 1.5919 | 1.5045 | 14 | | 1.5679 | 1.4833 | 15 | | 1.5476 | 1.4627 | 16 | | 1.5271 | 1.4498 | 17 | | 1.5098 | 1.4270 | 18 | | 1.4909 | 1.4161 | 19 | | 1.4760 | 1.3995 | 20 | | 1.4609 | 1.3864 | 21 | | 1.4475 | 1.3717 | 22 | | 1.4333 | 1.3590 | 23 | | 1.4203 | 1.3478 | 24 | | 1.4093 | 1.3403 | 25 | | 1.3980 | 1.3296 | 26 | | 1.3875 | 1.3176 | 27 | | 1.3773 | 1.3094 | 28 | | 1.3674 | 1.3011 | 29 | | 1.3579 | 1.2920 | 30 | | 1.3497 | 1.2826 | 31 | | 1.3400 | 1.2764 | 32 | | 1.3326 | 1.2694 | 33 | | 1.3236 | 1.2635 | 34 | | 1.3169 | 1.2536 | 35 | | 1.3096 | 1.2477 | 36 | | 1.3024 | 1.2408 | 37 | | 1.2957 | 1.2364 | 38 | | 1.2890 | 1.2296 | 39 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
CyberMuffin/DialoGPT-small-ChandlerBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - en thumbnail: "https://pbs.twimg.com/media/FThx_rEWAAEoujW?format=jpg&name=medium" tags: - t5 - contrastive learning - ranking - decoding - metric learning - pytorch - text generation - retrieval license: "apache-2.0" datasets: - Wikipedia - PG19 - C4 - relic - ChapterBreak - HellaSwag - ROCStories metrics: - MAUVE - human --- ## Main repository https://github.com/martiansideofthemoon/rankgen ## What is RankGen? RankGen is a suite of encoder models (100M-1.2B parameters) which map prefixes and generations from any pretrained English language model to a shared vector space. RankGen can be used to rerank multiple full-length samples from an LM, and it can also be incorporated as a scoring function into beam search to significantly improve generation quality (0.85 vs 0.77 MAUVE, 75% preference according to humans annotators who are English writers). RankGen can also be used like a dense retriever, and achieves state-of-the-art performance on [literary retrieval](https://relic.cs.umass.edu/leaderboard.html). ## Setup **Requirements** (`pip` will install these dependencies for you) Python 3.7+, `torch` (CUDA recommended), `transformers` **Installation** ``` python3.7 -m virtualenv rankgen-venv source rankgen-venv/bin/activate pip install rankgen ``` Get the data [here](https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4?usp=sharing) and place folder in root directory. Alternatively, use `gdown` as shown below, ``` gdown --folder https://drive.google.com/drive/folders/1DRG2ess7fK3apfB-6KoHb_azMuHbsIv4 ``` Run the test script to make sure the RankGen checkpoint has loaded correctly, ``` python -m rankgen.test_rankgen_encoder --model_path kalpeshk2011/rankgen-t5-base-all ### Expected output 0.0009239262409127233 0.0011521980725477804 ``` ## Using RankGen Loading RankGen is simple using the HuggingFace APIs (see Method-2 below), but we suggest using [`RankGenEncoder`](https://github.com/martiansideofthemoon/rankgen/blob/master/rankgen/rankgen_encoder.py), which is a small wrapper around the HuggingFace APIs for correctly preprocessing data and doing tokenization automatically. You can either download [our repository](https://github.com/martiansideofthemoon/rankgen) and install the API, or copy the implementation from [below](#rankgenencoder-implementation). #### [SUGGESTED] Method-1: Loading the model with RankGenEncoder ``` from rankgen import RankGenEncoder, RankGenGenerator rankgen_encoder = RankGenEncoder("kalpeshk2011/rankgen-t5-base-all") # Encoding vectors prefix_vectors = rankgen_encoder.encode(["This is a prefix sentence."], vectors_type="prefix") suffix_vectors = rankgen_encoder.encode(["This is a suffix sentence."], vectors_type="suffix") # Generating text # use a HuggingFace compatible language model generator = RankGenGenerator(rankgen_encoder=rankgen_encoder, language_model="gpt2-medium") inputs = ["Whatever might be the nature of the tragedy it would be over with long before this, and those moving black spots away yonder to the west, that he had discerned from the bluff, were undoubtedly the departing raiders. There was nothing left for Keith to do except determine the fate of the unfortunates, and give their bodies decent burial. 
That any had escaped, or yet lived, was altogether unlikely, unless, perchance, women had been in the party, in which case they would have been borne away prisoners."] # Baseline nucleus sampling print(generator.generate_single(inputs, top_p=0.9)[0][0]) # Over-generate and re-rank print(generator.overgenerate_rerank(inputs, top_p=0.9, num_samples=10)[0][0]) # Beam search print(generator.beam_search(inputs, top_p=0.9, num_samples=10, beam_size=2)[0][0]) ``` #### Method-2: Loading the model with HuggingFace APIs ``` from transformers import T5Tokenizer, AutoModel tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-base") model = AutoModel.from_pretrained("kalpeshk2011/rankgen-t5-base-all", trust_remote_code=True) ``` ### RankGenEncoder Implementation ``` import tqdm from transformers import T5Tokenizer, T5EncoderModel, AutoModel class RankGenEncoder(): def __init__(self, model_path, max_batch_size=32, model_size=None, cache_dir=None): assert model_path in ["kalpeshk2011/rankgen-t5-xl-all", "kalpeshk2011/rankgen-t5-xl-pg19", "kalpeshk2011/rankgen-t5-base-all", "kalpeshk2011/rankgen-t5-large-all"] self.max_batch_size = max_batch_size self.device = 'cuda' if torch.cuda.is_available() else 'cpu' if model_size is None: if "t5-large" in model_path or "t5_large" in model_path: self.model_size = "large" elif "t5-xl" in model_path or "t5_xl" in model_path: self.model_size = "xl" else: self.model_size = "base" else: self.model_size = model_size self.tokenizer = T5Tokenizer.from_pretrained(f"google/t5-v1_1-{self.model_size}", cache_dir=cache_dir) self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True) self.model.to(self.device) self.model.eval() def encode(self, inputs, vectors_type="prefix", verbose=False, return_input_ids=False): tokenizer = self.tokenizer max_batch_size = self.max_batch_size if isinstance(inputs, str): inputs = [inputs] if vectors_type == 'prefix': inputs = ['pre ' + input for input in inputs] max_length = 512 else: inputs = ['suffi ' + input for input in inputs] max_length = 128 all_embeddings = [] all_input_ids = [] for i in tqdm.tqdm(range(0, len(inputs), max_batch_size), total=(len(inputs) // max_batch_size) + 1, disable=not verbose, desc=f"Encoding {vectors_type} inputs:"): tokenized_inputs = tokenizer(inputs[i:i + max_batch_size], return_tensors="pt", padding=True) for k, v in tokenized_inputs.items(): tokenized_inputs[k] = v[:, :max_length] tokenized_inputs = tokenized_inputs.to(self.device) with torch.inference_mode(): batch_embeddings = self.model(**tokenized_inputs) all_embeddings.append(batch_embeddings) if return_input_ids: all_input_ids.extend(tokenized_inputs.input_ids.cpu().tolist()) return { "embeddings": torch.cat(all_embeddings, dim=0), "input_ids": all_input_ids } ```
Czapla/Rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: run1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # run1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6666 - Wer: 0.6375 - Cer: 0.3170 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 2000 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:| | 1.0564 | 2.36 | 2000 | 2.3456 | 0.9628 | 0.5549 | | 0.5071 | 4.73 | 4000 | 2.0652 | 0.9071 | 0.5115 | | 0.3952 | 7.09 | 6000 | 2.3649 | 0.9108 | 0.4628 | | 0.3367 | 9.46 | 8000 | 1.7615 | 0.8253 | 0.4348 | | 0.2765 | 11.82 | 10000 | 1.6151 | 0.7937 | 0.4087 | | 0.2493 | 14.18 | 12000 | 1.4976 | 0.7881 | 0.3905 | | 0.2318 | 16.55 | 14000 | 1.6731 | 0.8160 | 0.3925 | | 0.2074 | 18.91 | 16000 | 1.5822 | 0.7658 | 0.3913 | | 0.1825 | 21.28 | 18000 | 1.5442 | 0.7361 | 0.3704 | | 0.1824 | 23.64 | 20000 | 1.5988 | 0.7621 | 0.3711 | | 0.1699 | 26.0 | 22000 | 1.4261 | 0.7119 | 0.3490 | | 0.158 | 28.37 | 24000 | 1.7482 | 0.7658 | 0.3648 | | 0.1385 | 30.73 | 26000 | 1.4103 | 0.6784 | 0.3348 | | 0.1199 | 33.1 | 28000 | 1.5214 | 0.6636 | 0.3273 | | 0.116 | 35.46 | 30000 | 1.4288 | 0.7212 | 0.3486 | | 0.1071 | 37.83 | 32000 | 1.5344 | 0.7138 | 0.3411 | | 0.1007 | 40.19 | 34000 | 1.4501 | 0.6691 | 0.3237 | | 0.0943 | 42.55 | 36000 | 1.5367 | 0.6859 | 0.3265 | | 0.0844 | 44.92 | 38000 | 1.5321 | 0.6599 | 0.3273 | | 0.0762 | 47.28 | 40000 | 1.6721 | 0.6264 | 0.3142 | | 0.0778 | 49.65 | 42000 | 1.6666 | 0.6375 | 0.3170 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.12.0+cu113 - Datasets 2.0.0 - Tokenizers 0.12.1
D-Keqi/espnet_asr_train_asr_streaming_transformer_raw_en_bpe500_sp_valid.acc.ave
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- language: - en tags: - pytorch - text-generation - causal-lm - rwkv license: apache-2.0 datasets: - The Pile --- # RWKV-3 430M ## Model Description RWKV-3 430M is a L24-D1024 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. At this moment you have to use my Github code (https://github.com/BlinkDL/RWKV-v2-RNN-Pile) to run it. ctx_len = 768 n_layer = 24 n_embd = 1024 Final checkpoint: RWKV-3-Pile-430M-20220817-10602.pth : Trained on the Pile for 326B tokens. * Pile loss 2.288 * LAMBADA ppl 12.83, acc 45.45% * PIQA acc 68.50% * SC2016 acc 63.60% * Hellaswag acc_norm 40.68% Preview checkpoint: RWKV-3-Pile-20220721-3029.pth : Trained on the Pile for 93B tokens. * Pile loss 2.341 * LAMBADA ppl 14.18, acc 44.25% * PIQA acc 67.95% * SC2016 acc 63.39% * Hellaswag acc_norm 39.06%
D3xter1922/electra-base-discriminator-finetuned-mnli
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - librispeech_asr model-index: - name: wav2vec2-large-xls-r-300m-en-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-en-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the librispeech_asr dataset. It achieves the following results on the evaluation set: - Loss: 2.7541 - Wer: 1.0 - Cer: 0.9877 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | Cer | |:-------------:|:-----:|:----:|:---------------:|:---:|:------:| | No log | 1.94 | 33 | 2.9905 | 1.0 | 1.0 | | No log | 3.88 | 66 | 2.9023 | 1.0 | 1.0 | | No log | 5.82 | 99 | 2.8788 | 1.0 | 1.0 | | 3.7488 | 7.76 | 132 | 2.8624 | 1.0 | 1.0 | | 3.7488 | 9.71 | 165 | 2.7541 | 1.0 | 0.9877 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cpu - Datasets 1.18.3 - Tokenizers 0.12.1
DHBaek/xlm-roberta-large-korquad-mask
[ "pytorch", "xlm-roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "XLMRobertaForQuestionAnswering" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: el license: cc-by-sa-4.0 datasets: - cc100 - oscar - wikipedia widget: - text: "Αυτό είναι ένα" - text: "Ανοιξα την" - text: "Ευχαριστώ για το" - text: "Έχει πολύ καιρό που δεν έχουμε" --- ## Greek GPT2 small model Version 2 (Uncased) ### Prerequisites transformers==4.19.2 ### Model architecture This model has approximately half as many parameters as the GPT-2 base model. ### Tokenizer Uses a BPE tokenizer with a vocabulary size of 50,000. ### Training Data * Subset of [CC-100/el](https://data.statmt.org/cc-100/) : Monolingual Datasets from Web Crawl Data * Subset of [oscar](https://huggingface.co/datasets/oscar) * [wiki40b/el](https://www.tensorflow.org/datasets/catalog/wiki40b#wiki40bel) (Greek Wikipedia) ### Usage ```python from transformers import pipeline generator = pipeline('text-generation', model='ClassCat/gpt2-small-greek-v2') generator("Αυτό είναι ένα", max_length=50, num_return_sequences=5) ```
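As a small follow-up to the usage snippet in the card, the tokenizer claim (BPE, 50,000-token vocabulary) can be checked directly; this sketch only assumes the model id already given in the card.

```python
# Hedged follow-up: inspect the BPE tokenizer described in the card above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ClassCat/gpt2-small-greek-v2")
print(tokenizer.vocab_size)                             # expected: 50000 per the card
print(tokenizer.tokenize("Αυτό είναι ένα παράδειγμα"))  # BPE pieces for a Greek phrase
```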
DSI/human-directed-sentiment
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: distilbert_oscarth_0060 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert_oscarth_0060 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.1876 - Validation Loss: 1.1378 - Epoch: 59 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.1327 | 2.9983 | 0 | | 2.7813 | 2.4562 | 1 | | 2.4194 | 2.2066 | 2 | | 2.2231 | 2.0562 | 3 | | 2.0894 | 1.9450 | 4 | | 1.9905 | 1.8621 | 5 | | 1.9148 | 1.7941 | 6 | | 1.8508 | 1.7363 | 7 | | 1.7976 | 1.6909 | 8 | | 1.7509 | 1.6488 | 9 | | 1.7126 | 1.6124 | 10 | | 1.6764 | 1.5835 | 11 | | 1.6450 | 1.5521 | 12 | | 1.6175 | 1.5282 | 13 | | 1.5919 | 1.5045 | 14 | | 1.5679 | 1.4833 | 15 | | 1.5476 | 1.4627 | 16 | | 1.5271 | 1.4498 | 17 | | 1.5098 | 1.4270 | 18 | | 1.4909 | 1.4161 | 19 | | 1.4760 | 1.3995 | 20 | | 1.4609 | 1.3864 | 21 | | 1.4475 | 1.3717 | 22 | | 1.4333 | 1.3590 | 23 | | 1.4203 | 1.3478 | 24 | | 1.4093 | 1.3403 | 25 | | 1.3980 | 1.3296 | 26 | | 1.3875 | 1.3176 | 27 | | 1.3773 | 1.3094 | 28 | | 1.3674 | 1.3011 | 29 | | 1.3579 | 1.2920 | 30 | | 1.3497 | 1.2826 | 31 | | 1.3400 | 1.2764 | 32 | | 1.3326 | 1.2694 | 33 | | 1.3236 | 1.2635 | 34 | | 1.3169 | 1.2536 | 35 | | 1.3096 | 1.2477 | 36 | | 1.3024 | 1.2408 | 37 | | 1.2957 | 1.2364 | 38 | | 1.2890 | 1.2296 | 39 | | 1.2818 | 1.2236 | 40 | | 1.2751 | 1.2168 | 41 | | 1.2691 | 1.2126 | 42 | | 1.2644 | 1.2044 | 43 | | 1.2583 | 1.2008 | 44 | | 1.2529 | 1.1962 | 45 | | 1.2473 | 1.1919 | 46 | | 1.2416 | 1.1857 | 47 | | 1.2365 | 1.1812 | 48 | | 1.2318 | 1.1765 | 49 | | 1.2273 | 1.1738 | 50 | | 1.2224 | 1.1672 | 51 | | 1.2177 | 1.1673 | 52 | | 1.2132 | 1.1595 | 53 | | 1.2084 | 1.1564 | 54 | | 1.2033 | 1.1518 | 55 | | 1.1993 | 1.1481 | 56 | | 1.1966 | 1.1445 | 57 | | 1.1924 | 1.1412 | 58 | | 1.1876 | 1.1378 | 59 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
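Since the card above describes a masked-language-model fine-tune reported only by its loss, the loss can be read as a perplexity via exp(loss); a short sketch using the final validation loss from the table follows.

```python
# Hedged sketch: perplexity implied by the final validation loss above.
import math

val_loss = 1.1378          # final validation loss from the table
print(math.exp(val_loss))  # ≈ 3.12
```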
DTAI-KULeuven/robbertje-1-gb-non-shuffled
[ "pytorch", "roberta", "fill-mask", "nl", "dataset:oscar", "dataset:dbrd", "dataset:lassy-ud", "dataset:europarl-mono", "dataset:conll2002", "arxiv:2101.05716", "transformers", "Dutch", "Flemish", "RoBERTa", "RobBERT", "RobBERTje", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
53
null
--- license: mit tags: - generated_from_trainer datasets: - bc4chemd_ner metrics: - precision - recall - f1 - accuracy model-index: - name: bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: bc4chemd_ner type: bc4chemd_ner args: bc4chemd metrics: - name: Precision type: precision value: 0.8944236722550557 - name: Recall type: recall value: 0.8777321865383098 - name: F1 type: f1 value: 0.8859993229654115 - name: Accuracy type: accuracy value: 0.9908228496683563 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on the bc4chemd_ner dataset. It achieves the following results on the evaluation set: - Loss: 0.0641 - Precision: 0.8944 - Recall: 0.8777 - F1: 0.8860 - Accuracy: 0.9908 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.006 | 1.0 | 1918 | 0.0310 | 0.8697 | 0.8510 | 0.8602 | 0.9894 | | 0.0097 | 2.0 | 3836 | 0.0345 | 0.8855 | 0.8637 | 0.8745 | 0.9898 | | 0.0058 | 3.0 | 5754 | 0.0359 | 0.8733 | 0.8836 | 0.8784 | 0.9902 | | 0.0014 | 4.0 | 7672 | 0.0440 | 0.8723 | 0.8842 | 0.8782 | 0.9903 | | 0.0005 | 5.0 | 9590 | 0.0539 | 0.8862 | 0.8673 | 0.8766 | 0.9903 | | 0.0001 | 6.0 | 11508 | 0.0558 | 0.8939 | 0.8628 | 0.8781 | 0.9904 | | 0.0001 | 7.0 | 13426 | 0.0558 | 0.8846 | 0.8729 | 0.8787 | 0.9903 | | 0.0012 | 8.0 | 15344 | 0.0635 | 0.8935 | 0.8696 | 0.8814 | 0.9905 | | 0.0 | 9.0 | 17262 | 0.0624 | 0.8897 | 0.8831 | 0.8864 | 0.9908 | | 0.0002 | 10.0 | 19180 | 0.0641 | 0.8944 | 0.8777 | 0.8860 | 0.9908 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
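The card above is a chemical-mention NER fine-tune; a hedged usage sketch with the transformers token-classification pipeline follows. The checkpoint id is a placeholder, since this row does not state the actual repo id.

```python
# Hedged sketch: chemical-entity tagging with a checkpoint like the one described above.
# The model id below is a placeholder, not a confirmed repo.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-username/bc4chemd_ner-Bio_ClinicalBERT-finetuned-ner",  # placeholder
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Aspirin and ibuprofen are widely used anti-inflammatory drugs."))
```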
alexandrainst/da-emotion-classification-base
[ "pytorch", "tf", "bert", "text-classification", "da", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
837
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 182.38 +/- 36.42 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
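The usage section of the card above is still the template's TODO stub; a hedged completion in the same spirit is sketched below. The repo id and zip filename are placeholders, since the card does not state them.

```python
# Hedged completion of the TODO stub above; repo_id and filename are placeholders.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub(
    repo_id="your-username/ppo-LunarLander-v2",  # placeholder
    filename="ppo-LunarLander-v2.zip",           # placeholder
)
model = PPO.load(checkpoint)
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```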
alexandrainst/da-subjectivivity-classification-base
[ "pytorch", "tf", "safetensors", "bert", "text-classification", "da", "dataset:DDSC/twitter-sent", "dataset:DDSC/europarl", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
846
2022-07-20T07:38:03Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: korean-aihub-learning-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # korean-aihub-learning-2 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9945 - Wer: 0.9533 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 0.99 | 35 | 46.3840 | 1.0 | | No log | 1.99 | 70 | 26.0949 | 1.0 | | 37.1581 | 2.99 | 105 | 19.0168 | 1.0 | | 37.1581 | 3.99 | 140 | 13.3294 | 1.0 | | 37.1581 | 4.99 | 175 | 7.9410 | 1.0 | | 12.5054 | 5.99 | 210 | 5.0323 | 1.0 | | 12.5054 | 6.99 | 245 | 4.6242 | 1.0 | | 12.5054 | 7.99 | 280 | 4.6206 | 1.0 | | 4.8394 | 8.99 | 315 | 4.5820 | 1.0 | | 4.8394 | 9.99 | 350 | 4.5629 | 1.0 | | 4.8394 | 10.99 | 385 | 4.5385 | 1.0 | | 4.6489 | 11.99 | 420 | 4.5627 | 1.0 | | 4.6489 | 12.99 | 455 | 4.5276 | 1.0 | | 4.6489 | 13.99 | 490 | 4.5292 | 1.0 | | 4.5654 | 14.99 | 525 | 4.5179 | 1.0 | | 4.5654 | 15.99 | 560 | 4.4928 | 1.0 | | 4.5654 | 16.99 | 595 | 4.4791 | 1.0 | | 4.521 | 17.99 | 630 | 4.4649 | 1.0 | | 4.521 | 18.99 | 665 | 4.4588 | 1.0 | | 4.3529 | 19.99 | 700 | 4.3632 | 1.0 | | 4.3529 | 20.99 | 735 | 4.2990 | 1.0 | | 4.3529 | 21.99 | 770 | 4.2326 | 0.9988 | | 4.1301 | 22.99 | 805 | 4.0843 | 1.0 | | 4.1301 | 23.99 | 840 | 3.9784 | 0.9975 | | 4.1301 | 24.99 | 875 | 3.7876 | 1.0 | | 3.7047 | 25.99 | 910 | 3.6109 | 0.9988 | | 3.7047 | 26.99 | 945 | 3.4049 | 0.9828 | | 3.7047 | 27.99 | 980 | 3.1913 | 0.9606 | | 3.006 | 28.99 | 1015 | 3.0567 | 0.9508 | | 3.006 | 29.99 | 1050 | 2.9945 | 0.9533 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
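The hyperparameter list in the card above maps almost one-to-one onto transformers' `TrainingArguments`; a hedged sketch of that mapping is shown below. Only values quoted in the card are filled in, and `output_dir` is a placeholder.

```python
# Hedged sketch: the hyperparameters listed above expressed as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="korean-aihub-learning-2",  # placeholder output directory
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,         # effective train batch size: 4 * 2 = 8
    warmup_steps=500,
    num_train_epochs=30,
    seed=42,
    fp16=True,                             # "mixed_precision_training: Native AMP"
)
```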
Danbi/distilroberta-base-finetuned-wikitext2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-07-20T08:30:47Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: distilgpt_oscarth_0060 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt_oscarth_0060 This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8883 - Validation Loss: 2.7784 - Epoch: 59 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 5.6021 | 4.5759 | 0 | | 4.4536 | 4.1235 | 1 | | 4.1386 | 3.9013 | 2 | | 3.9546 | 3.7563 | 3 | | 3.8255 | 3.6477 | 4 | | 3.7271 | 3.5617 | 5 | | 3.6488 | 3.4936 | 6 | | 3.5844 | 3.4379 | 7 | | 3.5301 | 3.3891 | 8 | | 3.4833 | 3.3448 | 9 | | 3.4427 | 3.3098 | 10 | | 3.4068 | 3.2750 | 11 | | 3.3749 | 3.2425 | 12 | | 3.3462 | 3.2211 | 13 | | 3.3202 | 3.1941 | 14 | | 3.2964 | 3.1720 | 15 | | 3.2749 | 3.1512 | 16 | | 3.2548 | 3.1322 | 17 | | 3.2363 | 3.1141 | 18 | | 3.2188 | 3.0982 | 19 | | 3.2025 | 3.0818 | 20 | | 3.1871 | 3.0678 | 21 | | 3.1724 | 3.0533 | 22 | | 3.1583 | 3.0376 | 23 | | 3.1446 | 3.0256 | 24 | | 3.1318 | 3.0122 | 25 | | 3.1195 | 3.0016 | 26 | | 3.1079 | 2.9901 | 27 | | 3.0968 | 2.9826 | 28 | | 3.0863 | 2.9711 | 29 | | 3.0761 | 2.9593 | 30 | | 3.0665 | 2.9514 | 31 | | 3.0572 | 2.9432 | 32 | | 3.0483 | 2.9347 | 33 | | 3.0396 | 2.9250 | 34 | | 3.0313 | 2.9160 | 35 | | 3.0232 | 2.9095 | 36 | | 3.0153 | 2.9028 | 37 | | 3.0078 | 2.8949 | 38 | | 3.0004 | 2.8864 | 39 | | 2.9932 | 2.8799 | 40 | | 2.9864 | 2.8722 | 41 | | 2.9797 | 2.8657 | 42 | | 2.9731 | 2.8605 | 43 | | 2.9668 | 2.8544 | 44 | | 2.9606 | 2.8481 | 45 | | 2.9545 | 2.8421 | 46 | | 2.9487 | 2.8368 | 47 | | 2.9429 | 2.8288 | 48 | | 2.9374 | 2.8257 | 49 | | 2.9319 | 2.8204 | 50 | | 2.9266 | 2.8152 | 51 | | 2.9215 | 2.8093 | 52 | | 2.9164 | 2.8054 | 53 | | 2.9114 | 2.7998 | 54 | | 2.9066 | 2.7980 | 55 | | 2.9019 | 2.7913 | 56 | | 2.8972 | 2.7865 | 57 | | 2.8927 | 2.7816 | 58 | | 2.8883 | 2.7784 | 59 | ### Framework versions - Transformers 4.20.1 - TensorFlow 2.8.2 - Datasets 2.3.2 - Tokenizers 0.12.1
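The card above is a Keras/TensorFlow fine-tune of distilgpt2, so a hedged generation sketch with the TF classes follows; the checkpoint id is a placeholder, since this row does not state the actual repo.

```python
# Hedged sketch: sampling from a TensorFlow causal-LM checkpoint like the one above.
# The model id is a placeholder, not a confirmed repo.
from transformers import AutoTokenizer, TFAutoModelForCausalLM

model_id = "your-username/distilgpt_oscarth_0060"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="tf")
outputs = model.generate(inputs["input_ids"], max_length=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```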
Dandara/bertimbau-socioambiental
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- tags: autotrain language: en widget: - text: "I love AutoTrain 🤗" datasets: - ar2rpapian/autotrain-data-Flexport_Classification_Desc co2_eq_emissions: 206.60369255723003 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1155542601 - CO2 Emissions (in grams): 206.60369255723003 ## Validation Metrics - Loss: 0.22105568647384644 - Accuracy: 0.9578838092484789 - Macro F1: 0.9360695960738429 - Micro F1: 0.9578838092484788 - Weighted F1: 0.957863360811612 - Macro Precision: 0.9415730549729362 - Micro Precision: 0.9578838092484789 - Weighted Precision: 0.9586754512711492 - Macro Recall: 0.9329742157218464 - Micro Recall: 0.9578838092484789 - Weighted Recall: 0.9578838092484789 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("ar2rpapian/autotrain-Flexport_Classification_Desc-1155542601", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
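The Python snippet in the card above stops at the raw model outputs; a hedged continuation that turns those logits into a predicted label (reusing the `model` and `outputs` variables from that snippet) is shown below.

```python
# Hedged continuation of the card's Python example: map logits to a label.
import torch

probs = torch.softmax(outputs.logits, dim=-1)  # `outputs` from the card's snippet
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], float(probs[0, pred_id]))
```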
Darein/Def
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 278.06 +/- 13.17 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
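This card repeats the same TODO usage stub; beyond the loading-and-evaluation sketch given after the earlier PPO card, a hedged single-episode rollout is shown below. The repo id and filename are again placeholders, and the classic gym API used by stable-baselines3 1.x is assumed.

```python
# Hedged sketch: load a PPO agent and roll out one LunarLander-v2 episode.
# repo_id/filename are placeholders; classic gym API (reset -> obs, step -> 4-tuple).
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

model = PPO.load(load_from_hub(
    repo_id="your-username/ppo-LunarLander-v2",  # placeholder
    filename="ppo-LunarLander-v2.zip",           # placeholder
))
env = gym.make("LunarLander-v2")
obs, done, episode_return = env.reset(), False, 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    episode_return += reward
print("episode return:", episode_return)
env.close()
```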
DarkWolf/kn-electra-small
[ "pytorch", "electra", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - summarization - generated_from_trainer metrics: - rouge model-index: - name: mt5-small-finetuned-amazon-en-es results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mt5-small-finetuned-amazon-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 9.2585 - Rouge1: 6.1835 - Rouge2: 0.0 - Rougel: 5.8333 - Rougelsum: 6.1835 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:| | 24.1065 | 1.0 | 11 | 32.7123 | 7.342 | 1.5385 | 7.1515 | 7.342 | | 22.6474 | 2.0 | 22 | 19.7137 | 6.1039 | 0.0 | 5.7143 | 6.1039 | | 16.319 | 3.0 | 33 | 12.8543 | 6.1039 | 0.0 | 5.7143 | 6.1039 | | 16.3224 | 4.0 | 44 | 10.1929 | 5.9524 | 0.0 | 5.7143 | 5.9524 | | 15.0599 | 5.0 | 55 | 9.9186 | 5.9524 | 0.0 | 5.7143 | 5.9524 | | 14.6053 | 6.0 | 66 | 9.3235 | 6.1835 | 0.0 | 5.8333 | 6.1835 | | 14.4345 | 7.0 | 77 | 9.1621 | 6.1835 | 0.0 | 5.8333 | 6.1835 | | 13.7973 | 8.0 | 88 | 9.2585 | 6.1835 | 0.0 | 5.8333 | 6.1835 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu113 - Datasets 2.3.2 - Tokenizers 0.12.1
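The card above is a summarization fine-tune of google/mt5-small; a hedged inference sketch with the summarization pipeline follows. The checkpoint id is a placeholder, since this row does not state the actual repo, and the review text is illustrative.

```python
# Hedged sketch: summarization with an mT5 checkpoint like the one described above.
# The model id is a placeholder, not a confirmed repo.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="your-username/mt5-small-finetuned-amazon-en-es",  # placeholder
)
review = "I bought this blender last month and it has quickly become my favourite kitchen tool."
print(summarizer(review, max_length=30, min_length=5)[0]["summary_text"])
```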