Dataset columns (type and value/length ranges):

| Column | Type | Values / Lengths |
|:---|:---|:---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 198 values |
| text | stringlengths | 1 – 900k |
| metadata | stringlengths | 2 – 438k |
| id | stringlengths | 5 – 122 |
| last_modified | null | |
| tags | listlengths | 1 – 1.84k |
| sha | null | |
| created_at | stringlengths | 25 – 25 |
| arxiv | listlengths | 0 – 201 |
| languages | listlengths | 0 – 1.83k |
| tags_str | stringlengths | 17 – 9.34k |
| text_str | stringlengths | 0 – 389k |
| text_lists | listlengths | 0 – 722 |
| processed_texts | listlengths | 1 – 723 |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dpo_helfulhelpful_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr1e-05
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
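For reference, the settings above correspond roughly to the following TRL sketch. This is a minimal, hypothetical reconstruction (the dataset, LoRA config, and output path are assumptions; the actual training script is not published), using the `DPOTrainer` API of the TRL releases contemporary with the framework versions listed below:
```python
# A minimal sketch, not the author's exact script; dataset and LoRA config are assumptions.
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
tokenizer.pad_token = tokenizer.eos_token  # Mistral's tokenizer ships without a pad token

# Hypothetical stand-in for the (unknown) preference dataset: prompt/chosen/rejected columns.
preference_dataset = Dataset.from_dict({
    "prompt": ["Explain DPO briefly."],
    "chosen": ["DPO fine-tunes a model directly on preference pairs."],
    "rejected": ["I don't know."],
})

args = TrainingArguments(
    output_dir="dpo-mistral7b",      # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective (total) train batch size: 1 * 8 = 8
    warmup_steps=15,
    max_steps=5000,
    lr_scheduler_type="linear",
    seed=42,
)

trainer = DPOTrainer(
    model,
    ref_model=None,                  # with a PEFT adapter, the frozen base model serves as the reference
    args=args,
    beta=0.1,                        # the "beta0.1" in the model name
    train_dataset=preference_dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),
)
trainer.train()
```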
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "library_name": "peft", "tags": ["trl", "dpo", "generated_from_trainer"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "dpo_helfulhelpful_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr1e-05", "results": []}]}
|
Holarissun/dpo_helfulhelpful_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr1e-05
| null |
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null |
2024-04-14T10:13:28+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #dpo #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us
|
# dpo_helfulhelpful_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr1e-05
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 15
- training_steps: 5000
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# dpo_helfulhelpful_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr1e-05\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 15\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #safetensors #trl #dpo #generated_from_trainer #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #region-us \n",
"# dpo_helfulhelpful_gamma0.0_beta0.1_subset20000_modelmistral7b_maxsteps5000_bz8_lr1e-05\n\nThis model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 1e-05\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 8\n- total_train_batch_size: 8\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 15\n- training_steps: 5000",
"### Training results",
"### Framework versions\n\n- PEFT 0.9.0\n- Transformers 4.38.2\n- Pytorch 2.1.2\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
reinforcement-learning
| null |
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # older course notebooks use `import gym` instead

# `load_from_hub` is the helper defined in the Hugging Face Deep RL Course notebooks.
model = load_from_hub(repo_id="tarpalsus/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False, etc.)
env = gym.make(model["env_id"])
```
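Once loaded, the Q-table can be rolled out greedily. A minimal sketch continuing from the snippet above, assuming the pickled dictionary exposes a `"qtable"` entry (as in the Deep RL Course format) and a Gymnasium-style `step()` API:
```python
import numpy as np

# Greedy rollout: always take the highest-value action from the Q-table.
state, info = env.reset()
done = False
total_reward = 0
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
env.close()
```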
|
{"tags": ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation"], "model-index": [{"name": "q-Taxi-v3", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "Taxi-v3", "type": "Taxi-v3"}, "metrics": [{"type": "mean_reward", "value": "7.54 +/- 2.73", "name": "mean_reward", "verified": false}]}]}]}
|
tarpalsus/q-Taxi-v3
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | null |
2024-04-14T10:14:52+00:00
|
[] |
[] |
TAGS
#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us
|
# Q-Learning Agent playing Taxi-v3
This is a trained model of a Q-Learning agent playing Taxi-v3.
## Usage
|
[
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
[
"TAGS\n#Taxi-v3 #q-learning #reinforcement-learning #custom-implementation #model-index #region-us \n",
"# Q-Learning Agent playing1 Taxi-v3\n This is a trained model of a Q-Learning agent playing Taxi-v3 .\n\n ## Usage"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fluent-noisy-wav2vec
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0129
- Wer: 0.2656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 25
- mixed_precision_training: Native AMP
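As a point of reference, these settings map onto `transformers.TrainingArguments` roughly as follows (a sketch; the output path is a placeholder and the actual training script is not published):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="fluent-noisy-wav2vec",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=25,
    fp16=True,                          # "Native AMP" mixed-precision training
)
```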
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.5477 | 1.26 | 500 | 2.9258 | 1.0 |
| 1.6916 | 2.53 | 1000 | 0.4439 | 0.5218 |
| 0.4069 | 3.79 | 1500 | 0.0990 | 0.3524 |
| 0.2584 | 5.05 | 2000 | 0.0812 | 0.3256 |
| 0.1954 | 6.31 | 2500 | 0.0340 | 0.2825 |
| 0.1391 | 7.58 | 3000 | 0.0691 | 0.3046 |
| 0.1378 | 8.84 | 3500 | 0.0334 | 0.2848 |
| 0.1088 | 10.1 | 4000 | 0.0349 | 0.2871 |
| 0.0972 | 11.36 | 4500 | 0.0959 | 0.2761 |
| 0.0883 | 12.63 | 5000 | 0.0229 | 0.2726 |
| 0.0734 | 13.89 | 5500 | 0.0303 | 0.2772 |
| 0.0644 | 15.15 | 6000 | 0.0251 | 0.2755 |
| 0.0536 | 16.41 | 6500 | 0.0139 | 0.2714 |
| 0.0428 | 17.68 | 7000 | 0.0214 | 0.2685 |
| 0.0362 | 18.94 | 7500 | 0.0196 | 0.2667 |
| 0.0377 | 20.2 | 8000 | 0.0257 | 0.2691 |
| 0.0289 | 21.46 | 8500 | 0.0191 | 0.2673 |
| 0.0297 | 22.73 | 9000 | 0.0207 | 0.2667 |
| 0.029 | 23.99 | 9500 | 0.0129 | 0.2656 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["wer"], "base_model": "facebook/wav2vec2-base", "model-index": [{"name": "fluent-noisy-wav2vec", "results": []}]}
|
holmes26/fluent-noisy-wav2vec
| null |
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T10:17:54+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us
|
fluent-noisy-wav2vec
====================
This model is a fine-tuned version of facebook/wav2vec2-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0129
* Wer: 0.2656
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 1000
* num\_epochs: 25
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 25\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #base_model-facebook/wav2vec2-base #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 1000\n* num\\_epochs: 25\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-base-asr
This model is a fine-tuned version of [rinna/japanese-wav2vec2-base](https://huggingface.co/rinna/japanese-wav2vec2-base) on the [common_voice_11_0 dataset](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/viewer/ja) for ASR tasks.
This model can only predict Hiragana.
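A minimal transcription sketch (the audio file name is a placeholder; the input is resampled to the 16 kHz the model expects):
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model = Wav2Vec2ForCTC.from_pretrained("TKU410410103/wav2vec2-base-japanese-asr")
processor = Wav2Vec2Processor.from_pretrained("TKU410410103/wav2vec2-base-japanese-asr")

speech, sr = torchaudio.load("sample.wav")  # placeholder audio file
speech = torchaudio.functional.resample(speech, sr, 16000)[0]

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```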
## Acknowledgments
This model's fine-tuning approach was inspired by and references the training methodology used in [vumichien/wav2vec2-large-xlsr-japanese-hiragana](https://huggingface.co/vumichien/wav2vec2-large-xlsr-japanese-hiragana).
## Training Procedure
Fine-tuning on the common_voice_11_0 dataset led to the following results:
| Step | Training Loss | Validation Loss | WER |
|-------|---------------|-----------------|----------|
| 1000 | 6.088100 | 3.452597 | 1.000000 |
| 2000 | 2.816600 | 0.756278 | 0.263624 |
| 3000 | 0.837600 | 0.471486 | 0.185915 |
| 4000 | 0.624900 | 0.420854 | 0.159801 |
| 5000 | 0.533300 | 0.392494 | 0.149141 |
| 6000 | 0.490000 | 0.394669 | 0.144826 |
| 7000 | 0.441600 | 0.379999 | 0.141807 |
### Training hyperparameters
The training hyperparameters remained consistent throughout the fine-tuning process:
- learning_rate: 1e-4
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- num_train_epochs: 20
- warmup_steps: 2000
- lr_scheduler_type: linear
### How to evaluate the model
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from datasets import load_dataset
import torch
import torchaudio
import librosa
import numpy as np
import re
import MeCab
import pykakasi
from evaluate import load
model = Wav2Vec2ForCTC.from_pretrained('TKU410410103/wav2vec2-base-japanese-asr')
processor = Wav2Vec2Processor.from_pretrained("TKU410410103/wav2vec2-base-japanese-asr")
device = "cuda" if torch.cuda.is_available() else "cpu"  # `device` is referenced below but was never defined
model = model.to(device)
# load dataset
test_dataset = load_dataset('mozilla-foundation/common_voice_11_0', 'ja', split='test')
remove_columns = [col for col in test_dataset.column_names if col not in ['audio', 'sentence']]
test_dataset = test_dataset.remove_columns(remove_columns)
# resample
def process_waveforms(batch):
speech_arrays = []
sampling_rates = []
for audio_path in batch['audio']:
speech_array, _ = torchaudio.load(audio_path['path'])
speech_array_resampled = librosa.resample(np.asarray(speech_array[0].numpy()), orig_sr=48000, target_sr=16000)
speech_arrays.append(speech_array_resampled)
sampling_rates.append(16000)
batch["array"] = speech_arrays
batch["sampling_rate"] = sampling_rates
return batch
# hiragana
CHARS_TO_IGNORE = [",", "?", "¿", ".", "!", "¡", ";", ";", ":", '""', "%", '"', "�", "ʿ", "·", "჻", "~", "՞",
"؟", "،", "।", "॥", "«", "»", "„", "“", "”", "「", "」", "‘", "’", "《", "》", "(", ")", "[", "]",
"{", "}", "=", "`", "_", "+", "<", ">", "…", "–", "°", "´", "ʾ", "‹", "›", "©", "®", "—", "→", "。",
"、", "﹂", "﹁", "‧", "~", "﹏", ",", "{", "}", "(", ")", "[", "]", "【", "】", "‥", "〽",
"『", "』", "〝", "〟", "⟨", "⟩", "〜", ":", "!", "?", "♪", "؛", "/", "\\", "º", "−", "^", "'", "ʻ", "ˆ"]
chars_to_ignore_regex = f"[{re.escape(''.join(CHARS_TO_IGNORE))}]"
wakati = MeCab.Tagger("-Owakati")
kakasi = pykakasi.kakasi()
kakasi.setMode("J","H")
kakasi.setMode("K","H")
kakasi.setMode("r","Hepburn")
conv = kakasi.getConverter()
def prepare_char(batch):
batch["sentence"] = conv.do(wakati.parse(batch["sentence"]).strip())
batch["sentence"] = re.sub(chars_to_ignore_regex,'', batch["sentence"]).strip()
return batch
resampled_eval_dataset = test_dataset.map(process_waveforms, batched=True, batch_size=50, num_proc=4)
eval_dataset = resampled_eval_dataset.map(prepare_char, num_proc=4)
# begin the evaluation process
wer = load("wer")
cer = load("cer")
def evaluate(batch):
inputs = processor(batch["array"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to(device), attention_mask=inputs.attention_mask.to(device)).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
columns_to_remove = [column for column in eval_dataset.column_names if column != "sentence"]
batch_size = 16
result = eval_dataset.map(evaluate, remove_columns=columns_to_remove, batched=True, batch_size=batch_size)
wer_result = wer.compute(predictions=result["pred_strings"], references=result["sentence"])
cer_result = cer.compute(predictions=result["pred_strings"], references=result["sentence"])
print("WER: {:2f}%".format(100 * wer_result))
print("CER: {:2f}%".format(100 * cer_result))
```
### Test results
The final model was evaluated as follows:
On common_voice_11_0:
- WER: 14.177284%
- CER: 6.462501%
On ReazonSpeech (tiny):
- WER: 40.864413%
- CER: 29.367348%
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cu118
- Datasets 2.17.1
|
{"language": ["ja"], "license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["mozilla-foundation/common_voice_11_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "wav2vec2-base-japanese-asr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "common_voice_11_0", "type": "common_voice", "args": "ja"}, "metrics": [{"type": "wer", "value": 14.177284, "name": "Test WER"}, {"type": "cer", "value": 6.462501, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Reazonspeech", "type": "custom", "args": "ja"}, "metrics": [{"type": "wer", "value": 40.864413, "name": "Test WER"}, {"type": "cer", "value": 29.367348, "name": "Test CER"}]}]}]}
|
TKU410410103/wav2vec2-base-japanese-asr
| null |
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"ja",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T10:22:21+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #ja #dataset-mozilla-foundation/common_voice_11_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-base-asr
=================
This model is a fine-tuned version of rinna/japanese-wav2vec2-base on the common\_voice\_11\_0 dataset for ASR tasks.
This model can only predict Hiragana.
Acknowledgments
---------------
This model's fine-tuning approach was inspired by and references the training methodology used in vumichien/wav2vec2-large-xlsr-japanese-hiragana.
Training Procedure
------------------
Fine-tuning on the common\_voice\_11\_0 dataset led to the following results:
### Training hyperparameters
The training hyperparameters remained consistent throughout the fine-tuning process:
* learning\_rate: 1e-4
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* gradient\_accumulation\_steps: 2
* num\_train\_epochs: 20
* warmup\_steps: 2000
* lr\_scheduler\_type: linear
### How to evaluate the model
### Test results
The final model was evaluated as follows:
On common\_voice\_11\_0:
* WER: 14.177284%
* CER: 6.462501%
On ReazonSpeech (tiny):
* WER: 40.864413%
* CER: 29.367348%
### Framework versions
* Transformers 4.39.1
* Pytorch 2.2.1+cu118
* Datasets 2.17.1
|
[
"### Training hyperparameters\n\n\nThe training hyperparameters remained consistent throughout the fine-tuning process:\n\n\n* learning\\_rate: 1e-4\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* num\\_train\\_epochs: 20\n* warmup\\_steps: 2000\n* lr\\_scheduler\\_type: linear",
"### How to evaluate the model",
"### Test results\n\n\nThe final model was evaluated as follows:\n\n\nOn common\\_voice\\_11\\_0:\n\n\n* WER: 14.177284%\n* CER: 6.462501%\n\n\nOn reazonspeech(tiny):\n\n\n* WER: 40.864413%\n* CER: 29.367348%",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu118\n* Datasets 2.17.1"
] |
[
"TAGS\n#transformers #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #ja #dataset-mozilla-foundation/common_voice_11_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe training hyperparameters remained consistent throughout the fine-tuning process:\n\n\n* learning\\_rate: 1e-4\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* num\\_train\\_epochs: 20\n* warmup\\_steps: 2000\n* lr\\_scheduler\\_type: linear",
"### How to evaluate the model",
"### Test results\n\n\nThe final model was evaluated as follows:\n\n\nOn common\\_voice\\_11\\_0:\n\n\n* WER: 14.177284%\n* CER: 6.462501%\n\n\nOn reazonspeech(tiny):\n\n\n* WER: 40.864413%\n* CER: 29.367348%",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.1+cu118\n* Datasets 2.17.1"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/ibivibiv/aegolius-acadicus-24b-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
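As a concrete starting point, here is a minimal sketch using `llama-cpp-python` (one of several GGUF-capable runtimes; the file name below is a placeholder for a downloaded single-file quant):
```python
# pip install llama-cpp-python   (assumed; any GGUF-capable runtime works)
from llama_cpp import Llama

llm = Llama(
    model_path="aegolius-acadicus-24b-v2.Q4_K_S.gguf",  # placeholder local path
    n_ctx=4096,
)
out = llm("What is a mixture-of-experts model?", max_tokens=128)
print(out["choices"][0]["text"])
```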
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.Q2_K.gguf) | Q2_K | 8.9 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.IQ3_XS.gguf) | IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.Q3_K_S.gguf) | Q3_K_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.IQ3_S.gguf) | IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.IQ3_M.gguf) | IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.Q3_K_M.gguf) | Q3_K_M | 11.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.Q3_K_L.gguf) | Q3_K_L | 12.6 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.IQ4_XS.gguf) | IQ4_XS | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.Q4_K_S.gguf) | Q4_K_S | 13.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.Q4_K_M.gguf) | Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.Q5_K_S.gguf) | Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.Q5_K_M.gguf) | Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.Q6_K.gguf) | Q6_K | 19.9 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF/resolve/main/aegolius-acadicus-24b-v2.Q8_0.gguf) | Q8_0 | 25.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "moerge"], "base_model": "ibivibiv/aegolius-acadicus-24b-v2", "quantized_by": "mradermacher"}
|
mradermacher/aegolius-acadicus-24b-v2-GGUF
| null |
[
"transformers",
"gguf",
"moe",
"moerge",
"en",
"base_model:ibivibiv/aegolius-acadicus-24b-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T10:24:55+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #moe #moerge #en #base_model-ibivibiv/aegolius-acadicus-24b-v2 #license-apache-2.0 #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #moe #moerge #en #base_model-ibivibiv/aegolius-acadicus-24b-v2 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
# karasu-1.1B-slerp_reverse
karasu-1.1B-slerp_reverse is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [lightblue/karasu-1.1B](https://huggingface.co/lightblue/karasu-1.1B)
* [niryuu/Karasu-1.1b-chat-vector](https://huggingface.co/niryuu/Karasu-1.1b-chat-vector)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: lightblue/karasu-1.1B
layer_range: [0, 22]
- model: niryuu/Karasu-1.1b-chat-vector
layer_range: [0, 22]
merge_method: slerp
base_model: lightblue/karasu-1.1B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
# In a notebook: !pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/karasu-1.1B-slerp_reverse"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"tags": ["merge", "mergekit", "lazymergekit", "lightblue/karasu-1.1B", "niryuu/Karasu-1.1b-chat-vector"], "base_model": ["lightblue/karasu-1.1B", "niryuu/Karasu-1.1b-chat-vector"]}
|
aipib/karasu-1.1B-slerp_reverse
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"lightblue/karasu-1.1B",
"niryuu/Karasu-1.1b-chat-vector",
"base_model:lightblue/karasu-1.1B",
"base_model:niryuu/Karasu-1.1b-chat-vector",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T10:26:38+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #lightblue/karasu-1.1B #niryuu/Karasu-1.1b-chat-vector #base_model-lightblue/karasu-1.1B #base_model-niryuu/Karasu-1.1b-chat-vector #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# karasu-1.1B-slerp_reverse
karasu-1.1B-slerp_reverse is a merge of the following models using LazyMergekit:
* lightblue/karasu-1.1B
* niryuu/Karasu-1.1b-chat-vector
## Configuration
## Usage
|
[
"# karasu-1.1B-slerp_reverse\n\nkarasu-1.1B-slerp_reverse is a merge of the following models using LazyMergekit:\n* lightblue/karasu-1.1B\n* niryuu/Karasu-1.1b-chat-vector",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #merge #mergekit #lazymergekit #lightblue/karasu-1.1B #niryuu/Karasu-1.1b-chat-vector #base_model-lightblue/karasu-1.1B #base_model-niryuu/Karasu-1.1b-chat-vector #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# karasu-1.1B-slerp_reverse\n\nkarasu-1.1B-slerp_reverse is a merge of the following models using LazyMergekit:\n* lightblue/karasu-1.1B\n* niryuu/Karasu-1.1b-chat-vector",
"## Configuration",
"## Usage"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
andreidima/Llama-2-13b-Romanian-qlora
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null |
2024-04-14T10:29:22+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Shadowm7expMeliodaspercival_01_experiment26t3q-7B
Shadowm7expMeliodaspercival_01_experiment26t3q-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: mahiatlinux/ShadowM7EXP-7B
- model: MaziyarPanahi/MeliodasPercival_01_Experiment26T3q
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
## 💻 Usage
```python
# In a notebook: !pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Shadowm7expMeliodaspercival_01_experiment26t3q-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]}
|
automerger/Shadowm7expMeliodaspercival_01_experiment26t3q-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T10:35:24+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Shadowm7expMeliodaspercival_01_experiment26t3q-7B
Shadowm7expMeliodaspercival_01_experiment26t3q-7B is an automated merge created by Maxime Labonne using the following configuration.
## Configuration
## Usage
|
[
"# Shadowm7expMeliodaspercival_01_experiment26t3q-7B\n\nShadowm7expMeliodaspercival_01_experiment26t3q-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #automerger #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Shadowm7expMeliodaspercival_01_experiment26t3q-7B\n\nShadowm7expMeliodaspercival_01_experiment26t3q-7B is an automated merge created by Maxime Labonne using the following configuration.",
"## Configuration",
"## Usage"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
DuongTrongChi/gemma_custom_tokenizor
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T10:36:17+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
MD1998/FLAN-T5-V1
| null |
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T10:36:42+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
fastai
|
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
{"tags": ["fastai"]}
|
maviced/practica3
| null |
[
"fastai",
"has_space",
"region:us"
] | null |
2024-04-14T10:40:44+00:00
|
[] |
[] |
TAGS
#fastai #has_space #region-us
|
# Amazing!
Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the documentation here)!
2. Create a demo in Gradio or Streamlit using Spaces (documentation here).
3. Join the fastai community on the Fastai Discord!
Greetings fellow fastlearner ! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
[
"# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!",
"# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---",
"# Model card",
"## Model description\nMore information needed",
"## Intended uses & limitations\nMore information needed",
"## Training and evaluation data\nMore information needed"
] |
[
"TAGS\n#fastai #has_space #region-us \n",
"# Amazing!\n\n Congratulations on hosting your fastai model on the Hugging Face Hub!",
"# Some next steps\n1. Fill out this model card with more information (see the template below and the documentation here)!\n\n2. Create a demo in Gradio or Streamlit using Spaces (documentation here).\n\n3. Join the fastai community on the Fastai Discord!\n\nGreetings fellow fastlearner ! Don't forget to delete this content from your model card.\n\n\n---",
"# Model card",
"## Model description\nMore information needed",
"## Intended uses & limitations\nMore information needed",
"## Training and evaluation data\nMore information needed"
] |
video-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9178
- Accuracy: 0.3548
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 75
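As a rough illustration, these settings map onto the 🤗 `TrainingArguments` API along the following lines; the `output_dir` value is an assumed placeholder, not taken from the original run.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
# Note: Trainer's default optimizer is AdamW with betas=(0.9, 0.999)
# and epsilon=1e-08, matching the values reported here.
args = TrainingArguments(
    output_dir="videomae-base-finetuned-ucf101-subset",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=75,
)
```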
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8633 | 1.0 | 75 | 1.8419 | 0.3857 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "cc-by-nc-4.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "MCG-NJU/videomae-base", "model-index": [{"name": "videomae-base-finetuned-ucf101-subset", "results": []}]}
|
lystar/videomae-base-finetuned-ucf101-subset
| null |
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T10:40:55+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base #license-cc-by-nc-4.0 #endpoints_compatible #region-us
|
videomae-base-finetuned-ucf101-subset
=====================================
This model is a fine-tuned version of MCG-NJU/videomae-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.9178
* Accuracy: 0.3548
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* training\_steps: 75
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.1.2
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 75",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #videomae #video-classification #generated_from_trainer #base_model-MCG-NJU/videomae-base #license-cc-by-nc-4.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* training\\_steps: 75",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.1.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
ChitteshKK-hf/mistral_7b_guanaco
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T10:43:20+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Mistral-7B-Instruct-v0.2
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1
- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention
For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
This format is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating) via the `apply_chat_template()` method:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
messages = [
{"role": "user", "content": "What is your favourite condiment?"},
{"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
{"role": "user", "content": "Do you have mayonnaise recipes?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## Troubleshooting
- If you see the following error:
```
Traceback (most recent call last):
File "", line 1, in
File "/transformers/models/auto/auto_factory.py", line 482, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "/transformers/models/auto/configuration_auto.py", line 1022, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/transformers/models/auto/configuration_auto.py", line 723, in getitem
raise KeyError(key)
KeyError: 'mistral'
```
Installing transformers from source should solve the issue:

```
pip install git+https://github.com/huggingface/transformers
```

This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
{"license": "apache-2.0", "tags": ["finetuned"], "pipeline_tag": "text-generation", "inference": true, "widget": [{"messages": [{"role": "user", "content": "What is your favorite condiment?"}]}]}
|
alquimista888/mixtral_quantized
| null |
[
"transformers",
"pytorch",
"safetensors",
"gguf",
"mistral",
"text-generation",
"finetuned",
"conversational",
"arxiv:2310.06825",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T10:43:44+00:00
|
[
"2310.06825"
] |
[] |
TAGS
#transformers #pytorch #safetensors #gguf #mistral #text-generation #finetuned #conversational #arxiv-2310.06825 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Mistral-7B-Instruct-v0.2
The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1
- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention
For full details of this model please read our paper and release blog post.
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
E.g.
This format is available as a chat template via the 'apply_chat_template()' method:
## Troubleshooting
- If you see the following error:
Installing transformers from source should solve the issue
pip install git+URL
This should not be required after transformers-v4.33.4.
## Limitations
The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.
It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
## The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.
|
[
"# Model Card for Mistral-7B-Instruct-v0.2\n\nThe Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.\n\nMistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1\n- 32k context window (vs 8k context in v0.1)\n- Rope-theta = 1e6\n- No Sliding-Window Attention\n\nFor full details of this model please read our paper and release blog post.",
"## Instruction format\n\nIn order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.\n\nE.g.\n\n\nThis format is available as a chat template via the 'apply_chat_template()' method:",
"## Troubleshooting\n- If you see the following error:\n\n\nInstalling transformers from source should solve the issue\npip install git+URL\n\nThis should not be required after transformers-v4.33.4.",
"## Limitations\n\nThe Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. \nIt does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to\nmake the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.",
"## The Mistral AI Team\n\nAlbert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed."
] |
[
"TAGS\n#transformers #pytorch #safetensors #gguf #mistral #text-generation #finetuned #conversational #arxiv-2310.06825 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Mistral-7B-Instruct-v0.2\n\nThe Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.\n\nMistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1\n- 32k context window (vs 8k context in v0.1)\n- Rope-theta = 1e6\n- No Sliding-Window Attention\n\nFor full details of this model please read our paper and release blog post.",
"## Instruction format\n\nIn order to leverage instruction fine-tuning, your prompt should be surrounded by '[INST]' and '[/INST]' tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.\n\nE.g.\n\n\nThis format is available as a chat template via the 'apply_chat_template()' method:",
"## Troubleshooting\n- If you see the following error:\n\n\nInstalling transformers from source should solve the issue\npip install git+URL\n\nThis should not be required after transformers-v4.33.4.",
"## Limitations\n\nThe Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance. \nIt does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to\nmake the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.",
"## The Mistral AI Team\n\nAlbert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed."
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
hjawad367/segformer-b0-finetuned-ADE-20K-Kaggle
| null |
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T10:46:24+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbertfinetuneHS5E8BHLRVHS
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6647
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
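As a rough illustration, these settings correspond to a 🤗 `TrainingArguments` configuration along these lines; the `output_dir` value is an assumed placeholder, not taken from the original run.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="distilbertfinetuneHS5E8BHLRVHS",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```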
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7886 | 1.0 | 1000 | 1.5221 |
| 1.1733 | 2.0 | 2000 | 1.3578 |
| 0.8003 | 3.0 | 3000 | 1.3842 |
| 0.5553 | 4.0 | 4000 | 1.5867 |
| 0.4178 | 5.0 | 5000 | 1.6647 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbertfinetuneHS5E8BHLRVHS", "results": []}]}
|
KarthikAlagarsamy/distilbertfinetuneHS5E8BHLRVHS
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T10:55:03+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
|
distilbertfinetuneHS5E8BHLRVHS
==============================
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6647
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #distilbert #question-answering #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model :** LeroyDyer/Mixtral_AI_CyberTron_Ultra
# What does he NOT KNOW! That is the question!
### MOTTO FOR MODEL!
## Models are the same as LoRAs: take them lightly, they are like tablets of knowledge!
Exactly! (Models / LoRAs: is there a difference? Only mega merges make a true difference!
The small merges are just applying an adapter, lol. It's in there somewhere?)
### OK, it's a great MODEL! (My favorite go-to brain now! It will be fine-tuned even more, if I get cloud credits!)
Highly math-trained, as well as on many textbooks and lessons; highly fit datasets as well as coding datasets, highly tuned!
This model has absorbed all its previous generations as well as ALL high performers and specialist models (Mistral). It has absorbed many foreign-language models and still remains an English model!
Very impressive responses, short and long. It was also trained on some binary datasets to return a direct answer, on others to respond step by step, and on others to hold interactive conversations with clients for various tasks, such as product-design and system-design discussion.
Financial information and other financial tasks have been highly tuned as well. In fact, when returning to previously aligned datasets, it stayed in line and was still able to achieve high tuning!
Hence a process of merging with a specific topic or role and then training for that role and topic on themed data; hence previous iterations were heavily tuned for medical, law, or role play, as the concern was that integrating the model into a single entity might even corrupt them, so the decision to separate concerns was taken.
This enabled strategic merging and tuning!
Concepts: chain of thought, function calling, and Self-RAG! Thoughts and emotive responses have been enhanced where possible with the data given; even sexy books have been highly tuned into the model,
but I also think American genre books (sci-fi, fantasy, romance novels) are required for the great role play that some expect. :)
I have recently seen a strategy in which prompts can be embedded into the adapter to trigger specific roles.
I have tried to replace generic prompting such as "you are a helpful AI" with a character theme instead, such as "you are a cyber hacker by day and a businessman by night!", i.e. to give the model various internal personas!
After some training I noticed it was also talking to itself (rehearsing), but the tokens marking the thoughts were missing, so it looked strange until I noticed the bug:
after removing the thought tokens, they were displayed in the output, as the tokenizer had been masking them!
But still a great model. Given a task-based dataset it converges super quickly, hence my enjoyment of the model, as training it is super quick!
Now when I load up datasets, there are generally only a few bad steps before the loss begins to drop, holding a steady 0.6 or so while it works through the unseen new dataset; hence it does not need so many epochs to adjust its weights to the new information!
I'm not sure whether a LoRA actually works once saved, but I do save some and use them to jump-start models for training that did not receive that fine-tuning; they can be merged and aligned! (They are probably good!)
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "code", "medical ", "farmer", "doctor", "Mega-Series", "Cyber-Series", "Role-Play", "Self-Rag", "ThinkingBot", "milestone", "mega-series", "SpydazWebAI"], "datasets": ["gretelai/synthetic_text_to_sql", "HuggingFaceTB/cosmopedia", "teknium/OpenHermes-2.5", "Open-Orca/SlimOrca", "Open-Orca/OpenOrca", "cognitivecomputations/dolphin-coder", "databricks/databricks-dolly-15k", "yahma/alpaca-cleaned", "uonlp/CulturaX", "mwitiderrick/SwahiliPlatypus", "swahili", "Rogendo/English-Swahili-Sentence-Pairs", "ise-uiuc/Magicoder-Evol-Instruct-110K", "meta-math/MetaMathQA"], "metrics": ["accuracy", "bertscore", "bleu", "brier_score", "cer", "character", "charcut_mt", "chrf", "code_eval"], "base_model": "LeroyDyer/Mixtral_AI_CyberTron_Ultra"}
|
LeroyDyer/Mixtral_AI_CyberTron_Ultra
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"code",
"medical ",
"farmer",
"doctor",
"Mega-Series",
"Cyber-Series",
"Role-Play",
"Self-Rag",
"ThinkingBot",
"milestone",
"mega-series",
"SpydazWebAI",
"conversational",
"en",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:teknium/OpenHermes-2.5",
"dataset:Open-Orca/SlimOrca",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:databricks/databricks-dolly-15k",
"dataset:yahma/alpaca-cleaned",
"dataset:uonlp/CulturaX",
"dataset:mwitiderrick/SwahiliPlatypus",
"dataset:swahili",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:meta-math/MetaMathQA",
"base_model:LeroyDyer/Mixtral_AI_CyberTron_Ultra",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T10:56:47+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #milestone #mega-series #SpydazWebAI #conversational #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model : LeroyDyer/Mixtral_AI_CyberTron_Ultra
# What does he NOT KNOW ! that is the question!
### MOTTO FOR MODEL!
## Models are the same as loras , take them with light weight they are like tablets of knowledge!
Exactly ! ( models / loras ? is there a difference ? only mega merges make a true difference !
the small merges are just applying an adapter lol - Its in there somewhere?)
### Ok Its a Great MODEL ! (My Favorite Goto Brain now ! - will be fine tuned even more ! (if i get cloud credits))
Highly math-trained, as well as on many textbooks and lessons, tightly fit datasets, and highly tuned coding datasets!
This model has absorbed all of its previous generations as well as ALL of the high performers and specialist (Mistral) models. It has absorbed many foreign-language models and still remains an English model!
Very impressive responses, short and long: it was trained on some binary datasets to return a direct answer, on others to respond step by step, and on others to hold interactive conversations with clients on various tasks, such as product design and system design discussion.
Financial information and other financial tasks have been highly tuned as well. In fact, when returning to previously aligned datasets, it stayed in line and was still able to achieve high tuning!
Hence a process of merging with a specific topic or role and then training for that role and topic on themed data: earlier iterations were heavily tuned for medical, law, or role play, since the worry was that integrating everything into a single entity might even corrupt the models, so the decision was taken to separate concerns.
This enabled strategic merging and tuning!
Concepts: chain of thought, function calling, and Self-RAG! Thoughts and emotive responses have been enhanced where possible with the data given; even sexy books have been highly tuned into the model,
but I also think American genre books (sci-fi, fantasy, and romance novels) are required for the great role play that some expect. :)
I have recently seen a strategy in which prompts can be embedded into the adapter to trigger specific roles:
I have tried to replace generic prompting ("you are a helpful AI") with a character theme instead, such as "you are a cyber hacker by day and a businessman by night", i.e. to give the model various internal personas!
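As a rough sketch of that persona idea (a hedged example: the persona string, the prompt, and the chat-template usage are illustrative, not the exact recipe used here):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("LeroyDyer/Mixtral_AI_CyberTron_Ultra")

persona = "You are a cyber hacker by day and a businessman by night."
messages = [
    # Some Mistral chat templates reject a separate system role,
    # so the persona is prepended to the first user turn instead.
    {"role": "user", "content": persona + "\n\nOutline a product design review for a new payments app."},
]

# Render the conversation into a single prompt string using the model's chat template.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```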
After some training I noticed it was also talking to itself (rehearsing), but the thought tokens were missing, so it looked strange until I noticed the bug:
once the thought tokens were removed they were displayed in the output, as the tokenizer had been masking them!
But it is still a great model. Given a task-based dataset it converges super quickly, hence my enjoyment of the model: training it is super fast!
Now when I load up datasets there are generally only a few bad steps before the loss begins to drop, holding steady around 0.6 while the unseen dataset loads, so not many epochs are needed to adjust the weights to the new information!
I'm not sure whether LoRAs fully carry over when you save them, but I do save some and use them to initialize models for training, as they are jump starts for models that did not receive that fine-tuning; they can be merged and aligned! (They are probably good!)
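As a rough sketch of that jump-start workflow (the paths and the merge step are assumptions, not the exact recipe used here):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load a base checkpoint that never received the fine-tuning in question.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

# "./saved_lora" is a hypothetical path to a previously saved adapter.
model = PeftModel.from_pretrained(base, "./saved_lora")

# Optionally bake the adapter into the weights so the result can be
# trained further or merged again like an ordinary checkpoint.
model = model.merge_and_unload()
model.save_pretrained("./jump_started_model")
```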
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL" width="200"/>
|
[
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_CyberTron_Ultra",
"# What does he NOT KNOW ! that is the question!",
"### MOTTO FOR MODEL!",
"## Models are the same as loras , take them with light weight they are like tablets of knowledge! \nExactly ! ( models / loras ? is there a difference ? only mega merges make a true difference ! \n the small merges are just applying an adapter lol - Its in there somewhere?)",
"### Ok Its a Great MODEL ! (My Favorite Goto Brain now ! - will be fine tuned even more ! (if i get cloud credits)) \n\n\n\nHighly Math Trained As well as many TextBooks and Lessons Highly fit datasets as well as Coding Datasets highly tuned! \n\nThis model has absorbed all its previous generations as well as ALL high performers and Specialist models (mistral) It has absorb many foriegn languge models and still stays as an english model !\n\nVery impressive responses Short and long as also it was trained on some binary datasets to return a direct answer! and others to perform step by step response as wel as other to perform interactive response with clients for vairous tasks, such as product design and system design discussion:\n\nFinacial information and other finacial tasks have been highly tunes also : Infact when returning to previous aligned datasets they stayed in line and was sdtill able to achieve High tuning!\nHence a process of merging with a specific topic or role and then training for the role and topic on themed data, hence previous itterations heavily tuned for medical or law or role play as the conception was that intergating the model into a single enity may even corrput them , so the decision to seperate concerns was taken :\nThis enabled for ssstrategic merging and tuning !\n\nConcepts : chain of thought and functin calling Self rag ! Thoughts , emotive responses have been enhance where possibel with the data given . even sexy books have been highly tuned into the model : \nbut also i think american genera books (sci fi, fantasy, romance novels are required) for great role play which some expect: )\nI have recently seen a strategy in which prompts can be embedded into the adapter to Trigger Specific Roles : \nI hae tried to remove such prompting as you are a helpful ai to a character theme instead such as you are a cyber hacker by day and business man by night ! ie to give the model various internal personas !\nafter some training i noticed it was also talking to itself !! (rehersing) but the tokens for thought were missing so it lookeed strange until i noticed the bug; \nAfter removing the thought tokens they were displayed in the output as the tokenizer was masking them !\n\nBut Still a Great Model , Given a Task based data set it Coverges Super quickly hence my enjoyment of the model as training of it is super quick !\nNow when ii load up datasets : they are generally only a few bad steps before it begins to drop below zero maintaining a steady 0.6 etc whilst loading the unnseen new dataset , hence not needing so many epochs to adjust the matrix to the new information !\n\nIm not sure if Lora actually works when you save them but i do save some and use them to load models for training ! as they are jump starts for model which did not recive that fine tuning , they can be merged and alligned ! (probably thiey are Good! )\n\n\n\n\n\n\n\n\n\n\n\n\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #milestone #mega-series #SpydazWebAI #conversational #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_CyberTron_Ultra",
"# What does he NOT KNOW ! that is the question!",
"### MOTTO FOR MODEL!",
"## Models are the same as loras , take them with light weight they are like tablets of knowledge! \nExactly ! ( models / loras ? is there a difference ? only mega merges make a true difference ! \n the small merges are just applying an adapter lol - Its in there somewhere?)",
"### Ok Its a Great MODEL ! (My Favorite Goto Brain now ! - will be fine tuned even more ! (if i get cloud credits)) \n\n\n\nHighly Math Trained As well as many TextBooks and Lessons Highly fit datasets as well as Coding Datasets highly tuned! \n\nThis model has absorbed all its previous generations as well as ALL high performers and Specialist models (mistral) It has absorb many foriegn languge models and still stays as an english model !\n\nVery impressive responses Short and long as also it was trained on some binary datasets to return a direct answer! and others to perform step by step response as wel as other to perform interactive response with clients for vairous tasks, such as product design and system design discussion:\n\nFinacial information and other finacial tasks have been highly tunes also : Infact when returning to previous aligned datasets they stayed in line and was sdtill able to achieve High tuning!\nHence a process of merging with a specific topic or role and then training for the role and topic on themed data, hence previous itterations heavily tuned for medical or law or role play as the conception was that intergating the model into a single enity may even corrput them , so the decision to seperate concerns was taken :\nThis enabled for ssstrategic merging and tuning !\n\nConcepts : chain of thought and functin calling Self rag ! Thoughts , emotive responses have been enhance where possibel with the data given . even sexy books have been highly tuned into the model : \nbut also i think american genera books (sci fi, fantasy, romance novels are required) for great role play which some expect: )\nI have recently seen a strategy in which prompts can be embedded into the adapter to Trigger Specific Roles : \nI hae tried to remove such prompting as you are a helpful ai to a character theme instead such as you are a cyber hacker by day and business man by night ! ie to give the model various internal personas !\nafter some training i noticed it was also talking to itself !! (rehersing) but the tokens for thought were missing so it lookeed strange until i noticed the bug; \nAfter removing the thought tokens they were displayed in the output as the tokenizer was masking them !\n\nBut Still a Great Model , Given a Task based data set it Coverges Super quickly hence my enjoyment of the model as training of it is super quick !\nNow when ii load up datasets : they are generally only a few bad steps before it begins to drop below zero maintaining a steady 0.6 etc whilst loading the unnseen new dataset , hence not needing so many epochs to adjust the matrix to the new information !\n\nIm not sure if Lora actually works when you save them but i do save some and use them to load models for training ! as they are jump starts for model which did not recive that fine tuning , they can be merged and alligned ! (probably thiey are Good! )\n\n\n\n\n\n\n\n\n\n\n\n\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
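No snippet is given; the following is a minimal sketch that assumes the repository hosts tokenizer files (suggested by the repo id `mogesa/my-tokenizer`, but not confirmed by this card):

```python
from transformers import AutoTokenizer

# Assumption: the repo contains tokenizer files; nothing in this card confirms it.
tokenizer = AutoTokenizer.from_pretrained("mogesa/my-tokenizer")

ids = tokenizer("Hello, world!")["input_ids"]
print(ids)                    # token ids produced by the tokenizer
print(tokenizer.decode(ids))  # round-trip back to text
```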
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
mogesa/my-tokenizer
| null |
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T10:58:04+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model :** LeroyDyer/Mixtral_AI_CyberTron_Ultra
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
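For reference, a minimal sketch of the Unsloth + TRL loop described above; the sequence length, LoRA rank, target modules, dataset, and step count are placeholders, not the settings actually used:

```python
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the stated base model in 4-bit and attach LoRA adapters (placeholder hyperparameters).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LeroyDyer/Mixtral_AI_CyberTron_Ultra",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Tiny stand-in dataset so the sketch is self-contained.
train_dataset = Dataset.from_dict({"text": ["### Question: What is 2 + 2?\n### Answer: 4"]})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(output_dir="outputs", max_steps=10, per_device_train_batch_size=1),
)
trainer.train()
```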
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "LeroyDyer/Mixtral_AI_CyberTron_Ultra"}
|
LeroyDyer/Brain_3_0_LORA
| null |
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:LeroyDyer/Mixtral_AI_CyberTron_Ultra",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:01:55+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model : LeroyDyer/Mixtral_AI_CyberTron_Ultra
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL" width="200"/>
|
[
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_CyberTron_Ultra\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #text-generation-inference #unsloth #mistral #trl #en #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_CyberTron_Ultra\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
image-segmentation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1850
- Mean Iou: 0.0074
- Mean Accuracy: 0.0858
- Overall Accuracy: 0.0532
- Accuracy Unlabeled: 0.0
- Accuracy Flat-road: 0.0521
- Accuracy Flat-sidewalk: 0.2880
- Accuracy Flat-crosswalk: 0.0002
- Accuracy Flat-cyclinglane: 0.0414
- Accuracy Flat-parkingdriveway: nan
- Accuracy Flat-railtrack: 0.0
- Accuracy Flat-curb: 0.0
- Accuracy Human-person: 0.0
- Accuracy Human-rider: 0.0
- Accuracy Vehicle-car: 0.7261
- Accuracy Vehicle-truck: 0.0
- Accuracy Vehicle-bus: 0.0
- Accuracy Vehicle-tramtrain: 0.0
- Accuracy Vehicle-motorcycle: 0.0
- Accuracy Vehicle-bicycle: nan
- Accuracy Vehicle-caravan: 0.0
- Accuracy Vehicle-cartrailer: 0.0
- Accuracy Construction-building: 0.8347
- Accuracy Construction-door: 0.0
- Accuracy Construction-wall: 0.0011
- Accuracy Construction-fenceguardrail: 0.0
- Accuracy Construction-bridge: nan
- Accuracy Construction-tunnel: 0.0
- Accuracy Construction-stairs: 0.0
- Accuracy Object-pole: 0.0
- Accuracy Object-trafficsign: 0.0
- Accuracy Object-trafficlight: 0.0
- Accuracy Nature-vegetation: 0.7158
- Accuracy Nature-terrain: 0.0
- Accuracy Sky: 0.0001
- Accuracy Void-ground: 0.0
- Accuracy Void-dynamic: 0.0
- Accuracy Void-static: 0.0
- Accuracy Void-unclear: nan
- Iou Unlabeled: 0.0
- Iou Flat-road: 0.0373
- Iou Flat-sidewalk: 0.0114
- Iou Flat-crosswalk: 0.0002
- Iou Flat-cyclinglane: 0.0279
- Iou Flat-parkingdriveway: 0.0
- Iou Flat-railtrack: 0.0
- Iou Flat-curb: 0.0
- Iou Human-person: 0.0
- Iou Human-rider: 0.0
- Iou Vehicle-car: 0.0081
- Iou Vehicle-truck: 0.0
- Iou Vehicle-bus: 0.0
- Iou Vehicle-tramtrain: 0.0
- Iou Vehicle-motorcycle: 0.0
- Iou Vehicle-bicycle: nan
- Iou Vehicle-caravan: 0.0
- Iou Vehicle-cartrailer: 0.0
- Iou Construction-building: 0.0229
- Iou Construction-door: 0.0
- Iou Construction-wall: 0.0011
- Iou Construction-fenceguardrail: 0.0
- Iou Construction-bridge: 0.0
- Iou Construction-tunnel: 0.0
- Iou Construction-stairs: 0.0
- Iou Object-pole: 0.0
- Iou Object-trafficsign: 0.0
- Iou Object-trafficlight: 0.0
- Iou Nature-vegetation: 0.1347
- Iou Nature-terrain: 0.0
- Iou Sky: 0.0000
- Iou Void-ground: 0.0
- Iou Void-dynamic: 0.0
- Iou Void-static: 0.0
- Iou Void-unclear: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
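Most sections of this card are placeholders. As a minimal inference sketch (the local checkpoint path is hypothetical, and the evaluation numbers above suggest the model is far from converged):

```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor.from_pretrained("nvidia/mit-b0")
# Hypothetical local path to this fine-tuned checkpoint.
model = SegformerForSemanticSegmentation.from_pretrained("./segformer-b0-finetuned-segments-sidewalk-2")

image = Image.open("sidewalk.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, height / 4, width / 4)

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # (height, width) map of class indices
```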
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
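A minimal sketch of how these settings map onto the 🤗 Trainer API; the dataset column names ("pixel_values", "label") and the 35-class count follow the segments/sidewalk-semantic conventions and the category list above, and are assumptions rather than the exact training script:

```python
from datasets import load_dataset
from transformers import (
    SegformerForSemanticSegmentation,
    SegformerImageProcessor,
    Trainer,
    TrainingArguments,
)

# 35 labels, counting the categories reported in the evaluation above.
model = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=35)
processor = SegformerImageProcessor()

ds = load_dataset("segments/sidewalk-semantic", split="train").train_test_split(test_size=0.1, seed=42)

def transform(batch):
    # Turn raw images and masks into pixel_values / labels tensors.
    return processor(batch["pixel_values"], batch["label"], return_tensors="pt")

args = TrainingArguments(
    output_dir="segformer-b0-finetuned-segments-sidewalk-2",
    learning_rate=1e-6,
    per_device_train_batch_size=6,
    per_device_eval_batch_size=6,
    num_train_epochs=40,
    seed=42,
    lr_scheduler_type="linear",
    remove_unused_columns=False,  # keep raw columns available to the transform
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=ds["train"].with_transform(transform),
    eval_dataset=ds["test"].with_transform(transform),
)
trainer.train()
```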
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Flat-road | Accuracy Flat-sidewalk | Accuracy Flat-crosswalk | Accuracy Flat-cyclinglane | Accuracy Flat-parkingdriveway | Accuracy Flat-railtrack | Accuracy Flat-curb | Accuracy Human-person | Accuracy Human-rider | Accuracy Vehicle-car | Accuracy Vehicle-truck | Accuracy Vehicle-bus | Accuracy Vehicle-tramtrain | Accuracy Vehicle-motorcycle | Accuracy Vehicle-bicycle | Accuracy Vehicle-caravan | Accuracy Vehicle-cartrailer | Accuracy Construction-building | Accuracy Construction-door | Accuracy Construction-wall | Accuracy Construction-fenceguardrail | Accuracy Construction-bridge | Accuracy Construction-tunnel | Accuracy Construction-stairs | Accuracy Object-pole | Accuracy Object-trafficsign | Accuracy Object-trafficlight | Accuracy Nature-vegetation | Accuracy Nature-terrain | Accuracy Sky | Accuracy Void-ground | Accuracy Void-dynamic | Accuracy Void-static | Accuracy Void-unclear | Iou Unlabeled | Iou Flat-road | Iou Flat-sidewalk | Iou Flat-crosswalk | Iou Flat-cyclinglane | Iou Flat-parkingdriveway | Iou Flat-railtrack | Iou Flat-curb | Iou Human-person | Iou Human-rider | Iou Vehicle-car | Iou Vehicle-truck | Iou Vehicle-bus | Iou Vehicle-tramtrain | Iou Vehicle-motorcycle | Iou Vehicle-bicycle | Iou Vehicle-caravan | Iou Vehicle-cartrailer | Iou Construction-building | Iou Construction-door | Iou Construction-wall | Iou Construction-fenceguardrail | Iou Construction-bridge | Iou Construction-tunnel | Iou Construction-stairs | Iou Object-pole | Iou Object-trafficsign | Iou Object-trafficlight | Iou Nature-vegetation | Iou Nature-terrain | Iou Sky | Iou Void-ground | Iou Void-dynamic | Iou Void-static | Iou Void-unclear |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------:|:------------------:|:----------------------:|:-----------------------:|:-------------------------:|:-----------------------------:|:-----------------------:|:------------------:|:---------------------:|:--------------------:|:--------------------:|:----------------------:|:--------------------:|:--------------------------:|:---------------------------:|:------------------------:|:------------------------:|:---------------------------:|:------------------------------:|:--------------------------:|:--------------------------:|:------------------------------------:|:----------------------------:|:----------------------------:|:----------------------------:|:--------------------:|:---------------------------:|:----------------------------:|:--------------------------:|:-----------------------:|:------------:|:--------------------:|:---------------------:|:--------------------:|:---------------------:|:-------------:|:-------------:|:-----------------:|:------------------:|:--------------------:|:------------------------:|:------------------:|:-------------:|:----------------:|:---------------:|:---------------:|:-----------------:|:---------------:|:---------------------:|:----------------------:|:-------------------:|:-------------------:|:----------------------:|:-------------------------:|:---------------------:|:---------------------:|:-------------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:---------------:|:----------------------:|:-----------------------:|:---------------------:|:------------------:|:-------:|:---------------:|:----------------:|:---------------:|:----------------:|
| 3.583 | 0.43 | 50 | 3.5690 | 0.0035 | 0.0283 | 0.0143 | 0.0179 | 0.0003 | 0.0008 | 0.0087 | 0.2546 | nan | 0.0010 | 0.0022 | 0.0015 | 0.0234 | 0.2546 | 0.0 | 0.0330 | 0.0 | 0.0004 | nan | 0.0 | 0.0105 | 0.0 | 0.0027 | 0.0120 | 0.0 | nan | 0.0 | 0.0000 | 0.0061 | 0.0439 | 0.0009 | 0.0312 | 0.0055 | 0.0143 | 0.0679 | 0.0008 | 0.0841 | nan | 0.0156 | 0.0003 | 0.0008 | 0.0069 | 0.0354 | 0.0 | 0.0007 | 0.0004 | 0.0001 | 0.0096 | 0.0010 | 0.0 | 0.0001 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0097 | 0.0 | 0.0023 | 0.0081 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0033 | 0.0005 | 0.0009 | 0.0144 | 0.0041 | 0.0056 | 0.0014 | 0.0007 | 0.0001 | 0.0 |
| 3.541 | 0.85 | 100 | 3.5349 | 0.0040 | 0.0309 | 0.0152 | 0.0195 | 0.0003 | 0.0033 | 0.0139 | 0.2434 | nan | 0.0007 | 0.0027 | 0.0032 | 0.0195 | 0.3347 | 0.0 | 0.0174 | 0.0 | 0.0005 | nan | 0.0 | 0.0105 | 0.0003 | 0.0033 | 0.0132 | 0.0 | nan | 0.0 | 0.0 | 0.0086 | 0.0445 | 0.0007 | 0.0547 | 0.0053 | 0.0147 | 0.0689 | 0.0011 | 0.0731 | nan | 0.0166 | 0.0003 | 0.0029 | 0.0102 | 0.0339 | 0.0 | 0.0005 | 0.0004 | 0.0002 | 0.0079 | 0.0013 | 0.0 | 0.0000 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0098 | 0.0002 | 0.0028 | 0.0089 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0046 | 0.0005 | 0.0007 | 0.0263 | 0.0039 | 0.0056 | 0.0014 | 0.0009 | 0.0001 | 0.0 |
| 3.4975 | 1.28 | 150 | 3.5016 | 0.0044 | 0.0336 | 0.0165 | 0.0157 | 0.0004 | 0.0078 | 0.0120 | 0.2803 | nan | 0.0004 | 0.0011 | 0.0018 | 0.0176 | 0.4099 | 0.0 | 0.0113 | 0.0 | 0.0004 | nan | 0.0 | 0.0090 | 0.0008 | 0.0025 | 0.0113 | 0.0 | nan | 0.0 | 0.0 | 0.0141 | 0.0421 | 0.0004 | 0.0806 | 0.0057 | 0.0142 | 0.0624 | 0.0011 | 0.0390 | nan | 0.0136 | 0.0004 | 0.0044 | 0.0095 | 0.0338 | 0.0 | 0.0003 | 0.0002 | 0.0001 | 0.0071 | 0.0016 | 0.0 | 0.0000 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0085 | 0.0005 | 0.0022 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0073 | 0.0004 | 0.0004 | 0.0415 | 0.0042 | 0.0057 | 0.0015 | 0.0009 | 0.0000 | 0.0 |
| 3.5169 | 1.71 | 200 | 3.4636 | 0.0052 | 0.0399 | 0.0195 | 0.0159 | 0.0003 | 0.0224 | 0.0081 | 0.3090 | nan | 0.0002 | 0.0010 | 0.0022 | 0.0136 | 0.4941 | 0.0 | 0.0071 | 0.0 | 0.0003 | nan | 0.0 | 0.0112 | 0.0032 | 0.0018 | 0.0168 | 0.0 | nan | 0.0 | 0.0 | 0.0127 | 0.0408 | 0.0002 | 0.1364 | 0.0040 | 0.0128 | 0.0652 | 0.0014 | 0.0562 | nan | 0.0138 | 0.0003 | 0.0047 | 0.0068 | 0.0366 | 0.0 | 0.0001 | 0.0002 | 0.0002 | 0.0062 | 0.0021 | 0.0 | 0.0000 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0106 | 0.0016 | 0.0017 | 0.0111 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0071 | 0.0006 | 0.0002 | 0.0657 | 0.0031 | 0.0052 | 0.0017 | 0.0012 | 0.0000 | 0.0 |
| 3.3938 | 2.14 | 250 | 3.4255 | 0.0057 | 0.0433 | 0.0208 | 0.0139 | 0.0005 | 0.0461 | 0.0090 | 0.2873 | nan | 0.0002 | 0.0015 | 0.0016 | 0.0119 | 0.5499 | 0.0 | 0.0067 | 0.0 | 0.0003 | nan | 0.0 | 0.0101 | 0.0044 | 0.0021 | 0.0197 | 0.0 | nan | 0.0 | 0.0 | 0.0134 | 0.0390 | 0.0001 | 0.1878 | 0.0037 | 0.0157 | 0.0644 | 0.0011 | 0.0534 | nan | 0.0121 | 0.0005 | 0.0054 | 0.0075 | 0.0382 | 0.0 | 0.0001 | 0.0003 | 0.0002 | 0.0052 | 0.0025 | 0.0 | 0.0000 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0096 | 0.0021 | 0.0019 | 0.0132 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0071 | 0.0006 | 0.0001 | 0.0801 | 0.0029 | 0.0062 | 0.0018 | 0.0010 | 0.0000 | 0.0 |
| 3.4231 | 2.56 | 300 | 3.3861 | 0.0062 | 0.0466 | 0.0232 | 0.0130 | 0.0005 | 0.0759 | 0.0072 | 0.2834 | nan | 0.0001 | 0.0006 | 0.0022 | 0.0097 | 0.5913 | 0.0 | 0.0017 | 0.0 | 0.0003 | nan | 0.0 | 0.0109 | 0.0069 | 0.0018 | 0.0223 | 0.0 | nan | 0.0 | 0.0 | 0.0136 | 0.0389 | 0.0000 | 0.2467 | 0.0033 | 0.0148 | 0.0719 | 0.0010 | 0.0262 | nan | 0.0113 | 0.0005 | 0.0060 | 0.0062 | 0.0426 | 0.0 | 0.0001 | 0.0002 | 0.0003 | 0.0045 | 0.0031 | 0.0 | 0.0000 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0105 | 0.0031 | 0.0016 | 0.0150 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0072 | 0.0008 | 0.0000 | 0.0909 | 0.0027 | 0.0059 | 0.0019 | 0.0010 | 0.0000 | 0.0 |
| 3.3664 | 2.99 | 350 | 3.3542 | 0.0065 | 0.0503 | 0.0246 | 0.0112 | 0.0006 | 0.1034 | 0.0054 | 0.2629 | nan | 0.0002 | 0.0002 | 0.0015 | 0.0066 | 0.6758 | 0.0 | 0.0009 | 0.0 | 0.0002 | nan | 0.0 | 0.0073 | 0.0123 | 0.0020 | 0.0218 | 0.0 | nan | 0.0 | 0.0 | 0.0118 | 0.0363 | 0.0 | 0.3115 | 0.0032 | 0.0066 | 0.0634 | 0.0008 | 0.0131 | nan | 0.0099 | 0.0006 | 0.0065 | 0.0048 | 0.0455 | 0.0 | 0.0001 | 0.0001 | 0.0002 | 0.0035 | 0.0031 | 0.0 | 0.0000 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0071 | 0.0042 | 0.0019 | 0.0151 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0072 | 0.0008 | 0.0 | 0.1091 | 0.0027 | 0.0030 | 0.0021 | 0.0007 | 0.0000 | 0.0 |
| 3.3089 | 3.42 | 400 | 3.3039 | 0.0069 | 0.0551 | 0.0281 | 0.0107 | 0.0006 | 0.1407 | 0.0043 | 0.2687 | nan | 0.0002 | 0.0002 | 0.0011 | 0.0051 | 0.6984 | 0.0 | 0.0002 | 0.0 | 0.0002 | nan | 0.0 | 0.0081 | 0.0164 | 0.0018 | 0.0267 | 0.0 | nan | 0.0 | 0.0 | 0.0104 | 0.0366 | 0.0 | 0.3861 | 0.0041 | 0.0079 | 0.0729 | 0.0007 | 0.0069 | nan | 0.0094 | 0.0006 | 0.0075 | 0.0039 | 0.0495 | 0.0 | 0.0002 | 0.0001 | 0.0002 | 0.0025 | 0.0037 | 0.0 | 0.0000 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0079 | 0.0054 | 0.0018 | 0.0179 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0066 | 0.0010 | 0.0 | 0.1123 | 0.0033 | 0.0032 | 0.0025 | 0.0007 | 0.0000 | 0.0 |
| 3.2464 | 3.85 | 450 | 3.2705 | 0.0070 | 0.0575 | 0.0288 | 0.0092 | 0.0006 | 0.1587 | 0.0043 | 0.2538 | nan | 0.0001 | 0.0 | 0.0001 | 0.0027 | 0.7500 | 0.0 | 0.0001 | 0.0 | 0.0001 | nan | 0.0 | 0.0068 | 0.0248 | 0.0020 | 0.0284 | 0.0 | nan | 0.0 | 0.0 | 0.0089 | 0.0347 | 0.0 | 0.4160 | 0.0040 | 0.0057 | 0.0694 | 0.0005 | 0.0007 | nan | 0.0083 | 0.0006 | 0.0080 | 0.0040 | 0.0504 | 0.0 | 0.0001 | 0.0 | 0.0000 | 0.0016 | 0.0036 | 0.0 | 0.0000 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0067 | 0.0065 | 0.0019 | 0.0191 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0060 | 0.0012 | 0.0 | 0.1181 | 0.0032 | 0.0025 | 0.0026 | 0.0005 | 0.0000 | 0.0 |
| 3.2788 | 4.27 | 500 | 3.2230 | 0.0074 | 0.0616 | 0.0320 | 0.0100 | 0.0009 | 0.1784 | 0.0041 | 0.2465 | nan | 0.0001 | 0.0 | 0.0 | 0.0020 | 0.7658 | 0.0 | 0.0 | 0.0 | 0.0001 | nan | 0.0 | 0.0060 | 0.0325 | 0.0018 | 0.0373 | 0.0 | nan | 0.0 | 0.0 | 0.0075 | 0.0318 | 0.0 | 0.4977 | 0.0046 | 0.0049 | 0.0783 | 0.0004 | 0.0 | nan | 0.0089 | 0.0009 | 0.0084 | 0.0038 | 0.0523 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0012 | 0.0040 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0058 | 0.0071 | 0.0018 | 0.0231 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0048 | 0.0013 | 0.0 | 0.1257 | 0.0037 | 0.0023 | 0.0029 | 0.0004 | 0.0 | 0.0 |
| 3.2179 | 4.7 | 550 | 3.1821 | 0.0074 | 0.0641 | 0.0337 | 0.0090 | 0.0011 | 0.2000 | 0.0030 | 0.2313 | nan | 0.0001 | 0.0 | 0.0 | 0.0016 | 0.7777 | 0.0 | 0.0 | 0.0 | 0.0000 | nan | 0.0 | 0.0054 | 0.0473 | 0.0018 | 0.0371 | 0.0 | nan | 0.0 | 0.0 | 0.0042 | 0.0317 | 0.0 | 0.5484 | 0.0049 | 0.0047 | 0.0782 | 0.0004 | 0.0 | nan | 0.0081 | 0.0011 | 0.0090 | 0.0028 | 0.0523 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0010 | 0.0042 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0053 | 0.0088 | 0.0018 | 0.0229 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0027 | 0.0016 | 0.0 | 0.1282 | 0.0039 | 0.0022 | 0.0034 | 0.0003 | 0.0 | 0.0 |
| 3.1906 | 5.13 | 600 | 3.1424 | 0.0074 | 0.0651 | 0.0338 | 0.0104 | 0.0015 | 0.2149 | 0.0034 | 0.2067 | nan | 0.0001 | 0.0 | 0.0 | 0.0011 | 0.8076 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0044 | 0.0642 | 0.0017 | 0.0322 | 0.0 | nan | 0.0 | 0.0 | 0.0024 | 0.0266 | 0.0 | 0.5584 | 0.0044 | 0.0047 | 0.0734 | 0.0002 | 0.0 | nan | 0.0093 | 0.0015 | 0.0094 | 0.0032 | 0.0515 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0008 | 0.0041 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0044 | 0.0099 | 0.0017 | 0.0211 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0016 | 0.0016 | 0.0 | 0.1304 | 0.0035 | 0.0021 | 0.0034 | 0.0002 | 0.0 | 0.0 |
| 3.2075 | 5.56 | 650 | 3.1091 | 0.0075 | 0.0678 | 0.0367 | 0.0088 | 0.0019 | 0.2249 | 0.0032 | 0.2201 | nan | 0.0000 | 0.0 | 0.0 | 0.0009 | 0.8058 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0035 | 0.0846 | 0.0014 | 0.0332 | 0.0 | nan | 0.0 | 0.0 | 0.0009 | 0.0145 | 0.0 | 0.6252 | 0.0050 | 0.0049 | 0.0639 | 0.0002 | 0.0 | nan | 0.0080 | 0.0018 | 0.0096 | 0.0030 | 0.0525 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0006 | 0.0042 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0034 | 0.0114 | 0.0014 | 0.0220 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0013 | 0.0 | 0.1327 | 0.0041 | 0.0022 | 0.0035 | 0.0002 | 0.0 | 0.0 |
| 3.0888 | 5.98 | 700 | 3.0596 | 0.0075 | 0.0690 | 0.0364 | 0.0080 | 0.0025 | 0.2672 | 0.0028 | 0.1936 | nan | 0.0001 | 0.0 | 0.0 | 0.0008 | 0.8043 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0033 | 0.1121 | 0.0018 | 0.0380 | 0.0 | nan | 0.0 | 0.0000 | 0.0007 | 0.0109 | 0.0 | 0.6151 | 0.0053 | 0.0040 | 0.0672 | 0.0001 | 0.0 | nan | 0.0073 | 0.0024 | 0.0108 | 0.0027 | 0.0502 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0006 | 0.0044 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0033 | 0.0125 | 0.0018 | 0.0244 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0005 | 0.0012 | 0.0 | 0.1313 | 0.0043 | 0.0017 | 0.0041 | 0.0001 | 0.0 | 0.0 |
| 3.1177 | 6.41 | 750 | 3.0297 | 0.0075 | 0.0700 | 0.0387 | 0.0082 | 0.0029 | 0.2403 | 0.0022 | 0.2030 | nan | 0.0000 | 0.0 | 0.0 | 0.0004 | 0.8050 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0028 | 0.1223 | 0.0012 | 0.0318 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0030 | 0.0 | 0.6823 | 0.0056 | 0.0042 | 0.0556 | 0.0001 | 0.0 | nan | 0.0077 | 0.0028 | 0.0099 | 0.0022 | 0.0513 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0003 | 0.0042 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0028 | 0.0131 | 0.0012 | 0.0210 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0005 | 0.0 | 0.1343 | 0.0045 | 0.0018 | 0.0039 | 0.0001 | 0.0 | 0.0 |
| 3.0981 | 6.84 | 800 | 2.9921 | 0.0075 | 0.0716 | 0.0382 | 0.0071 | 0.0029 | 0.3004 | 0.0017 | 0.1831 | nan | 0.0000 | 0.0 | 0.0 | 0.0004 | 0.8008 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0026 | 0.1617 | 0.0013 | 0.0377 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0003 | 0.0 | 0.6584 | 0.0052 | 0.0035 | 0.0532 | 0.0001 | 0.0 | nan | 0.0066 | 0.0028 | 0.0117 | 0.0016 | 0.0489 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0003 | 0.0045 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0026 | 0.0139 | 0.0013 | 0.0245 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0001 | 0.0 | 0.1324 | 0.0042 | 0.0013 | 0.0044 | 0.0001 | 0.0 | 0.0 |
| 2.9986 | 7.26 | 850 | 2.9500 | 0.0073 | 0.0718 | 0.0379 | 0.0051 | 0.0029 | 0.3269 | 0.0015 | 0.1622 | nan | 0.0000 | 0.0 | 0.0 | 0.0002 | 0.7995 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0019 | 0.1661 | 0.0011 | 0.0342 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.6631 | 0.0057 | 0.0034 | 0.0505 | 0.0000 | 0.0 | nan | 0.0049 | 0.0028 | 0.0124 | 0.0015 | 0.0477 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0002 | 0.0043 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0019 | 0.0139 | 0.0011 | 0.0226 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.1328 | 0.0044 | 0.0012 | 0.0047 | 0.0000 | 0.0 | 0.0 |
| 3.01 | 7.69 | 900 | 2.9155 | 0.0075 | 0.0726 | 0.0397 | 0.0057 | 0.0048 | 0.2827 | 0.0016 | 0.1780 | nan | 0.0 | 0.0 | 0.0 | 0.0001 | 0.7988 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0018 | 0.1896 | 0.0014 | 0.0313 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7004 | 0.0053 | 0.0027 | 0.0462 | 0.0 | 0.0 | nan | 0.0054 | 0.0044 | 0.0112 | 0.0016 | 0.0495 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0043 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0018 | 0.0146 | 0.0014 | 0.0215 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1358 | 0.0041 | 0.0010 | 0.0043 | 0.0 | 0.0 | 0.0 |
| 2.7829 | 8.12 | 950 | 2.9108 | 0.0075 | 0.0737 | 0.0396 | 0.0046 | 0.0056 | 0.3126 | 0.0016 | 0.1523 | nan | 0.0 | 0.0 | 0.0 | 0.0001 | 0.7949 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0012 | 0.2263 | 0.0013 | 0.0359 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.6970 | 0.0047 | 0.0030 | 0.0449 | 0.0 | 0.0 | nan | 0.0045 | 0.0051 | 0.0120 | 0.0016 | 0.0474 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0045 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0012 | 0.0158 | 0.0013 | 0.0238 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.1359 | 0.0036 | 0.0010 | 0.0057 | 0.0 | 0.0 | 0.0 |
| 2.8855 | 8.55 | 1000 | 2.8632 | 0.0075 | 0.0762 | 0.0407 | 0.0035 | 0.0060 | 0.3065 | 0.0012 | 0.1668 | nan | 0.0 | 0.0 | 0.0 | 0.0000 | 0.7886 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0011 | 0.2899 | 0.0015 | 0.0307 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.7177 | 0.0046 | 0.0028 | 0.0425 | 0.0 | 0.0 | nan | 0.0034 | 0.0053 | 0.0118 | 0.0012 | 0.0482 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0047 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0011 | 0.0177 | 0.0015 | 0.0216 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.1363 | 0.0035 | 0.0008 | 0.0067 | 0.0 | 0.0 | 0.0 |
| 2.9367 | 8.97 | 1050 | 2.8275 | 0.0073 | 0.0747 | 0.0400 | 0.0028 | 0.0064 | 0.3273 | 0.0011 | 0.1432 | nan | 0.0 | 0.0 | 0.0 | 0.0000 | 0.7941 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0007 | 0.2668 | 0.0013 | 0.0213 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7122 | 0.0036 | 0.0029 | 0.0306 | 0.0 | 0.0 | nan | 0.0027 | 0.0057 | 0.0123 | 0.0011 | 0.0455 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0044 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0169 | 0.0013 | 0.0157 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1387 | 0.0028 | 0.0007 | 0.0061 | 0.0 | 0.0 | 0.0 |
| 2.7874 | 9.4 | 1100 | 2.7847 | 0.0072 | 0.0772 | 0.0396 | 0.0020 | 0.0044 | 0.3967 | 0.0012 | 0.1218 | nan | 0.0 | 0.0 | 0.0 | 0.0000 | 0.7892 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0007 | 0.3182 | 0.0009 | 0.0262 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7052 | 0.0033 | 0.0022 | 0.0224 | 0.0 | 0.0 | nan | 0.0020 | 0.0040 | 0.0142 | 0.0012 | 0.0427 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0045 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0007 | 0.0183 | 0.0009 | 0.0183 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1382 | 0.0025 | 0.0005 | 0.0057 | 0.0 | 0.0 | 0.0 |
| 2.7519 | 9.83 | 1150 | 2.7757 | 0.0074 | 0.0773 | 0.0408 | 0.0016 | 0.0078 | 0.3511 | 0.0007 | 0.1389 | nan | 0.0 | 0.0 | 0.0 | 0.0000 | 0.7855 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0005 | 0.3444 | 0.0012 | 0.0257 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7131 | 0.0030 | 0.0018 | 0.0220 | 0.0 | 0.0 | nan | 0.0016 | 0.0068 | 0.0130 | 0.0007 | 0.0451 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0048 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0005 | 0.0190 | 0.0012 | 0.0186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1375 | 0.0023 | 0.0004 | 0.0067 | 0.0 | 0.0 | 0.0 |
| 2.8513 | 10.26 | 1200 | 2.7939 | 0.0076 | 0.0773 | 0.0420 | 0.0021 | 0.0113 | 0.3051 | 0.0007 | 0.1357 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7816 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0003 | 0.3811 | 0.0013 | 0.0213 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7334 | 0.0018 | 0.0024 | 0.0175 | 0.0 | 0.0 | nan | 0.0021 | 0.0096 | 0.0117 | 0.0007 | 0.0452 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0047 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0003 | 0.0194 | 0.0013 | 0.0157 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1406 | 0.0014 | 0.0005 | 0.0058 | 0.0 | 0.0 | 0.0 |
| 2.9147 | 10.68 | 1250 | 2.7415 | 0.0073 | 0.0800 | 0.0437 | 0.0016 | 0.0124 | 0.3254 | 0.0004 | 0.1348 | nan | 0.0 | 0.0 | 0.0 | 0.0000 | 0.7681 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0003 | 0.4424 | 0.0013 | 0.0158 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7583 | 0.0027 | 0.0016 | 0.0161 | 0.0 | 0.0 | nan | 0.0016 | 0.0105 | 0.0123 | 0.0004 | 0.0444 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0055 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0003 | 0.0210 | 0.0013 | 0.0118 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1361 | 0.0020 | 0.0004 | 0.0062 | 0.0 | 0.0 | 0.0 |
| 2.9311 | 11.11 | 1300 | 2.7368 | 0.0071 | 0.0793 | 0.0437 | 0.0013 | 0.0122 | 0.3176 | 0.0005 | 0.1358 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7735 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0001 | 0.4230 | 0.0011 | 0.0100 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7709 | 0.0014 | 0.0014 | 0.0102 | 0.0 | 0.0 | nan | 0.0012 | 0.0103 | 0.0121 | 0.0005 | 0.0446 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0051 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0001 | 0.0205 | 0.0011 | 0.0079 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1390 | 0.0011 | 0.0003 | 0.0044 | 0.0 | 0.0 | 0.0 |
| 2.7803 | 11.54 | 1350 | 2.6650 | 0.0074 | 0.0813 | 0.0444 | 0.0014 | 0.0140 | 0.3329 | 0.0005 | 0.1260 | nan | 0.0 | 0.0 | 0.0 | 0.0000 | 0.7640 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0002 | 0.4862 | 0.0011 | 0.0122 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7668 | 0.0018 | 0.0015 | 0.0107 | 0.0 | 0.0 | nan | 0.0014 | 0.0116 | 0.0125 | 0.0005 | 0.0432 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0057 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0002 | 0.0218 | 0.0011 | 0.0093 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1369 | 0.0014 | 0.0003 | 0.0050 | 0.0 | 0.0 | 0.0 |
| 2.8496 | 11.97 | 1400 | 2.6764 | 0.0073 | 0.0794 | 0.0434 | 0.0009 | 0.0170 | 0.3417 | 0.0005 | 0.0959 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7733 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0001 | 0.4785 | 0.0010 | 0.0090 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7354 | 0.0010 | 0.0012 | 0.0066 | 0.0 | 0.0 | nan | 0.0009 | 0.0139 | 0.0128 | 0.0005 | 0.0392 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0051 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0001 | 0.0210 | 0.0010 | 0.0073 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1408 | 0.0007 | 0.0002 | 0.0035 | 0.0 | 0.0 | 0.0 |
| 2.9356 | 12.39 | 1450 | 2.6741 | 0.0075 | 0.0819 | 0.0451 | 0.0009 | 0.0190 | 0.3118 | 0.0005 | 0.1236 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7635 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0001 | 0.5493 | 0.0008 | 0.0110 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7487 | 0.0006 | 0.0008 | 0.0074 | 0.0 | 0.0 | nan | 0.0009 | 0.0154 | 0.0120 | 0.0005 | 0.0447 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0055 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0001 | 0.0220 | 0.0008 | 0.0086 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1411 | 0.0004 | 0.0002 | 0.0040 | 0.0 | 0.0 | 0.0 |
| 2.7973 | 12.82 | 1500 | 2.6146 | 0.0073 | 0.0828 | 0.0454 | 0.0007 | 0.0191 | 0.3251 | 0.0005 | 0.1108 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7611 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0001 | 0.5792 | 0.0008 | 0.0056 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7606 | 0.0005 | 0.0009 | 0.0028 | 0.0 | 0.0 | nan | 0.0007 | 0.0155 | 0.0123 | 0.0004 | 0.0422 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0056 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0001 | 0.0227 | 0.0008 | 0.0046 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1418 | 0.0003 | 0.0002 | 0.0018 | 0.0 | 0.0 | 0.0 |
| 2.5603 | 13.25 | 1550 | 2.6177 | 0.0074 | 0.0830 | 0.0463 | 0.0007 | 0.0228 | 0.3112 | 0.0006 | 0.1018 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7591 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.6096 | 0.0007 | 0.0062 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7584 | 0.0002 | 0.0007 | 0.0022 | 0.0 | 0.0 | nan | 0.0007 | 0.0181 | 0.0120 | 0.0006 | 0.0412 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0056 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0228 | 0.0007 | 0.0051 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1425 | 0.0001 | 0.0001 | 0.0015 | 0.0 | 0.0 | 0.0 |
| 2.7181 | 13.68 | 1600 | 2.5973 | 0.0074 | 0.0822 | 0.0472 | 0.0005 | 0.0246 | 0.2926 | 0.0005 | 0.1116 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7616 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0001 | 0.5796 | 0.0007 | 0.0033 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7706 | 0.0002 | 0.0003 | 0.0013 | 0.0 | 0.0 | nan | 0.0005 | 0.0195 | 0.0114 | 0.0005 | 0.0438 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0056 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0001 | 0.0230 | 0.0007 | 0.0028 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1411 | 0.0002 | 0.0001 | 0.0009 | 0.0 | 0.0 | 0.0 |
| 2.6034 | 14.1 | 1650 | 2.5914 | 0.0073 | 0.0843 | 0.0469 | 0.0005 | 0.0228 | 0.3196 | 0.0004 | 0.1056 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7544 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0001 | 0.6371 | 0.0006 | 0.0031 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7686 | 0.0002 | 0.0004 | 0.0013 | 0.0 | 0.0 | nan | 0.0005 | 0.0181 | 0.0122 | 0.0004 | 0.0417 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0060 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0001 | 0.0238 | 0.0006 | 0.0027 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1402 | 0.0001 | 0.0001 | 0.0009 | 0.0 | 0.0 | 0.0 |
| 2.6792 | 14.53 | 1700 | 2.5773 | 0.0073 | 0.0857 | 0.0463 | 0.0003 | 0.0232 | 0.3367 | 0.0005 | 0.0953 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7459 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7004 | 0.0004 | 0.0066 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7443 | 0.0000 | 0.0002 | 0.0015 | 0.0 | 0.0 | nan | 0.0003 | 0.0184 | 0.0127 | 0.0005 | 0.0395 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0237 | 0.0004 | 0.0055 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1410 | 0.0000 | 0.0000 | 0.0011 | 0.0 | 0.0 | 0.0 |
| 2.5936 | 14.96 | 1750 | 2.5147 | 0.0072 | 0.0859 | 0.0459 | 0.0003 | 0.0228 | 0.3530 | 0.0005 | 0.0813 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7462 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7110 | 0.0004 | 0.0043 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7411 | 0.0000 | 0.0002 | 0.0006 | 0.0 | 0.0 | nan | 0.0003 | 0.0181 | 0.0131 | 0.0005 | 0.0362 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0236 | 0.0004 | 0.0037 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1414 | 0.0000 | 0.0000 | 0.0005 | 0.0 | 0.0 | 0.0 |
| 2.7846 | 15.38 | 1800 | 2.5288 | 0.0074 | 0.0855 | 0.0491 | 0.0003 | 0.0307 | 0.3131 | 0.0004 | 0.0913 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7464 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.6968 | 0.0005 | 0.0032 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7653 | 0.0000 | 0.0001 | 0.0008 | 0.0 | 0.0 | nan | 0.0003 | 0.0237 | 0.0121 | 0.0004 | 0.0395 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0063 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0242 | 0.0005 | 0.0028 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1411 | 0.0000 | 0.0000 | 0.0006 | 0.0 | 0.0 | 0.0 |
| 2.7533 | 15.81 | 1850 | 2.5347 | 0.0074 | 0.0849 | 0.0480 | 0.0002 | 0.0296 | 0.3023 | 0.0005 | 0.0928 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7478 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7060 | 0.0004 | 0.0031 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7481 | 0.0000 | 0.0001 | 0.0002 | 0.0 | 0.0 | nan | 0.0002 | 0.0229 | 0.0117 | 0.0005 | 0.0408 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0060 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0239 | 0.0004 | 0.0027 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1413 | 0.0000 | 0.0000 | 0.0001 | 0.0 | 0.0 | 0.0 |
| 2.6202 | 16.24 | 1900 | 2.5233 | 0.0074 | 0.0854 | 0.0484 | 0.0002 | 0.0319 | 0.3158 | 0.0005 | 0.0869 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7452 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7275 | 0.0004 | 0.0038 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7336 | 0.0 | 0.0000 | 0.0003 | 0.0 | 0.0 | nan | 0.0002 | 0.0246 | 0.0121 | 0.0005 | 0.0393 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0062 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0235 | 0.0004 | 0.0034 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1412 | 0.0 | 0.0000 | 0.0002 | 0.0 | 0.0 | 0.0 |
| 2.7831 | 16.67 | 1950 | 2.4872 | 0.0072 | 0.0865 | 0.0486 | 0.0001 | 0.0311 | 0.3382 | 0.0005 | 0.0763 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7401 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7483 | 0.0002 | 0.0027 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7429 | 0.0000 | 0.0000 | 0.0006 | 0.0 | 0.0 | nan | 0.0001 | 0.0240 | 0.0128 | 0.0005 | 0.0355 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0067 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0240 | 0.0002 | 0.0024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1395 | 0.0000 | 0.0000 | 0.0005 | 0.0 | 0.0 | 0.0 |
| 2.6771 | 17.09 | 2000 | 2.4556 | 0.0073 | 0.0866 | 0.0501 | 0.0002 | 0.0324 | 0.3120 | 0.0006 | 0.0705 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7385 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7390 | 0.0002 | 0.0023 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7900 | 0.0 | 0.0000 | 0.0001 | 0.0 | 0.0 | nan | 0.0002 | 0.0248 | 0.0120 | 0.0005 | 0.0346 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0067 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0243 | 0.0002 | 0.0020 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1415 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | 0.0 |
| 2.4096 | 17.52 | 2050 | 2.4846 | 0.0075 | 0.0859 | 0.0507 | 0.0001 | 0.0400 | 0.2959 | 0.0005 | 0.0812 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7376 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7732 | 0.0002 | 0.0027 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7321 | 0.0 | 0.0000 | 0.0005 | 0.0 | 0.0 | nan | 0.0001 | 0.0301 | 0.0116 | 0.0005 | 0.0383 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0069 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0234 | 0.0002 | 0.0024 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1399 | 0.0 | 0.0000 | 0.0004 | 0.0 | 0.0 | 0.0 |
| 2.4462 | 17.95 | 2100 | 2.4614 | 0.0075 | 0.0857 | 0.0510 | 0.0001 | 0.0405 | 0.2981 | 0.0004 | 0.0742 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7409 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7569 | 0.0002 | 0.0023 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7419 | 0.0 | 0.0 | 0.0004 | 0.0 | 0.0 | nan | 0.0001 | 0.0303 | 0.0116 | 0.0004 | 0.0373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0066 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0237 | 0.0002 | 0.0021 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1409 | 0.0 | 0.0 | 0.0003 | 0.0 | 0.0 | 0.0 |
| 2.4129 | 18.38 | 2150 | 2.4534 | 0.0075 | 0.0858 | 0.0504 | 0.0001 | 0.0399 | 0.3024 | 0.0004 | 0.0796 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7393 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7732 | 0.0001 | 0.0027 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7230 | 0.0 | 0.0 | 0.0003 | 0.0 | 0.0 | nan | 0.0001 | 0.0300 | 0.0118 | 0.0004 | 0.0388 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0068 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0235 | 0.0001 | 0.0025 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1393 | 0.0 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 |
| 2.4048 | 18.8 | 2200 | 2.4267 | 0.0073 | 0.0858 | 0.0474 | 0.0001 | 0.0349 | 0.3520 | 0.0004 | 0.0593 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7348 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.7945 | 0.0001 | 0.0046 | 0.0 | nan | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.6776 | 0.0 | 0.0000 | 0.0004 | 0.0 | 0.0 | nan | 0.0001 | 0.0265 | 0.0132 | 0.0004 | 0.0314 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0071 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0226 | 0.0001 | 0.0043 | 0.0 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.1353 | 0.0 | 0.0000 | 0.0003 | 0.0 | 0.0 | nan |
| 2.4369 | 19.23 | 2250 | 2.4101 | 0.0075 | 0.0866 | 0.0501 | 0.0000 | 0.0377 | 0.3220 | 0.0004 | 0.0694 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7360 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7833 | 0.0001 | 0.0025 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7331 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0000 | 0.0284 | 0.0123 | 0.0004 | 0.0352 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0070 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0233 | 0.0001 | 0.0023 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1399 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | nan |
| 2.4132 | 19.66 | 2300 | 2.4152 | 0.0078 | 0.0863 | 0.0530 | 0.0001 | 0.0452 | 0.2873 | 0.0003 | 0.0791 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7368 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7715 | 0.0001 | 0.0022 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7518 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0001 | 0.0334 | 0.0113 | 0.0003 | 0.0395 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0070 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0238 | 0.0001 | 0.0020 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1405 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | nan |
| 2.3273 | 20.09 | 2350 | 2.3988 | 0.0075 | 0.0866 | 0.0500 | 0.0000 | 0.0381 | 0.3221 | 0.0004 | 0.0691 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7351 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7904 | 0.0001 | 0.0023 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7258 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan | 0.0000 | 0.0287 | 0.0124 | 0.0004 | 0.0353 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0071 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0231 | 0.0001 | 0.0022 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1392 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan |
| 2.7112 | 20.51 | 2400 | 2.3673 | 0.0074 | 0.0866 | 0.0520 | 0.0000 | 0.0426 | 0.3041 | 0.0004 | 0.0661 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7337 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7843 | 0.0001 | 0.0018 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7502 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0000 | 0.0317 | 0.0118 | 0.0004 | 0.0347 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0073 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0236 | 0.0001 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1397 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 |
| 2.4112 | 20.94 | 2450 | 2.3806 | 0.0078 | 0.0864 | 0.0528 | 0.0000 | 0.0441 | 0.2836 | 0.0004 | 0.0771 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7364 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7745 | 0.0001 | 0.0018 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7590 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan | 0.0000 | 0.0327 | 0.0112 | 0.0004 | 0.0391 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0069 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0235 | 0.0001 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1416 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.4846 | 21.37 | 2500 | 2.3986 | 0.0075 | 0.0856 | 0.0530 | 0.0000 | 0.0499 | 0.2838 | 0.0004 | 0.0694 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7335 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.7987 | 0.0001 | 0.0024 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7143 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | nan | 0.0000 | 0.0364 | 0.0113 | 0.0004 | 0.0369 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0072 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0227 | 0.0001 | 0.0022 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1385 | 0.0 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 |
| 2.5263 | 21.79 | 2550 | 2.3352 | 0.0074 | 0.0864 | 0.0496 | 0.0000 | 0.0387 | 0.3347 | 0.0004 | 0.0515 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7344 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.7991 | 0.0001 | 0.0022 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7161 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan | 0.0000 | 0.0290 | 0.0127 | 0.0004 | 0.0297 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0071 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0227 | 0.0001 | 0.0021 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1389 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan |
| 2.453 | 22.22 | 2600 | 2.3465 | 0.0076 | 0.0859 | 0.0539 | 0.0000 | 0.0505 | 0.2887 | 0.0003 | 0.0537 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7331 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7891 | 0.0001 | 0.0023 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7443 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan | 0.0000 | 0.0368 | 0.0114 | 0.0003 | 0.0315 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0074 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0235 | 0.0001 | 0.0021 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1386 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan |
| 2.7019 | 22.65 | 2650 | 2.3168 | 0.0075 | 0.0868 | 0.0511 | 0.0000 | 0.0408 | 0.3173 | 0.0004 | 0.0563 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7320 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7998 | 0.0001 | 0.0023 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7425 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0000 | 0.0303 | 0.0122 | 0.0004 | 0.0317 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0074 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0232 | 0.0001 | 0.0021 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1395 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.7737 | 23.08 | 2700 | 2.3540 | 0.0074 | 0.0859 | 0.0512 | 0.0000 | 0.0454 | 0.2955 | 0.0004 | 0.0657 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7311 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8175 | 0.0001 | 0.0020 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7042 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0000 | 0.0334 | 0.0116 | 0.0004 | 0.0359 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0073 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0222 | 0.0001 | 0.0019 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1383 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.369 | 23.5 | 2750 | 2.3262 | 0.0076 | 0.0873 | 0.0528 | 0.0000 | 0.0433 | 0.3015 | 0.0004 | 0.0616 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7300 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.7972 | 0.0001 | 0.0020 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7695 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | nan | 0.0000 | 0.0320 | 0.0118 | 0.0004 | 0.0341 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0076 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0000 | 0.0234 | 0.0001 | 0.0019 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1410 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.4282 | 23.93 | 2800 | 2.3334 | 0.0077 | 0.0866 | 0.0519 | 0.0000 | 0.0442 | 0.2954 | 0.0004 | 0.0671 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7318 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8095 | 0.0001 | 0.0018 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7334 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0000 | 0.0325 | 0.0116 | 0.0004 | 0.0366 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0073 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0226 | 0.0001 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1401 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.2932 | 24.36 | 2850 | 2.2981 | 0.0076 | 0.0864 | 0.0520 | 0.0000 | 0.0451 | 0.3003 | 0.0003 | 0.0589 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7315 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8105 | 0.0001 | 0.0018 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7309 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0000 | 0.0332 | 0.0117 | 0.0003 | 0.0335 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0074 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0227 | 0.0001 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1399 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.334 | 24.79 | 2900 | 2.3066 | 0.0071 | 0.0863 | 0.0526 | 0.0000 | 0.0481 | 0.3031 | 0.0003 | 0.0563 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7274 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8170 | 0.0000 | 0.0023 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7204 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan | 0.0000 | 0.0351 | 0.0118 | 0.0003 | 0.0327 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0079 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0230 | 0.0000 | 0.0021 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1365 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | 0.0 |
| 2.547 | 25.21 | 2950 | 2.3064 | 0.0077 | 0.0861 | 0.0537 | 0.0000 | 0.0512 | 0.2805 | 0.0003 | 0.0633 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7264 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8193 | 0.0000 | 0.0019 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7269 | 0.0 | 0.0001 | 0.0000 | 0.0 | 0.0 | nan | 0.0000 | 0.0370 | 0.0111 | 0.0003 | 0.0358 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0079 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0228 | 0.0000 | 0.0018 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1373 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan |
| 2.3814 | 25.64 | 3000 | 2.3104 | 0.0076 | 0.0862 | 0.0519 | 0.0000 | 0.0472 | 0.2887 | 0.0003 | 0.0643 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7270 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8368 | 0.0001 | 0.0017 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7072 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0000 | 0.0345 | 0.0114 | 0.0003 | 0.0362 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0076 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0221 | 0.0001 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1380 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.3606 | 26.07 | 3050 | 2.2659 | 0.0073 | 0.0853 | 0.0503 | 0.0000 | 0.0451 | 0.3161 | 0.0003 | 0.0466 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7313 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8180 | 0.0000 | 0.0018 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6860 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0000 | 0.0330 | 0.0122 | 0.0003 | 0.0287 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0076 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0227 | 0.0000 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1333 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.5409 | 26.5 | 3100 | 2.2998 | 0.0074 | 0.0865 | 0.0538 | 0.0000 | 0.0502 | 0.2868 | 0.0002 | 0.0582 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7246 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8215 | 0.0000 | 0.0015 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7395 | 0.0 | 0.0002 | 0.0000 | 0.0 | 0.0 | nan | 0.0000 | 0.0364 | 0.0113 | 0.0002 | 0.0340 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0081 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0230 | 0.0000 | 0.0015 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1379 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan |
| 2.3258 | 26.92 | 3150 | 2.2954 | 0.0077 | 0.0860 | 0.0531 | 0.0 | 0.0498 | 0.2769 | 0.0003 | 0.0603 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7307 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8194 | 0.0001 | 0.0017 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7267 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0361 | 0.0110 | 0.0003 | 0.0353 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0074 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0225 | 0.0001 | 0.0017 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1391 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.3487 | 27.35 | 3200 | 2.2857 | 0.0076 | 0.0860 | 0.0532 | 0.0 | 0.0509 | 0.2855 | 0.0003 | 0.0558 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7280 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8264 | 0.0000 | 0.0016 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7172 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0368 | 0.0113 | 0.0003 | 0.0333 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0077 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0226 | 0.0000 | 0.0016 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1372 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.2953 | 27.78 | 3250 | 2.2727 | 0.0075 | 0.0861 | 0.0521 | 0.0 | 0.0496 | 0.2988 | 0.0002 | 0.0515 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7236 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8456 | 0.0000 | 0.0013 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6973 | 0.0 | 0.0001 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0359 | 0.0117 | 0.0002 | 0.0316 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0080 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0223 | 0.0000 | 0.0012 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1353 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan |
| 2.3487 | 28.21 | 3300 | 2.3029 | 0.0075 | 0.0855 | 0.0522 | 0.0 | 0.0500 | 0.2892 | 0.0002 | 0.0581 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7283 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8276 | 0.0000 | 0.0019 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6952 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0362 | 0.0114 | 0.0002 | 0.0344 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0078 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0228 | 0.0000 | 0.0018 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1340 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.3138 | 28.63 | 3350 | 2.2554 | 0.0074 | 0.0860 | 0.0504 | 0.0 | 0.0444 | 0.3070 | 0.0003 | 0.0495 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7290 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8402 | 0.0000 | 0.0012 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6956 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0325 | 0.0119 | 0.0003 | 0.0306 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0075 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0222 | 0.0000 | 0.0012 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1364 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.3157 | 29.06 | 3400 | 2.2679 | 0.0074 | 0.0860 | 0.0539 | 0.0 | 0.0529 | 0.2817 | 0.0001 | 0.0563 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7253 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8286 | 0.0000 | 0.0012 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7203 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0380 | 0.0112 | 0.0001 | 0.0340 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0081 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0230 | 0.0000 | 0.0012 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1355 | 0.0 | 0.0000 | 0.0000 | 0.0 | 0.0 | nan |
| 2.2678 | 29.49 | 3450 | 2.2538 | 0.0073 | 0.0867 | 0.0497 | 0.0 | 0.0414 | 0.3204 | 0.0003 | 0.0485 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7255 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8487 | 0.0001 | 0.0009 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7005 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0305 | 0.0123 | 0.0003 | 0.0296 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0078 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0221 | 0.0001 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1367 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.3993 | 29.91 | 3500 | 2.2745 | 0.0075 | 0.0865 | 0.0557 | 0.0 | 0.0555 | 0.2680 | 0.0002 | 0.0589 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7239 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8238 | 0.0000 | 0.0014 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7502 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0397 | 0.0108 | 0.0002 | 0.0352 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0082 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0231 | 0.0000 | 0.0013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1381 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.2576 | 30.34 | 3550 | 2.2471 | 0.0074 | 0.0851 | 0.0524 | 0.0 | 0.0520 | 0.2844 | 0.0002 | 0.0489 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7285 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8330 | 0.0000 | 0.0014 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6892 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0375 | 0.0113 | 0.0002 | 0.0311 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0077 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0224 | 0.0000 | 0.0013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1337 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.2889 | 30.77 | 3600 | 2.2595 | 0.0075 | 0.0860 | 0.0525 | 0.0 | 0.0496 | 0.2902 | 0.0002 | 0.0503 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7281 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8338 | 0.0000 | 0.0011 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7116 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0359 | 0.0114 | 0.0002 | 0.0315 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0077 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0225 | 0.0000 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1366 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.2409 | 31.2 | 3650 | 2.2224 | 0.0072 | 0.0860 | 0.0505 | 0.0 | 0.0449 | 0.3076 | 0.0002 | 0.0437 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7269 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8424 | 0.0000 | 0.0010 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6995 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0328 | 0.0119 | 0.0002 | 0.0281 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0078 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0224 | 0.0000 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1350 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.4857 | 31.62 | 3700 | 2.2306 | 0.0075 | 0.0863 | 0.0525 | 0.0 | 0.0488 | 0.2940 | 0.0002 | 0.0471 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7261 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8368 | 0.0000 | 0.0012 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7218 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0354 | 0.0115 | 0.0002 | 0.0300 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0079 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0227 | 0.0000 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1371 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.5056 | 32.05 | 3750 | 2.2192 | 0.0075 | 0.0852 | 0.0540 | 0.0 | 0.0560 | 0.2740 | 0.0002 | 0.0424 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7282 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8345 | 0.0000 | 0.0012 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7061 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0400 | 0.0109 | 0.0002 | 0.0281 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0078 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0225 | 0.0000 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1356 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.4911 | 32.48 | 3800 | 2.2501 | 0.0074 | 0.0850 | 0.0518 | 0.0 | 0.0529 | 0.2857 | 0.0002 | 0.0483 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7249 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8575 | 0.0000 | 0.0010 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6644 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0380 | 0.0113 | 0.0002 | 0.0311 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0079 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0220 | 0.0000 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1316 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.2452 | 32.91 | 3850 | 2.2030 | 0.0071 | 0.0870 | 0.0509 | 0.0 | 0.0424 | 0.3135 | 0.0002 | 0.0466 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7250 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8420 | 0.0000 | 0.0009 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7275 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0311 | 0.0121 | 0.0002 | 0.0294 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0080 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0226 | 0.0000 | 0.0009 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1377 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | 0.0 |
| 2.3314 | 33.33 | 3900 | 2.2650 | 0.0075 | 0.0851 | 0.0527 | 0.0 | 0.0539 | 0.2782 | 0.0001 | 0.0562 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7247 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8481 | 0.0000 | 0.0008 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6774 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0386 | 0.0111 | 0.0001 | 0.0345 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0080 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0224 | 0.0000 | 0.0008 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1317 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.456 | 33.76 | 3950 | 2.2260 | 0.0075 | 0.0859 | 0.0526 | 0.0 | 0.0501 | 0.2881 | 0.0002 | 0.0473 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7281 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8343 | 0.0000 | 0.0013 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7135 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0362 | 0.0114 | 0.0002 | 0.0305 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0078 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0226 | 0.0000 | 0.0013 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1364 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.1795 | 34.19 | 4000 | 2.2383 | 0.0073 | 0.0854 | 0.0515 | 0.0 | 0.0501 | 0.2940 | 0.0002 | 0.0445 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7254 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8502 | 0.0000 | 0.0008 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6823 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0362 | 0.0115 | 0.0002 | 0.0293 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0079 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0222 | 0.0000 | 0.0008 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1333 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.2701 | 34.62 | 4050 | 2.1953 | 0.0073 | 0.0858 | 0.0516 | 0.0 | 0.0493 | 0.3067 | 0.0001 | 0.0430 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7213 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8499 | 0.0 | 0.0010 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6893 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | nan | 0.0 | 0.0355 | 0.0119 | 0.0001 | 0.0282 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0229 | 0.0 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1312 | 0.0 | 0.0 | 0.0000 | 0.0 | 0.0 | nan |
| 2.5063 | 35.04 | 4100 | 2.2152 | 0.0074 | 0.0856 | 0.0531 | 0.0 | 0.0525 | 0.2859 | 0.0002 | 0.0434 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7272 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8357 | 0.0000 | 0.0011 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7079 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0376 | 0.0113 | 0.0002 | 0.0286 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0079 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0227 | 0.0000 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1349 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.3872 | 35.47 | 4150 | 2.2011 | 0.0072 | 0.0846 | 0.0519 | 0.0 | 0.0524 | 0.2894 | 0.0001 | 0.0414 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7295 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8320 | 0.0 | 0.0011 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6776 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0375 | 0.0114 | 0.0001 | 0.0278 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0078 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0228 | 0.0 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1304 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.4513 | 35.9 | 4200 | 2.2279 | 0.0074 | 0.0851 | 0.0519 | 0.0 | 0.0516 | 0.2789 | 0.0002 | 0.0485 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7288 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8479 | 0.0000 | 0.0010 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6822 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0371 | 0.0111 | 0.0002 | 0.0315 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0076 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0219 | 0.0000 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1344 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.2309 | 36.32 | 4250 | 2.2145 | 0.0073 | 0.0852 | 0.0519 | 0.0 | 0.0514 | 0.2921 | 0.0002 | 0.0441 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7270 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8437 | 0.0000 | 0.0010 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6814 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0369 | 0.0115 | 0.0002 | 0.0292 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0079 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0224 | 0.0000 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1322 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.225 | 36.75 | 4300 | 2.2065 | 0.0073 | 0.0853 | 0.0533 | 0.0 | 0.0542 | 0.2830 | 0.0002 | 0.0383 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7264 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8384 | 0.0000 | 0.0012 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7026 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0387 | 0.0112 | 0.0002 | 0.0262 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0080 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0227 | 0.0000 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1337 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.1982 | 37.18 | 4350 | 2.2199 | 0.0074 | 0.0854 | 0.0531 | 0.0 | 0.0537 | 0.2812 | 0.0002 | 0.0431 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7279 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8403 | 0.0000 | 0.0010 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6995 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0384 | 0.0112 | 0.0002 | 0.0288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0078 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0224 | 0.0000 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1351 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.1423 | 37.61 | 4400 | 2.2118 | 0.0073 | 0.0850 | 0.0520 | 0.0 | 0.0521 | 0.2897 | 0.0002 | 0.0408 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7267 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8429 | 0.0000 | 0.0010 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6826 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0374 | 0.0114 | 0.0002 | 0.0276 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0080 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0226 | 0.0000 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1315 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.3004 | 38.03 | 4450 | 2.2279 | 0.0075 | 0.0866 | 0.0528 | 0.0 | 0.0497 | 0.2876 | 0.0002 | 0.0478 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7201 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8572 | 0.0 | 0.0008 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7209 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0359 | 0.0114 | 0.0002 | 0.0307 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0083 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0223 | 0.0 | 0.0007 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1366 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.3405 | 38.46 | 4500 | 2.2198 | 0.0074 | 0.0850 | 0.0519 | 0.0 | 0.0535 | 0.2824 | 0.0002 | 0.0513 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7249 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8620 | 0.0000 | 0.0011 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6592 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0383 | 0.0112 | 0.0002 | 0.0327 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0080 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0219 | 0.0000 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1312 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.4924 | 38.89 | 4550 | 2.2621 | 0.0076 | 0.0866 | 0.0536 | 0.0 | 0.0498 | 0.2829 | 0.0002 | 0.0517 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7265 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8296 | 0.0 | 0.0011 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7427 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0359 | 0.0112 | 0.0002 | 0.0325 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0079 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0230 | 0.0 | 0.0010 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1380 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.1744 | 39.32 | 4600 | 2.1896 | 0.0072 | 0.0852 | 0.0514 | 0.0 | 0.0507 | 0.2949 | 0.0002 | 0.0365 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7249 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8512 | 0.0000 | 0.0009 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.6806 | 0.0 | 0.0002 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0364 | 0.0116 | 0.0002 | 0.0253 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0081 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0224 | 0.0000 | 0.0008 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1315 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
| 2.4663 | 39.74 | 4650 | 2.1850 | 0.0074 | 0.0858 | 0.0532 | 0.0 | 0.0521 | 0.2880 | 0.0002 | 0.0414 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.7261 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.8347 | 0.0 | 0.0011 | 0.0 | nan | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.7158 | 0.0 | 0.0001 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0373 | 0.0114 | 0.0002 | 0.0279 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0081 | 0.0 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0229 | 0.0 | 0.0011 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.1347 | 0.0 | 0.0000 | 0.0 | 0.0 | 0.0 | nan |
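For readers who want to reproduce the per-category columns above, a minimal sketch of the standard semantic-segmentation metrics is shown below; the `evaluate` library and the 35-class label count are assumptions, since this card does not state how its metrics were computed.

```python
import numpy as np
import evaluate  # assumption: this card does not name its metric implementation

metric = evaluate.load("mean_iou")

# Tiny dummy prediction/label maps, purely for illustration.
preds = [np.array([[0, 1], [1, 2]])]
labels = [np.array([[0, 1], [2, 2]])]

results = metric.compute(
    predictions=preds,
    references=labels,
    num_labels=35,       # assumed class count for segments/sidewalk-semantic
    ignore_index=255,
    reduce_labels=False,
)
print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
```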
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
|
{"license": "other", "tags": ["vision", "image-segmentation", "generated_from_trainer"], "base_model": "nvidia/mit-b0", "model-index": [{"name": "segformer-b0-finetuned-segments-sidewalk-2", "results": []}]}
|
diegola123/segformer-b0-finetuned-segments-sidewalk-2
| null |
[
"transformers",
"tensorboard",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"license:other",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:03:41+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #segformer #vision #image-segmentation #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us
|
segformer-b0-finetuned-segments-sidewalk-2
==========================================
This model is a fine-tuned version of nvidia/mit-b0 on the segments/sidewalk-semantic dataset.
It achieves the following results on the evaluation set:
* Loss: 2.1850
* Mean Iou: 0.0074
* Mean Accuracy: 0.0858
* Overall Accuracy: 0.0532
* Accuracy Unlabeled: 0.0
* Accuracy Flat-road: 0.0521
* Accuracy Flat-sidewalk: 0.2880
* Accuracy Flat-crosswalk: 0.0002
* Accuracy Flat-cyclinglane: 0.0414
* Accuracy Flat-parkingdriveway: nan
* Accuracy Flat-railtrack: 0.0
* Accuracy Flat-curb: 0.0
* Accuracy Human-person: 0.0
* Accuracy Human-rider: 0.0
* Accuracy Vehicle-car: 0.7261
* Accuracy Vehicle-truck: 0.0
* Accuracy Vehicle-bus: 0.0
* Accuracy Vehicle-tramtrain: 0.0
* Accuracy Vehicle-motorcycle: 0.0
* Accuracy Vehicle-bicycle: nan
* Accuracy Vehicle-caravan: 0.0
* Accuracy Vehicle-cartrailer: 0.0
* Accuracy Construction-building: 0.8347
* Accuracy Construction-door: 0.0
* Accuracy Construction-wall: 0.0011
* Accuracy Construction-fenceguardrail: 0.0
* Accuracy Construction-bridge: nan
* Accuracy Construction-tunnel: 0.0
* Accuracy Construction-stairs: 0.0
* Accuracy Object-pole: 0.0
* Accuracy Object-trafficsign: 0.0
* Accuracy Object-trafficlight: 0.0
* Accuracy Nature-vegetation: 0.7158
* Accuracy Nature-terrain: 0.0
* Accuracy Sky: 0.0001
* Accuracy Void-ground: 0.0
* Accuracy Void-dynamic: 0.0
* Accuracy Void-static: 0.0
* Accuracy Void-unclear: nan
* Iou Unlabeled: 0.0
* Iou Flat-road: 0.0373
* Iou Flat-sidewalk: 0.0114
* Iou Flat-crosswalk: 0.0002
* Iou Flat-cyclinglane: 0.0279
* Iou Flat-parkingdriveway: 0.0
* Iou Flat-railtrack: 0.0
* Iou Flat-curb: 0.0
* Iou Human-person: 0.0
* Iou Human-rider: 0.0
* Iou Vehicle-car: 0.0081
* Iou Vehicle-truck: 0.0
* Iou Vehicle-bus: 0.0
* Iou Vehicle-tramtrain: 0.0
* Iou Vehicle-motorcycle: 0.0
* Iou Vehicle-bicycle: nan
* Iou Vehicle-caravan: 0.0
* Iou Vehicle-cartrailer: 0.0
* Iou Construction-building: 0.0229
* Iou Construction-door: 0.0
* Iou Construction-wall: 0.0011
* Iou Construction-fenceguardrail: 0.0
* Iou Construction-bridge: 0.0
* Iou Construction-tunnel: 0.0
* Iou Construction-stairs: 0.0
* Iou Object-pole: 0.0
* Iou Object-trafficsign: 0.0
* Iou Object-trafficlight: 0.0
* Iou Nature-vegetation: 0.1347
* Iou Nature-terrain: 0.0
* Iou Sky: 0.0000
* Iou Void-ground: 0.0
* Iou Void-dynamic: 0.0
* Iou Void-static: 0.0
* Iou Void-unclear: nan
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-06
* train\_batch\_size: 6
* eval\_batch\_size: 6
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 40
### Training results
### Framework versions
* Transformers 4.35.2
* Pytorch 2.1.0+cu121
* Datasets 2.18.0
* Tokenizers 0.15.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 40",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #segformer #vision #image-segmentation #generated_from_trainer #base_model-nvidia/mit-b0 #license-other #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-06\n* train\\_batch\\_size: 6\n* eval\\_batch\\_size: 6\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 40",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.35.2\n* Pytorch 2.1.0+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.0"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ShaderLLM-CodeLlama-13b-it
This model is a fine-tuned version of [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-13b-Instruct-hf) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7616
## Model description
More information needed
## Intended uses & limitations
More information needed
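No usage snippet is included yet; as a minimal sketch, inference with this instruction-tuned CodeLlama variant could look like the following (the `[INST]` prompt format is assumed from the CodeLlama-13b-Instruct base and is not confirmed by this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "seanmemery/ShaderLLM-CodeLlama-13b-it"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

# Prompt format assumed from the Llama-2/CodeLlama instruct convention.
prompt = "[INST] Write a GLSL fragment shader that draws a checkerboard. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```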
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0025
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- num_epochs: 2
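As a rough guide, these settings map onto `transformers.TrainingArguments` as sketched below; `output_dir` is assumed, and the TRL/unsloth wiring that this repo's tags suggest is omitted.

```python
from transformers import TrainingArguments

# A sketch mirroring the hyperparameters listed above; output_dir is assumed.
args = TrainingArguments(
    output_dir="ShaderLLM-CodeLlama-13b-it",
    learning_rate=2.5e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    seed=42,
    lr_scheduler_type="polynomial",
    num_train_epochs=2,
)
```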
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5616 | 0.24 | 1000 | 1.2367 |
| 1.0239 | 0.49 | 2000 | 1.0618 |
| 0.9445 | 0.73 | 3000 | 0.9659 |
| 1.0534 | 0.98 | 4000 | 0.8703 |
| 0.653 | 1.22 | 5000 | 0.8155 |
| 0.6389 | 1.46 | 6000 | 0.7856 |
| 0.6987 | 1.71 | 7000 | 0.7670 |
| 0.6333 | 1.95 | 8000 | 0.7616 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "llama2", "tags": ["trl", "sft", "unsloth", "generated_from_trainer", "unsloth"], "datasets": ["generator"], "base_model": "codellama/CodeLlama-13b-Instruct-hf", "model-index": [{"name": "ShaderLLM-CodeLlama-13b-it", "results": []}]}
|
seanmemery/ShaderLLM-CodeLlama-13b-it
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"unsloth",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:codellama/CodeLlama-13b-Instruct-hf",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T11:06:23+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #trl #sft #unsloth #generated_from_trainer #conversational #dataset-generator #base_model-codellama/CodeLlama-13b-Instruct-hf #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
ShaderLLM-CodeLlama-13b-it
==========================
This model is a fine-tuned version of codellama/CodeLlama-13b-Instruct-hf on the generator dataset.
It achieves the following results on the evaluation set:
* Loss: 0.7616
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0025
* train\_batch\_size: 4
* eval\_batch\_size: 1
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: polynomial
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.39.3
* Pytorch 2.2.2
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0025\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: polynomial\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #trl #sft #unsloth #generated_from_trainer #conversational #dataset-generator #base_model-codellama/CodeLlama-13b-Instruct-hf #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0025\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 1\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: polynomial\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.3\n* Pytorch 2.2.2\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Vezora/Narwhal-7b-v3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
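For a concrete (if unofficial) starting point, the files in this repo can also be loaded with `llama-cpp-python`; the quant filename below is taken from the table that follows, and the library choice is an illustration, not a recommendation from this card.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # llama-cpp-python; illustrative choice only

path = hf_hub_download(
    repo_id="mradermacher/Narwhal-7b-v3-GGUF",
    filename="Narwhal-7b-v3.Q4_K_M.gguf",  # one of the quants listed below
)
llm = Llama(model_path=path)
out = llm("Q: What is a narwhal?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```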
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Narwhal-7b-v3-GGUF/resolve/main/Narwhal-7b-v3.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "base_model": "Vezora/Narwhal-7b-v3", "quantized_by": "mradermacher"}
|
mradermacher/Narwhal-7b-v3-GGUF
| null |
[
"transformers",
"gguf",
"en",
"base_model:Vezora/Narwhal-7b-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:06:57+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #base_model-Vezora/Narwhal-7b-v3 #license-apache-2.0 #endpoints_compatible #region-us
|
About
-----
static quants of URL
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #base_model-Vezora/Narwhal-7b-v3 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **CartPole-v1**
This is a trained model of a **PPO** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it. The filename below is
# assumed from the usual huggingface_sb3 naming convention, not this card.
checkpoint = load_from_hub("SubhasishSaha/ppo-cart-pole-sb3", "ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)
```
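Once loaded, a quick sanity check is to roll the policy out with `evaluate_policy` from `stable_baselines3.common.evaluation`; scores should land in the neighborhood of the 479.40 +/- 29.14 mean reward reported in this card's metadata.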
|
{"library_name": "stable-baselines3", "tags": ["CartPole-v1", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "CartPole-v1", "type": "CartPole-v1"}, "metrics": [{"type": "mean_reward", "value": "479.40 +/- 29.14", "name": "mean_reward", "verified": false}]}]}]}
|
SubhasishSaha/ppo-cart-pole-sb3
| null |
[
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-14T11:08:28+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #CartPole-v1 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing CartPole-v1
This is a trained model of a PPO agent playing CartPole-v1
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# PPO Agent playing CartPole-v1\nThis is a trained model of a PPO agent playing CartPole-v1\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #CartPole-v1 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing CartPole-v1\nThis is a trained model of a PPO agent playing CartPole-v1\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# amazon_kindle_sentiment_analysis_definitivo
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9897
- Accuracy: 0.585
## Model description
More information needed
## Intended uses & limitations
More information needed
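As a placeholder illustration (Kindle-review sentiment scoring is inferred from the model name; the label scheme is whatever the fine-tune produced and is not documented here):

```python
from transformers import pipeline

# Repository id taken from this card's metadata.
classifier = pipeline(
    "text-classification",
    model="denise227/amazon_kindle_sentiment_analysis_definitivo",
)
print(classifier("I could not put this Kindle book down."))
```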
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6088 | 0.01 | 10 | 1.5857 | 0.265 |
| 1.6469 | 0.02 | 20 | 1.5750 | 0.2617 |
| 1.5407 | 0.03 | 30 | 1.5206 | 0.295 |
| 1.5096 | 0.03 | 40 | 1.5134 | 0.3792 |
| 1.5668 | 0.04 | 50 | 1.4435 | 0.33 |
| 1.386 | 0.05 | 60 | 1.3578 | 0.32 |
| 1.3041 | 0.06 | 70 | 1.2950 | 0.4167 |
| 1.2491 | 0.07 | 80 | 1.2376 | 0.4242 |
| 1.4186 | 0.07 | 90 | 1.3518 | 0.4175 |
| 1.3238 | 0.08 | 100 | 1.1709 | 0.4675 |
| 1.1596 | 0.09 | 110 | 1.1853 | 0.4417 |
| 1.1351 | 0.1 | 120 | 1.3158 | 0.4083 |
| 1.1573 | 0.11 | 130 | 1.1438 | 0.475 |
| 1.1858 | 0.12 | 140 | 1.2280 | 0.45 |
| 1.268 | 0.12 | 150 | 1.3686 | 0.3767 |
| 1.3871 | 0.13 | 160 | 1.2159 | 0.4525 |
| 1.1129 | 0.14 | 170 | 1.1402 | 0.4783 |
| 1.1144 | 0.15 | 180 | 1.2366 | 0.4558 |
| 1.1953 | 0.16 | 190 | 1.1209 | 0.4717 |
| 1.2515 | 0.17 | 200 | 1.1857 | 0.4408 |
| 1.0826 | 0.17 | 210 | 1.1044 | 0.48 |
| 1.0192 | 0.18 | 220 | 1.0932 | 0.4925 |
| 1.2467 | 0.19 | 230 | 1.0608 | 0.5058 |
| 0.9914 | 0.2 | 240 | 1.1134 | 0.4942 |
| 1.1065 | 0.21 | 250 | 1.1115 | 0.4833 |
| 1.1161 | 0.22 | 260 | 1.2943 | 0.485 |
| 1.4564 | 0.23 | 270 | 1.3899 | 0.3892 |
| 1.4043 | 0.23 | 280 | 1.1529 | 0.4742 |
| 1.0993 | 0.24 | 290 | 1.3811 | 0.4167 |
| 1.1307 | 0.25 | 300 | 1.0985 | 0.4892 |
| 1.1536 | 0.26 | 310 | 1.0903 | 0.5133 |
| 1.0491 | 0.27 | 320 | 1.1709 | 0.4875 |
| 1.1946 | 0.28 | 330 | 1.1875 | 0.4725 |
| 1.1956 | 0.28 | 340 | 1.0579 | 0.5292 |
| 0.8626 | 0.29 | 350 | 1.2314 | 0.48 |
| 1.2908 | 0.3 | 360 | 1.0875 | 0.5225 |
| 1.1227 | 0.31 | 370 | 1.1000 | 0.4975 |
| 1.0407 | 0.32 | 380 | 1.1035 | 0.5267 |
| 1.2242 | 0.33 | 390 | 1.1243 | 0.4833 |
| 1.2052 | 0.33 | 400 | 1.0719 | 0.5067 |
| 1.1526 | 0.34 | 410 | 1.0351 | 0.5442 |
| 0.9881 | 0.35 | 420 | 1.0394 | 0.5333 |
| 1.0651 | 0.36 | 430 | 1.0422 | 0.5317 |
| 1.0571 | 0.37 | 440 | 1.0310 | 0.5408 |
| 1.22 | 0.38 | 450 | 1.0176 | 0.5358 |
| 0.9914 | 0.38 | 460 | 1.2306 | 0.4733 |
| 1.0956 | 0.39 | 470 | 1.0239 | 0.5358 |
| 0.9464 | 0.4 | 480 | 1.0895 | 0.51 |
| 1.0855 | 0.41 | 490 | 1.0398 | 0.5292 |
| 1.2345 | 0.42 | 500 | 1.1024 | 0.5133 |
| 1.1624 | 0.42 | 510 | 1.1720 | 0.4733 |
| 1.1251 | 0.43 | 520 | 1.1044 | 0.4858 |
| 1.0896 | 0.44 | 530 | 1.0415 | 0.5225 |
| 0.9643 | 0.45 | 540 | 1.0211 | 0.5383 |
| 1.1421 | 0.46 | 550 | 1.1593 | 0.5017 |
| 1.0463 | 0.47 | 560 | 1.0246 | 0.52 |
| 1.0508 | 0.47 | 570 | 1.0377 | 0.515 |
| 1.0507 | 0.48 | 580 | 1.0565 | 0.5408 |
| 0.8932 | 0.49 | 590 | 1.0147 | 0.5483 |
| 0.8834 | 0.5 | 600 | 1.0191 | 0.5458 |
| 1.0548 | 0.51 | 610 | 1.0668 | 0.5392 |
| 1.1106 | 0.52 | 620 | 1.0086 | 0.53 |
| 1.0587 | 0.53 | 630 | 1.0144 | 0.5483 |
| 0.9468 | 0.53 | 640 | 1.1663 | 0.5042 |
| 1.0948 | 0.54 | 650 | 1.0263 | 0.5458 |
| 1.2202 | 0.55 | 660 | 0.9932 | 0.5358 |
| 0.898 | 0.56 | 670 | 1.0217 | 0.52 |
| 1.2074 | 0.57 | 680 | 1.0416 | 0.5333 |
| 1.1777 | 0.57 | 690 | 0.9986 | 0.5483 |
| 1.0448 | 0.58 | 700 | 0.9836 | 0.5558 |
| 0.9387 | 0.59 | 710 | 1.0127 | 0.5392 |
| 1.0905 | 0.6 | 720 | 1.0633 | 0.5183 |
| 0.9262 | 0.61 | 730 | 1.0046 | 0.5375 |
| 1.0691 | 0.62 | 740 | 1.0005 | 0.5458 |
| 0.8828 | 0.62 | 750 | 1.0031 | 0.55 |
| 1.1497 | 0.63 | 760 | 1.0785 | 0.4925 |
| 0.9907 | 0.64 | 770 | 1.0094 | 0.54 |
| 0.9741 | 0.65 | 780 | 0.9794 | 0.555 |
| 0.8731 | 0.66 | 790 | 1.0327 | 0.5217 |
| 1.1001 | 0.67 | 800 | 1.0335 | 0.5325 |
| 1.0796 | 0.68 | 810 | 1.0004 | 0.5492 |
| 1.1743 | 0.68 | 820 | 1.0022 | 0.5425 |
| 1.0616 | 0.69 | 830 | 1.0307 | 0.5375 |
| 0.9953 | 0.7 | 840 | 0.9799 | 0.555 |
| 1.0607 | 0.71 | 850 | 1.1107 | 0.5108 |
| 1.2028 | 0.72 | 860 | 0.9770 | 0.55 |
| 0.9749 | 0.72 | 870 | 0.9927 | 0.5483 |
| 0.9752 | 0.73 | 880 | 1.0249 | 0.5342 |
| 0.9905 | 0.74 | 890 | 0.9946 | 0.5408 |
| 0.9116 | 0.75 | 900 | 1.0538 | 0.5433 |
| 1.1579 | 0.76 | 910 | 0.9914 | 0.555 |
| 1.0955 | 0.77 | 920 | 1.0265 | 0.5383 |
| 1.1222 | 0.78 | 930 | 1.0443 | 0.5175 |
| 0.9873 | 0.78 | 940 | 0.9877 | 0.5408 |
| 0.8737 | 0.79 | 950 | 1.0376 | 0.5442 |
| 1.0869 | 0.8 | 960 | 0.9777 | 0.555 |
| 1.0751 | 0.81 | 970 | 0.9655 | 0.5675 |
| 1.092 | 0.82 | 980 | 0.9720 | 0.5533 |
| 1.0741 | 0.82 | 990 | 0.9939 | 0.5325 |
| 1.0502 | 0.83 | 1000 | 0.9864 | 0.5517 |
| 1.0623 | 0.84 | 1010 | 0.9637 | 0.5567 |
| 1.0641 | 0.85 | 1020 | 0.9590 | 0.565 |
| 0.9818 | 0.86 | 1030 | 1.0268 | 0.5317 |
| 1.01 | 0.87 | 1040 | 0.9562 | 0.5517 |
| 0.9202 | 0.88 | 1050 | 0.9766 | 0.5458 |
| 0.9179 | 0.88 | 1060 | 0.9771 | 0.55 |
| 1.0009 | 0.89 | 1070 | 1.0164 | 0.535 |
| 0.9891 | 0.9 | 1080 | 0.9699 | 0.5542 |
| 0.9137 | 0.91 | 1090 | 1.0187 | 0.5325 |
| 0.9941 | 0.92 | 1100 | 0.9797 | 0.5592 |
| 0.9203 | 0.93 | 1110 | 1.0172 | 0.5292 |
| 0.8416 | 0.93 | 1120 | 1.0945 | 0.505 |
| 1.0899 | 0.94 | 1130 | 0.9963 | 0.55 |
| 1.0149 | 0.95 | 1140 | 0.9716 | 0.5592 |
| 0.9339 | 0.96 | 1150 | 0.9762 | 0.5492 |
| 1.0562 | 0.97 | 1160 | 1.0362 | 0.5258 |
| 1.0929 | 0.97 | 1170 | 0.9954 | 0.5433 |
| 1.0686 | 0.98 | 1180 | 1.0128 | 0.5342 |
| 1.1207 | 0.99 | 1190 | 0.9771 | 0.5525 |
| 0.9934 | 1.0 | 1200 | 0.9731 | 0.5575 |
| 0.8436 | 1.01 | 1210 | 0.9501 | 0.5558 |
| 0.7829 | 1.02 | 1220 | 0.9517 | 0.5708 |
| 0.7667 | 1.02 | 1230 | 0.9789 | 0.565 |
| 0.8093 | 1.03 | 1240 | 1.0047 | 0.5683 |
| 0.9297 | 1.04 | 1250 | 0.9831 | 0.5642 |
| 0.7154 | 1.05 | 1260 | 1.0401 | 0.5425 |
| 0.78 | 1.06 | 1270 | 0.9859 | 0.5683 |
| 0.8144 | 1.07 | 1280 | 0.9833 | 0.565 |
| 0.9511 | 1.07 | 1290 | 0.9870 | 0.5675 |
| 0.781 | 1.08 | 1300 | 0.9851 | 0.5633 |
| 0.8336 | 1.09 | 1310 | 0.9990 | 0.5625 |
| 0.9651 | 1.1 | 1320 | 1.0068 | 0.5542 |
| 0.7268 | 1.11 | 1330 | 0.9673 | 0.5742 |
| 0.7733 | 1.12 | 1340 | 0.9806 | 0.5692 |
| 0.7022 | 1.12 | 1350 | 1.0552 | 0.5508 |
| 0.8362 | 1.13 | 1360 | 0.9981 | 0.5683 |
| 0.9729 | 1.14 | 1370 | 1.0001 | 0.5683 |
| 0.7756 | 1.15 | 1380 | 0.9706 | 0.5625 |
| 0.7695 | 1.16 | 1390 | 1.0897 | 0.5392 |
| 0.7771 | 1.17 | 1400 | 1.0611 | 0.5483 |
| 0.6836 | 1.18 | 1410 | 1.0292 | 0.5575 |
| 0.8588 | 1.18 | 1420 | 0.9883 | 0.5767 |
| 0.7796 | 1.19 | 1430 | 1.0347 | 0.5658 |
| 0.8175 | 1.2 | 1440 | 1.0069 | 0.5717 |
| 0.6805 | 1.21 | 1450 | 1.0415 | 0.5525 |
| 0.7783 | 1.22 | 1460 | 1.0041 | 0.5708 |
| 1.046 | 1.23 | 1470 | 1.0039 | 0.5592 |
| 0.8762 | 1.23 | 1480 | 0.9609 | 0.5667 |
| 0.8282 | 1.24 | 1490 | 0.9625 | 0.5567 |
| 0.7038 | 1.25 | 1500 | 0.9559 | 0.5675 |
| 0.6776 | 1.26 | 1510 | 0.9826 | 0.5625 |
| 0.6715 | 1.27 | 1520 | 1.0019 | 0.5625 |
| 0.6957 | 1.27 | 1530 | 1.0005 | 0.5667 |
| 0.8419 | 1.28 | 1540 | 0.9876 | 0.575 |
| 0.7598 | 1.29 | 1550 | 1.0067 | 0.57 |
| 0.8714 | 1.3 | 1560 | 1.0743 | 0.55 |
| 0.864 | 1.31 | 1570 | 1.0003 | 0.5767 |
| 0.7178 | 1.32 | 1580 | 1.0116 | 0.5642 |
| 0.7912 | 1.32 | 1590 | 1.0323 | 0.5642 |
| 0.7834 | 1.33 | 1600 | 1.0123 | 0.5675 |
| 0.6978 | 1.34 | 1610 | 1.0530 | 0.55 |
| 0.7452 | 1.35 | 1620 | 1.0123 | 0.5658 |
| 0.8377 | 1.36 | 1630 | 1.0238 | 0.5608 |
| 0.7119 | 1.37 | 1640 | 1.0407 | 0.5642 |
| 0.7891 | 1.38 | 1650 | 1.0125 | 0.5692 |
| 0.7185 | 1.38 | 1660 | 1.0460 | 0.5483 |
| 0.7011 | 1.39 | 1670 | 1.0203 | 0.5658 |
| 0.8356 | 1.4 | 1680 | 1.0003 | 0.5667 |
| 0.6473 | 1.41 | 1690 | 0.9958 | 0.5742 |
| 0.6722 | 1.42 | 1700 | 0.9979 | 0.5817 |
| 0.7462 | 1.43 | 1710 | 0.9990 | 0.5817 |
| 0.6933 | 1.43 | 1720 | 1.0167 | 0.5758 |
| 0.6566 | 1.44 | 1730 | 1.0205 | 0.5825 |
| 0.7495 | 1.45 | 1740 | 1.0854 | 0.5483 |
| 0.9585 | 1.46 | 1750 | 1.0658 | 0.5567 |
| 0.8849 | 1.47 | 1760 | 1.0129 | 0.5708 |
| 0.9289 | 1.48 | 1770 | 0.9918 | 0.5942 |
| 0.751 | 1.48 | 1780 | 0.9849 | 0.5875 |
| 0.9082 | 1.49 | 1790 | 0.9887 | 0.5692 |
| 0.8307 | 1.5 | 1800 | 0.9978 | 0.5758 |
| 0.7014 | 1.51 | 1810 | 1.0261 | 0.5567 |
| 0.6632 | 1.52 | 1820 | 1.0294 | 0.5567 |
| 0.6885 | 1.52 | 1830 | 1.0054 | 0.5683 |
| 0.8374 | 1.53 | 1840 | 0.9983 | 0.5717 |
| 0.73 | 1.54 | 1850 | 0.9974 | 0.5792 |
| 0.7691 | 1.55 | 1860 | 0.9933 | 0.5775 |
| 0.795 | 1.56 | 1870 | 0.9918 | 0.5742 |
| 0.8298 | 1.57 | 1880 | 0.9970 | 0.5733 |
| 0.7621 | 1.57 | 1890 | 0.9981 | 0.5708 |
| 0.6753 | 1.58 | 1900 | 1.0033 | 0.5733 |
| 0.5386 | 1.59 | 1910 | 1.0098 | 0.5758 |
| 1.1066 | 1.6 | 1920 | 0.9923 | 0.5842 |
| 0.9523 | 1.61 | 1930 | 0.9987 | 0.5692 |
| 0.7225 | 1.62 | 1940 | 0.9958 | 0.5675 |
| 0.7592 | 1.62 | 1950 | 0.9800 | 0.58 |
| 0.7368 | 1.63 | 1960 | 1.0065 | 0.5658 |
| 0.7683 | 1.64 | 1970 | 0.9865 | 0.5708 |
| 0.5852 | 1.65 | 1980 | 0.9991 | 0.5675 |
| 0.7919 | 1.66 | 1990 | 1.0034 | 0.5708 |
| 0.7784 | 1.67 | 2000 | 0.9961 | 0.5717 |
| 0.8155 | 1.68 | 2010 | 0.9812 | 0.575 |
| 0.6281 | 1.68 | 2020 | 0.9803 | 0.5825 |
| 0.6084 | 1.69 | 2030 | 0.9802 | 0.5733 |
| 0.6207 | 1.7 | 2040 | 0.9843 | 0.5767 |
| 0.8847 | 1.71 | 2050 | 0.9871 | 0.5817 |
| 0.7049 | 1.72 | 2060 | 0.9897 | 0.5783 |
| 0.7144 | 1.73 | 2070 | 0.9914 | 0.5808 |
| 0.5971 | 1.73 | 2080 | 0.9915 | 0.5883 |
| 0.7566 | 1.74 | 2090 | 0.9888 | 0.5833 |
| 0.8263 | 1.75 | 2100 | 1.0017 | 0.5775 |
| 0.6402 | 1.76 | 2110 | 0.9872 | 0.5833 |
| 0.9838 | 1.77 | 2120 | 0.9852 | 0.5833 |
| 0.5518 | 1.77 | 2130 | 0.9803 | 0.585 |
| 0.737 | 1.78 | 2140 | 0.9892 | 0.5883 |
| 0.8021 | 1.79 | 2150 | 0.9917 | 0.585 |
| 0.6804 | 1.8 | 2160 | 0.9928 | 0.5775 |
| 0.6661 | 1.81 | 2170 | 0.9921 | 0.5808 |
| 0.6192 | 1.82 | 2180 | 0.9941 | 0.5833 |
| 0.7101 | 1.82 | 2190 | 0.9980 | 0.5858 |
| 0.7373 | 1.83 | 2200 | 1.0018 | 0.5825 |
| 0.845 | 1.84 | 2210 | 1.0030 | 0.5808 |
| 0.6556 | 1.85 | 2220 | 1.0077 | 0.5758 |
| 0.7979 | 1.86 | 2230 | 1.0115 | 0.5708 |
| 0.5802 | 1.87 | 2240 | 1.0065 | 0.5767 |
| 0.6794 | 1.88 | 2250 | 0.9945 | 0.5842 |
| 0.8538 | 1.88 | 2260 | 0.9901 | 0.5817 |
| 0.884 | 1.89 | 2270 | 0.9877 | 0.58 |
| 0.8306 | 1.9 | 2280 | 0.9850 | 0.5825 |
| 0.7196 | 1.91 | 2290 | 0.9846 | 0.5775 |
| 0.6548 | 1.92 | 2300 | 0.9850 | 0.5825 |
| 0.7692 | 1.93 | 2310 | 0.9863 | 0.5833 |
| 0.6386 | 1.93 | 2320 | 0.9880 | 0.5842 |
| 0.9404 | 1.94 | 2330 | 0.9919 | 0.5842 |
| 0.6133 | 1.95 | 2340 | 0.9920 | 0.5825 |
| 0.7229 | 1.96 | 2350 | 0.9898 | 0.5825 |
| 0.6681 | 1.97 | 2360 | 0.9887 | 0.585 |
| 0.7672 | 1.98 | 2370 | 0.9884 | 0.585 |
| 0.6217 | 1.98 | 2380 | 0.9893 | 0.5858 |
| 0.7101 | 1.99 | 2390 | 0.9897 | 0.585 |
| 0.6067 | 2.0 | 2400 | 0.9897 | 0.585 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "bert-base-uncased", "model-index": [{"name": "amazon_kindle_sentiment_analysis_definitivo", "results": []}]}
|
denise227/amazon_kindle_sentiment_analysis_definitivo
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:08:43+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
amazon\_kindle\_sentiment\_analysis\_definitivo
===============================================
This model is a fine-tuned version of bert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9897
* Accuracy: 0.585
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 2
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #text-classification #generated_from_trainer #base_model-bert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 2",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
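Until the author fills this in, a minimal loading sketch (assumptions: standard causal-LM loading; the repo's `custom_code` tag suggests `trust_remote_code=True` is required):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "OsakanaTeishoku/mixtral_4x300m_dummy"
# trust_remote_code is assumed because the repo is tagged "custom_code".
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello,", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```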
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
OsakanaTeishoku/mixtral_4x300m_dummy
| null |
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T11:12:02+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #mixtral #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #mixtral #text-generation #custom_code #arxiv-1910.09700 #autotrain_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
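Pending details from the author, a minimal sketch using the high-level `pipeline` API (usage is an assumption based on the repo's text-generation tag):

```python
from transformers import pipeline

# Assumed usage; the card does not document the intended prompt format.
generator = pipeline("text-generation", model="unrented5443/7gqog17")
print(generator("Hello, world!", max_new_tokens=20)[0]["generated_text"])
```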
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
unrented5443/7gqog17
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:16:39+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/ibivibiv/aegolius-acadicus-34b-v3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
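Any of the files from the table below can also be fetched programmatically; a minimal sketch (pick whichever quant fits your hardware):

```python
from huggingface_hub import hf_hub_download

# Q4_K_M is the "fast, recommended" entry in the table below.
path = hf_hub_download(
    repo_id="mradermacher/aegolius-acadicus-34b-v3-i1-GGUF",
    filename="aegolius-acadicus-34b-v3.i1-Q4_K_M.gguf",
)
print(path)  # local path to pass to llama.cpp or llama-cpp-python
```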
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-IQ1_M.gguf) | i1-IQ1_M | 8.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-IQ2_S.gguf) | i1-IQ2_S | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-IQ2_M.gguf) | i1-IQ2_M | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-Q2_K.gguf) | i1-Q2_K | 13.0 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 13.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 14.6 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 15.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-IQ3_S.gguf) | i1-IQ3_S | 15.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-IQ3_M.gguf) | i1-IQ3_M | 15.7 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 17.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 18.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 19.0 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-Q4_0.gguf) | i1-Q4_0 | 20.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 20.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 21.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 24.5 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-34b-v3-i1-GGUF/resolve/main/aegolius-acadicus-34b-v3.i1-Q6_K.gguf) | i1-Q6_K | 29.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe"], "base_model": "ibivibiv/aegolius-acadicus-34b-v3", "quantized_by": "mradermacher"}
|
mradermacher/aegolius-acadicus-34b-v3-i1-GGUF
| null |
[
"transformers",
"gguf",
"moe",
"en",
"base_model:ibivibiv/aegolius-acadicus-34b-v3",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:26:07+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #moe #en #base_model-ibivibiv/aegolius-acadicus-34b-v3 #license-apache-2.0 #endpoints_compatible #region-us
|
About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #moe #en #base_model-ibivibiv/aegolius-acadicus-34b-v3 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
reinforcement-learning
|
stable-baselines3
|
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check the repo's file list for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Hypothetical filename; check the repository for the actual .zip name.
checkpoint = load_from_hub("Jones189/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
{"library_name": "stable-baselines3", "tags": ["LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "stable-baselines3"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "220.11 +/- 60.28", "name": "mean_reward", "verified": false}]}]}]}
|
Jones189/ppo-LunarLander-v2
| null |
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | null |
2024-04-14T11:26:29+00:00
|
[] |
[] |
TAGS
#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us
|
# PPO Agent playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2
using the stable-baselines3 library.
## Usage (with Stable-baselines3)
TODO: Add your code
|
[
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
[
"TAGS\n#stable-baselines3 #LunarLander-v2 #deep-reinforcement-learning #reinforcement-learning #model-index #region-us \n",
"# PPO Agent playing LunarLander-v2\nThis is a trained model of a PPO agent playing LunarLander-v2\nusing the stable-baselines3 library.",
"## Usage (with Stable-baselines3)\nTODO: Add your code"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** sbawa
- **License:** apache-2.0
- **Finetuned from model :** TinyLlama/TinyLlama-1.1B-Chat-v1.0
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
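Since this repo ships GGUF weights, here is a minimal download sketch (the filename is hypothetical — check the repository's file list for the real one):

```python
from huggingface_hub import hf_hub_download

# Hypothetical filename; check the repo's file list for the actual GGUF name.
path = hf_hub_download("sbawa/elysa-beta-gguf", "elysa-beta.Q4_K_M.gguf")
```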
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "gguf"], "base_model": "TinyLlama/TinyLlama-1.1B-Chat-v1.0"}
|
sbawa/elysa-beta-gguf
| null |
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:27:14+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: sbawa
- License: apache-2.0
- Finetuned from model : TinyLlama/TinyLlama-1.1B-Chat-v1.0
This llama model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL" width="200"/>
|
[
"# Uploaded model\n\n- Developed by: sbawa\n- License: apache-2.0\n- Finetuned from model : TinyLlama/TinyLlama-1.1B-Chat-v1.0\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #gguf #llama #text-generation-inference #unsloth #en #base_model-TinyLlama/TinyLlama-1.1B-Chat-v1.0 #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: sbawa\n- License: apache-2.0\n- Finetuned from model : TinyLlama/TinyLlama-1.1B-Chat-v1.0\n\nThis llama model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
text-generation
|
transformers
|
# Uploaded model
- **Developed by:** LeroyDyer
- **License:** apache-2.0
- **Finetuned from model :** LeroyDyer/Mixtral_AI_CyberTron_Ultra
### OK, It's a Great Model!
Heavily math-trained, fit on many textbooks and lessons, and highly tuned on coding datasets!
This model has absorbed all of its previous generations as well as all the high performers and specialist (Mistral) models. It has absorbed many foreign-language models and still remains an English model!
Very impressive responses, short and long: it was trained on some binary datasets to return a direct answer, on others to respond step by step, and on others to interact with clients on various tasks, such as product-design and system-design discussion.
Financial information and other financial tasks have also been highly tuned. In fact, when returning to previously aligned datasets, the model stayed in line and was still able to achieve high tuning!
Hence a process of merging with a specific topic or role and then training for that role and topic on themed data; previous iterations were heavily tuned for medical, law, or role play, since the concern was that integrating the model into a single entity might even corrupt them, so the decision was taken to separate concerns.
This enabled strategic merging and tuning!
Concepts: chain of thought, function calling, and self-RAG! Thoughts and emotive responses have been enhanced where possible with the data given; even sexy books have been tuned into the model,
but I also think American genre books (sci-fi, fantasy, romance novels) are required for the great role play some expect. :)
I have recently seen a strategy in which prompts can be embedded into the adapter to trigger specific roles.
I have tried to replace generic "you are a helpful AI" prompting with character themes instead, such as "you are a cyber hacker by day and a businessman by night", i.e. giving the model various internal personas!
After some training I noticed it was also talking to itself (rehearsing), but the tokens for thought were missing, so it looked strange until I noticed the bug:
after removing the thought tokens, they were displayed in the output, because the tokenizer had been masking them!
Still a great model: given a task-based dataset it converges super quickly, hence my enjoyment of the model, as training it is super quick!
Now when I load up datasets there are generally only a few bad steps before the loss begins to drop, holding steady around 0.6 while the unseen new dataset loads, hence not needing many epochs to adjust the weights to the new information!
I'm not sure whether LoRAs actually work when you save them, but I do save some and use them to jump-start models that did not receive that fine-tuning; they can be merged and aligned (they are probably good!).
### MOTTO FOR MODEL!
Models are the same as LoRAs: take them lightly, like tablets of knowledge!
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "code", "medical ", "farmer", "doctor", "Mega-Series", "Cyber-Series", "Role-Play", "Self-Rag", "ThinkingBot"], "datasets": ["gretelai/synthetic_text_to_sql", "HuggingFaceTB/cosmopedia", "teknium/OpenHermes-2.5", "Open-Orca/SlimOrca", "Open-Orca/OpenOrca", "cognitivecomputations/dolphin-coder", "databricks/databricks-dolly-15k", "yahma/alpaca-cleaned", "uonlp/CulturaX", "mwitiderrick/SwahiliPlatypus", "swahili", "Rogendo/English-Swahili-Sentence-Pairs", "ise-uiuc/Magicoder-Evol-Instruct-110K", "meta-math/MetaMathQA"], "metrics": ["accuracy", "bertscore", "bleu", "brier_score", "cer", "character", "charcut_mt", "chrf", "code_eval"], "base_model": "LeroyDyer/Mixtral_AI_CyberTron_Ultra"}
|
hflog/LeroyDyer-Mixtral_AI_CyberTron_Ultra
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"code",
"medical ",
"farmer",
"doctor",
"Mega-Series",
"Cyber-Series",
"Role-Play",
"Self-Rag",
"ThinkingBot",
"conversational",
"en",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:teknium/OpenHermes-2.5",
"dataset:Open-Orca/SlimOrca",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:databricks/databricks-dolly-15k",
"dataset:yahma/alpaca-cleaned",
"dataset:uonlp/CulturaX",
"dataset:mwitiderrick/SwahiliPlatypus",
"dataset:swahili",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:meta-math/MetaMathQA",
"base_model:LeroyDyer/Mixtral_AI_CyberTron_Ultra",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:32:46+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #conversational #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: LeroyDyer
- License: apache-2.0
- Finetuned from model : LeroyDyer/Mixtral_AI_CyberTron_Ultra
### OK, It's a Great Model!
Heavily math-trained, fit on many textbooks and lessons, and highly tuned on coding datasets!
This model has absorbed all of its previous generations as well as all the high performers and specialist (Mistral) models. It has absorbed many foreign-language models and still remains an English model!
Very impressive responses, short and long: it was trained on some binary datasets to return a direct answer, on others to respond step by step, and on others to interact with clients on various tasks, such as product-design and system-design discussion.
Financial information and other financial tasks have also been highly tuned. In fact, when returning to previously aligned datasets, the model stayed in line and was still able to achieve high tuning!
Hence a process of merging with a specific topic or role and then training for that role and topic on themed data; previous iterations were heavily tuned for medical, law, or role play, since the concern was that integrating the model into a single entity might even corrupt them, so the decision was taken to separate concerns.
This enabled strategic merging and tuning!
Concepts: chain of thought, function calling, and self-RAG! Thoughts and emotive responses have been enhanced where possible with the data given; even sexy books have been tuned into the model,
but I also think American genre books (sci-fi, fantasy, romance novels) are required for the great role play some expect. :)
I have recently seen a strategy in which prompts can be embedded into the adapter to trigger specific roles.
I have tried to replace generic "you are a helpful AI" prompting with character themes instead, such as "you are a cyber hacker by day and a businessman by night", i.e. giving the model various internal personas!
After some training I noticed it was also talking to itself (rehearsing), but the tokens for thought were missing, so it looked strange until I noticed the bug:
after removing the thought tokens, they were displayed in the output, because the tokenizer had been masking them!
Still a great model: given a task-based dataset it converges super quickly, hence my enjoyment of the model, as training it is super quick!
Now when I load up datasets there are generally only a few bad steps before the loss begins to drop, holding steady around 0.6 while the unseen new dataset loads, hence not needing many epochs to adjust the weights to the new information!
I'm not sure whether LoRAs actually work when you save them, but I do save some and use them to jump-start models that did not receive that fine-tuning; they can be merged and aligned (they are probably good!).
### MOTTO FOR MODEL!
Models are the same as LoRAs: take them lightly, like tablets of knowledge!
This mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.
<img src="URL" width="200"/>
|
[
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_CyberTron_Ultra",
"### Ok Its a Great MODEL !\n\n\nHighly Math Trained As well as many TextBooks and Lessons Highly fit datasets as well as Coding Datasets highly tuned! \n\nThis model has absorbed all its previous generations as well as ALL high performers and Specialist models (mistral) It has absorb many foriegn languge models and still stays as an english model !\n\nVery impressive responses Short and long as also it was trained on some binary datasets to return a direct answer! and others to perform step by step response as wel as other to perform interactive response with clients for vairous tasks, such as product design and system design discussion:\n\nFinacial information and other finacial tasks have been highly tunes also : Infact when returning to previous aligned datasets they stayed in line and was sdtill able to achieve High tuning!\nHence a process of merging with a specific topic or role and then training for the role and topic on themed data, hence previous itterations heavily tuned for medical or law or role play as the conception was that intergating the model into a single enity may even corrput them , so the decision to seperate concerns was taken :\nThis enabled for ssstrategic merging and tuning !\n\nConcepts : chain of thought and functin calling Self rag ! Thoughts , emotive responses have been enhance where possibel with the data given . even sexy books have been highly tuned into the model : \nbut also i think american genera books (sci fi, fantasy, romance novels are required) for great role play which some expect: )\nI have recently seen a strategy in which prompts can be embedded into the adapter to Trigger Specific Roles : \nI hae tried to remove such prompting as you are a helpful ai to a character theme instead such as you are a cyber hacker by day and business man by night ! ie to give the model various internal personas !\nafter some training i noticed it was also talking to itself !! (rehersing) but the tokens for thought were missing so it lookeed strange until i noticed the bug; \nAfter removing the thought tokens they were displayed in the output as the tokenizer was masking them !\n\nBut Still a Great Model , Given a Task based data set it Coverges Super quickly hence my enjoyment of the model as training of it is super quick !\nNow when ii load up datasets : they are generally only a few bad steps before it begins to drop below zero maintaining a steady 0.6 etc whilst loading the unnseen new dataset , hence not needing so many epochs to adjust the matrix to the new information !\n\nIm not sure if Lora actually works when you save them but i do save some and use them to load models for training ! as they are jump starts for model which did not recive that fine tuning , they can be merged and alligned ! (probably thiey are Good! )",
"### MOTTO FOR MODEL!\n\nModels are the same as loras , take them with light weight like tablets of knowledge! \n\n\n\n\n\n\n\n\n\n\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #text-generation-inference #unsloth #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #conversational #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: LeroyDyer\n- License: apache-2.0\n- Finetuned from model : LeroyDyer/Mixtral_AI_CyberTron_Ultra",
"### Ok Its a Great MODEL !\n\n\nHighly Math Trained As well as many TextBooks and Lessons Highly fit datasets as well as Coding Datasets highly tuned! \n\nThis model has absorbed all its previous generations as well as ALL high performers and Specialist models (mistral) It has absorb many foriegn languge models and still stays as an english model !\n\nVery impressive responses Short and long as also it was trained on some binary datasets to return a direct answer! and others to perform step by step response as wel as other to perform interactive response with clients for vairous tasks, such as product design and system design discussion:\n\nFinacial information and other finacial tasks have been highly tunes also : Infact when returning to previous aligned datasets they stayed in line and was sdtill able to achieve High tuning!\nHence a process of merging with a specific topic or role and then training for the role and topic on themed data, hence previous itterations heavily tuned for medical or law or role play as the conception was that intergating the model into a single enity may even corrput them , so the decision to seperate concerns was taken :\nThis enabled for ssstrategic merging and tuning !\n\nConcepts : chain of thought and functin calling Self rag ! Thoughts , emotive responses have been enhance where possibel with the data given . even sexy books have been highly tuned into the model : \nbut also i think american genera books (sci fi, fantasy, romance novels are required) for great role play which some expect: )\nI have recently seen a strategy in which prompts can be embedded into the adapter to Trigger Specific Roles : \nI hae tried to remove such prompting as you are a helpful ai to a character theme instead such as you are a cyber hacker by day and business man by night ! ie to give the model various internal personas !\nafter some training i noticed it was also talking to itself !! (rehersing) but the tokens for thought were missing so it lookeed strange until i noticed the bug; \nAfter removing the thought tokens they were displayed in the output as the tokenizer was masking them !\n\nBut Still a Great Model , Given a Task based data set it Coverges Super quickly hence my enjoyment of the model as training of it is super quick !\nNow when ii load up datasets : they are generally only a few bad steps before it begins to drop below zero maintaining a steady 0.6 etc whilst loading the unnseen new dataset , hence not needing so many epochs to adjust the matrix to the new information !\n\nIm not sure if Lora actually works when you save them but i do save some and use them to load models for training ! as they are jump starts for model which did not recive that fine tuning , they can be merged and alligned ! (probably thiey are Good! )",
"### MOTTO FOR MODEL!\n\nModels are the same as loras , take them with light weight like tablets of knowledge! \n\n\n\n\n\n\n\n\n\n\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null |
transformers
|
# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q4_K_S-GGUF
This model was converted to GGUF format from [`LeroyDyer/Mixtral_AI_CyberTron_Ultra`](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Ultra) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Ultra) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q4_K_S-GGUF --model mixtral_ai_cybertron_ultra.Q4_K_S.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q4_K_S-GGUF --model mixtral_ai_cybertron_ultra.Q4_K_S.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral_ai_cybertron_ultra.Q4_K_S.gguf -n 128
```
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "code", "medical ", "farmer", "doctor", "Mega-Series", "Cyber-Series", "Role-Play", "Self-Rag", "ThinkingBot", "llama-cpp", "gguf-my-repo"], "datasets": ["gretelai/synthetic_text_to_sql", "HuggingFaceTB/cosmopedia", "teknium/OpenHermes-2.5", "Open-Orca/SlimOrca", "Open-Orca/OpenOrca", "cognitivecomputations/dolphin-coder", "databricks/databricks-dolly-15k", "yahma/alpaca-cleaned", "uonlp/CulturaX", "mwitiderrick/SwahiliPlatypus", "swahili", "Rogendo/English-Swahili-Sentence-Pairs", "ise-uiuc/Magicoder-Evol-Instruct-110K", "meta-math/MetaMathQA"], "metrics": ["accuracy", "bertscore", "bleu", "brier_score", "cer", "character", "charcut_mt", "chrf", "code_eval"], "base_model": "LeroyDyer/Mixtral_AI_CyberTron_Ultra"}
|
LeroyDyer/Mixtral_AI_CyberTron_Ultra7b
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"code",
"medical ",
"farmer",
"doctor",
"Mega-Series",
"Cyber-Series",
"Role-Play",
"Self-Rag",
"ThinkingBot",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:teknium/OpenHermes-2.5",
"dataset:Open-Orca/SlimOrca",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:databricks/databricks-dolly-15k",
"dataset:yahma/alpaca-cleaned",
"dataset:uonlp/CulturaX",
"dataset:mwitiderrick/SwahiliPlatypus",
"dataset:swahili",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:meta-math/MetaMathQA",
"base_model:LeroyDyer/Mixtral_AI_CyberTron_Ultra",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:35:56+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #llama-cpp #gguf-my-repo #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #endpoints_compatible #region-us
|
# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q4_K_S-GGUF
This model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Ultra' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Ultra' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #llama-cpp #gguf-my-repo #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q4_K_S-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Ultra' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
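Until the author documents usage, a minimal chat sketch (assumptions: standard loading and a bundled chat template, suggested by the repo's `conversational` tag):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cilantro9246/qd28c05"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A chat template is assumed to ship with the tokenizer.
messages = [{"role": "user", "content": "Hello!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```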
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
cilantro9246/qd28c05
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:38:01+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spark-name-ka-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ka-en](https://huggingface.co/Helsinki-NLP/opus-mt-ka-en) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1121
- Bleu: 6.5037
- Gen Len: 7.0421
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
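For readers reproducing this run, the list above maps onto the 🤗 `Seq2SeqTrainingArguments` roughly as follows. This is a sketch, not the original training script: the output directory is a placeholder, and Adam's betas/epsilon match the library defaults already noted above.
```python
# Hypothetical reconstruction of the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="spark-name-ka-to-en",   # placeholder name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    predict_with_generate=True,         # needed for the BLEU / Gen Len metrics
)
```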
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 95 | 4.4596 | 2.2031 | 8.9789 |
| No log | 2.0 | 190 | 4.2461 | 2.3667 | 6.4632 |
| No log | 3.0 | 285 | 4.1547 | 3.4814 | 7.0 |
| No log | 4.0 | 380 | 4.1211 | 6.7762 | 6.8211 |
| No log | 5.0 | 475 | 4.1121 | 6.5037 | 7.0421 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.2.1+cpu
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["bleu"], "base_model": "Helsinki-NLP/opus-mt-ka-en", "model-index": [{"name": "spark-name-ka-to-en", "results": []}]}
|
ihebaker10/spark-name-ka-to-en
| null |
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-ka-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:40:29+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #marian #text2text-generation #generated_from_trainer #base_model-Helsinki-NLP/opus-mt-ka-en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
spark-name-ka-to-en
===================
This model is a fine-tuned version of Helsinki-NLP/opus-mt-ka-en on an unspecified dataset.
It achieves the following results on the evaluation set:
* Loss: 4.1121
* Bleu: 6.5037
* Gen Len: 7.0421
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.39.1
* Pytorch 2.2.1+cpu
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.1+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #marian #text2text-generation #generated_from_trainer #base_model-Helsinki-NLP/opus-mt-ka-en #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.1\n* Pytorch 2.2.1+cpu\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-wo-live_qa-sft
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
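For reference, the effective batch sizes above follow directly from the per-device settings: total_train_batch_size = 4 (per device) × 4 (devices) × 4 (gradient accumulation steps) = 64, and total_eval_batch_size = 4 (per device) × 4 (devices) = 16.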
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1023 | 1.0 | 7 | 1.3855 |
| 0.983 | 2.0 | 14 | 1.4054 |
| 0.7795 | 3.0 | 21 | 1.4111 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama2-7b-wo-live_qa-sft", "results": []}]}
|
Minbyul/llama2-7b-wo-live_qa-sft
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T11:41:57+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-meta-llama/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
llama2-7b-wo-live\_qa-sft
=========================
This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4111
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-meta-llama/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
null |
transformers
|
# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q3_K_M-GGUF
This model was converted to GGUF format from [`LeroyDyer/Mixtral_AI_CyberTron_Ultra`](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Ultra) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Ultra) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q3_K_M-GGUF --model mixtral_ai_cybertron_ultra.Q3_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q3_K_M-GGUF --model mixtral_ai_cybertron_ultra.Q3_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral_ai_cybertron_ultra.Q3_K_M.gguf -n 128
```
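If you prefer Python over the CLI, the same file can also be loaded with the `llama-cpp-python` bindings. This is a sketch, not part of the original card; it assumes `pip install llama-cpp-python` and that the GGUF file has already been downloaded locally:
```python
# Sketch: run the quantized model via llama-cpp-python instead of the CLI.
from llama_cpp import Llama

llm = Llama(model_path="mixtral_ai_cybertron_ultra.Q3_K_M.gguf", n_ctx=2048)
out = llm("The meaning to life and the universe is", max_tokens=128)
print(out["choices"][0]["text"])
```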
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "code", "medical ", "farmer", "doctor", "Mega-Series", "Cyber-Series", "Role-Play", "Self-Rag", "ThinkingBot", "llama-cpp", "gguf-my-repo"], "datasets": ["gretelai/synthetic_text_to_sql", "HuggingFaceTB/cosmopedia", "teknium/OpenHermes-2.5", "Open-Orca/SlimOrca", "Open-Orca/OpenOrca", "cognitivecomputations/dolphin-coder", "databricks/databricks-dolly-15k", "yahma/alpaca-cleaned", "uonlp/CulturaX", "mwitiderrick/SwahiliPlatypus", "swahili", "Rogendo/English-Swahili-Sentence-Pairs", "ise-uiuc/Magicoder-Evol-Instruct-110K", "meta-math/MetaMathQA"], "metrics": ["accuracy", "bertscore", "bleu", "brier_score", "cer", "character", "charcut_mt", "chrf", "code_eval"], "base_model": "LeroyDyer/Mixtral_AI_CyberTron_Ultra"}
|
LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q3_K_M-GGUF
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"code",
"medical ",
"farmer",
"doctor",
"Mega-Series",
"Cyber-Series",
"Role-Play",
"Self-Rag",
"ThinkingBot",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:teknium/OpenHermes-2.5",
"dataset:Open-Orca/SlimOrca",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:databricks/databricks-dolly-15k",
"dataset:yahma/alpaca-cleaned",
"dataset:uonlp/CulturaX",
"dataset:mwitiderrick/SwahiliPlatypus",
"dataset:swahili",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:meta-math/MetaMathQA",
"base_model:LeroyDyer/Mixtral_AI_CyberTron_Ultra",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:43:03+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #llama-cpp #gguf-my-repo #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #endpoints_compatible #region-us
|
# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q3_K_M-GGUF
This model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Ultra' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q3_K_M-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Ultra' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #llama-cpp #gguf-my-repo #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q3_K_M-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Ultra' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
weighted/imatrix quants of https://huggingface.co/ibivibiv/aegolius-acadicus-24b-v2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
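As a sketch (not part of the original README), individual quant files can also be fetched programmatically with the `huggingface_hub` client; the filename below is one of the quants listed in the table:
```python
# Hypothetical download sketch using the huggingface_hub client.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/aegolius-acadicus-24b-v2-i1-GGUF",
    filename="aegolius-acadicus-24b-v2.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
)
print(path)  # local cache path of the downloaded GGUF file
```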
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-IQ1_S.gguf) | i1-IQ1_S | 5.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-IQ1_M.gguf) | i1-IQ1_M | 5.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-IQ2_S.gguf) | i1-IQ2_S | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-IQ2_M.gguf) | i1-IQ2_M | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-Q2_K.gguf) | i1-Q2_K | 8.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 9.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 10.0 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 10.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-IQ3_S.gguf) | i1-IQ3_S | 10.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-IQ3_M.gguf) | i1-IQ3_M | 10.7 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 11.7 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 12.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 13.0 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-Q4_0.gguf) | i1-Q4_0 | 13.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 13.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 14.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 17.2 | |
| [GGUF](https://huggingface.co/mradermacher/aegolius-acadicus-24b-v2-i1-GGUF/resolve/main/aegolius-acadicus-24b-v2.i1-Q6_K.gguf) | i1-Q6_K | 19.9 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["moe", "moerge"], "base_model": "ibivibiv/aegolius-acadicus-24b-v2", "quantized_by": "mradermacher"}
|
mradermacher/aegolius-acadicus-24b-v2-i1-GGUF
| null |
[
"transformers",
"gguf",
"moe",
"moerge",
"en",
"base_model:ibivibiv/aegolius-acadicus-24b-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:43:45+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #moe #moerge #en #base_model-ibivibiv/aegolius-acadicus-24b-v2 #license-apache-2.0 #endpoints_compatible #region-us
|
About
-----
weighted/imatrix quants of URL
static quants are available at URL
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including on how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #moe #moerge #en #base_model-ibivibiv/aegolius-acadicus-24b-v2 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q8_0-GGUF
This model was converted to GGUF format from [`LeroyDyer/Mixtral_AI_CyberTron_Ultra`](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Ultra) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Ultra) for more details on the model.
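For context: Q8_0 is llama.cpp's 8-bit quantization format. It stays very close to full-precision quality, at the cost of a substantially larger file than the K-quants (such as the Q3_K_M build of the same model above).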
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q8_0-GGUF --model mixtral_ai_cybertron_ultra.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q8_0-GGUF --model mixtral_ai_cybertron_ultra.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m mixtral_ai_cybertron_ultra.Q8_0.gguf -n 128
```
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "code", "medical ", "farmer", "doctor", "Mega-Series", "Cyber-Series", "Role-Play", "Self-Rag", "ThinkingBot", "llama-cpp", "gguf-my-repo"], "datasets": ["gretelai/synthetic_text_to_sql", "HuggingFaceTB/cosmopedia", "teknium/OpenHermes-2.5", "Open-Orca/SlimOrca", "Open-Orca/OpenOrca", "cognitivecomputations/dolphin-coder", "databricks/databricks-dolly-15k", "yahma/alpaca-cleaned", "uonlp/CulturaX", "mwitiderrick/SwahiliPlatypus", "swahili", "Rogendo/English-Swahili-Sentence-Pairs", "ise-uiuc/Magicoder-Evol-Instruct-110K", "meta-math/MetaMathQA"], "metrics": ["accuracy", "bertscore", "bleu", "brier_score", "cer", "character", "charcut_mt", "chrf", "code_eval"], "base_model": "LeroyDyer/Mixtral_AI_CyberTron_Ultra"}
|
LeroyDyer/Mixtral_AI_CyberTron_Ultra_Q8
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"code",
"medical ",
"farmer",
"doctor",
"Mega-Series",
"Cyber-Series",
"Role-Play",
"Self-Rag",
"ThinkingBot",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:teknium/OpenHermes-2.5",
"dataset:Open-Orca/SlimOrca",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:databricks/databricks-dolly-15k",
"dataset:yahma/alpaca-cleaned",
"dataset:uonlp/CulturaX",
"dataset:mwitiderrick/SwahiliPlatypus",
"dataset:swahili",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:meta-math/MetaMathQA",
"base_model:LeroyDyer/Mixtral_AI_CyberTron_Ultra",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:44:34+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #llama-cpp #gguf-my-repo #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #endpoints_compatible #region-us
|
# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q8_0-GGUF
This model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Ultra' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q8_0-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Ultra' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #llama-cpp #gguf-my-repo #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #endpoints_compatible #region-us \n",
"# LeroyDyer/Mixtral_AI_CyberTron_Ultra-Q8_0-GGUF\nThis model was converted to GGUF format from 'LeroyDyer/Mixtral_AI_CyberTron_Ultra' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text2text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
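The card leaves this blank as well. Since the row's tags mark this as a BART text2text checkpoint, a minimal hypothetical sketch with the `pipeline` API (repo id taken from this row's metadata) might be:
```python
# Hypothetical sketch, not from the original card.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="AdityaNamuduri/testing")
print(pipe("The quick brown fox jumps over the lazy dog."))
```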
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
AdityaNamuduri/testing
| null |
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T11:58:45+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #bart #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #bart #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: Hermes for input & output, WizardMath in the middle layers
```
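For background (not part of the original card): spherical linear interpolation blends the two checkpoints' parameter vectors along the arc between them rather than along a straight line. With $\theta$ the angle between the (normalized) parameter vectors $p$ and $q$:

$$\operatorname{slerp}(p, q; t) = \frac{\sin\bigl((1-t)\theta\bigr)}{\sin\theta}\,p + \frac{\sin(t\theta)}{\sin\theta}\,q$$

Under the layer-wise schedule `t: [0, 0.5, 1, 0.5, 0]`, the input and output layers stay at the base model ($t=0$, Hermes) while the middle layers move fully to WizardMath ($t=1$).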
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Hermes-2-Pro-Mistral-7B", "WizardLM/WizardMath-7B-V1.1"]}
|
mergekit-community/mergekit-slerp-ovodfvr
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:WizardLM/WizardMath-7B-V1.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:02:02+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* NousResearch/Hermes-2-Pro-Mistral-7B
* WizardLM/WizardMath-7B-V1.1
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en_vi_envit5-base_conv_train
This model is a fine-tuned version of [VietAI/envit5-base](https://huggingface.co/VietAI/envit5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.37.2
- Pytorch 1.12.1+cu116
- Datasets 2.18.0
- Tokenizers 0.15.1
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/envit5-base", "model-index": [{"name": "en_vi_envit5-base_conv_train", "results": []}]}
|
yuufong/en_vi_envit5-base_conv_train
| null |
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/envit5-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:04:06+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/envit5-base #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# en_vi_envit5-base_conv_train
This model is a fine-tuned version of VietAI/envit5-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
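Here gradient accumulation stands in for a larger device batch: gradients from 16 micro-batches of 16 examples are summed before each optimizer step, which yields the effective batch size of 16 × 16 = 256.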
### Framework versions
- Transformers 4.37.2
- Pytorch 1.12.1+cu116
- Datasets 2.18.0
- Tokenizers 0.15.1
|
[
"# en_vi_envit5-base_conv_train\n\nThis model is a fine-tuned version of VietAI/envit5-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20",
"### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 1.12.1+cu116\n- Datasets 2.18.0\n- Tokenizers 0.15.1"
] |
[
"TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/envit5-base #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# en_vi_envit5-base_conv_train\n\nThis model is a fine-tuned version of VietAI/envit5-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20",
"### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 1.12.1+cu116\n- Datasets 2.18.0\n- Tokenizers 0.15.1"
] |
text-to-image
|
diffusers
|
# LoRA DreamBooth - ClaireOzzz/murtceps
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
```
murtceps
```
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
```
pip install diffusers --upgrade
```
In addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
To just use the base model, you can run:
```python
import torch
from diffusers import DiffusionPipeline, AutoencoderKL
device = "cuda" if torch.cuda.is_available() else "cpu"
vae = AutoencoderKL.from_pretrained('madebyollin/sdxl-vae-fp16-fix', torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
vae=vae, torch_dtype=torch.float16, variant="fp16",
use_safetensors=True
)
pipe.to(device)
# This is where you load your trained weights
specific_safetensors = "pytorch_lora_weights.safetensors"
lora_scale = 0.9
pipe.load_lora_weights(
'ClaireOzzz/murtceps',
weight_name = specific_safetensors,
# use_auth_token = True
)
prompt = "A majestic murtceps jumping from a big stone at night"
image = pipe(
prompt=prompt,
num_inference_steps=50,
cross_attention_kwargs={"scale": lora_scale}
).images[0]
```
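As a usage note: `lora_scale`, passed through `cross_attention_kwargs`, blends the LoRA contribution into the base model's attention layers. A value of 0.0 disables the adapter and 1.0 applies it at full strength, so the 0.9 above keeps the learned `murtceps` concept dominant while retaining a little of the base model's behavior.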
|
{"tags": ["stable-diffusion-xl", "stable-diffusion-xl-diffusers", "text-to-image", "diffusers", "lora"], "datasets": ["ClaireOzzz/specbnw3"], "base_model": "stabilityai/stable-diffusion-xl-base-1.0", "instance_prompt": "murtceps", "inference": false}
|
ClaireOzzz/murtceps
| null |
[
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"dataset:ClaireOzzz/specbnw3",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | null |
2024-04-14T12:05:00+00:00
|
[] |
[] |
TAGS
#diffusers #tensorboard #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #dataset-ClaireOzzz/specbnw3 #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us
|
# LoRA DreamBooth - ClaireOzzz/murtceps
These are LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer.
The weights were trained on the concept prompt:
Use this keyword to trigger your custom model in your prompts.
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Usage
Make sure to upgrade diffusers to >= 0.19.0:
In addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:
To just use the base model, you can run:
|
[
"# LoRA DreamBooth - ClaireOzzz/murtceps\nThese are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer. \nThe weights were trained on the concept prompt: \n \nUse this keyword to trigger your custom model in your prompts. \nLoRA for the text encoder was enabled: False.\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Usage\nMake sure to upgrade diffusers to >= 0.19.0:\n\nIn addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:\n\nTo just use the base model, you can run:"
] |
[
"TAGS\n#diffusers #tensorboard #stable-diffusion-xl #stable-diffusion-xl-diffusers #text-to-image #lora #dataset-ClaireOzzz/specbnw3 #base_model-stabilityai/stable-diffusion-xl-base-1.0 #region-us \n",
"# LoRA DreamBooth - ClaireOzzz/murtceps\nThese are LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0 trained on @fffiloni's SD-XL trainer. \nThe weights were trained on the concept prompt: \n \nUse this keyword to trigger your custom model in your prompts. \nLoRA for the text encoder was enabled: False.\nSpecial VAE used for training: madebyollin/sdxl-vae-fp16-fix.",
"## Usage\nMake sure to upgrade diffusers to >= 0.19.0:\n\nIn addition make sure to install transformers, safetensors, accelerate as well as the invisible watermark:\n\nTo just use the base model, you can run:"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meditron-7b-wo-live_qa-sft
This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4959
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
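Note that the effective batch size of 64 follows from 4 (per-device) × 4 (devices) × 4 (gradient-accumulation steps). As a rough sketch, the listed values map onto `transformers.TrainingArguments` as below; `output_dir` and the precision flag are assumptions, not part of the card:
```python
from transformers import TrainingArguments

# Approximate reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="meditron-7b-wo-live_qa-sft",  # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,  # assumption: the card does not state the training precision
)
```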
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1705 | 1.0 | 7 | 1.4560 |
| 1.0304 | 2.0 | 14 | 1.4817 |
| 0.7906 | 3.0 | 21 | 1.4959 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "llama2", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b-wo-live_qa-sft", "results": []}]}
|
Minbyul/meditron-7b-wo-live_qa-sft
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:epfl-llm/meditron-7b",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:06:11+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-epfl-llm/meditron-7b #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
meditron-7b-wo-live\_qa-sft
===========================
This model is a fine-tuned version of epfl-llm/meditron-7b on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4959
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-epfl-llm/meditron-7b #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
video-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViVitTrained-ABAW
This model is a fine-tuned version of [google/vivit-b-16x2-kinetics400](https://huggingface.co/google/vivit-b-16x2-kinetics400) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
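A minimal inference sketch follows; the label set of this fine-tune is undocumented, and if the processor config was not pushed with the checkpoint it can be loaded from the base google/vivit-b-16x2-kinetics400 instead:
```python
import numpy as np
import torch
from transformers import VivitImageProcessor, VivitForVideoClassification

processor = VivitImageProcessor.from_pretrained("Sapezb/ViVitTrained-ABAW")
model = VivitForVideoClassification.from_pretrained("Sapezb/ViVitTrained-ABAW")

# Dummy 32-frame clip; replace with real (H, W, 3) uint8 video frames.
video = list(np.random.randint(0, 255, (32, 224, 224, 3), dtype=np.uint8))
inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```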
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "google/vivit-b-16x2-kinetics400", "model-index": [{"name": "ViVitTrained-ABAW", "results": []}]}
|
Sapezb/ViVitTrained-ABAW
| null |
[
"transformers",
"tensorboard",
"safetensors",
"vivit",
"video-classification",
"generated_from_trainer",
"base_model:google/vivit-b-16x2-kinetics400",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T12:10:18+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #vivit #video-classification #generated_from_trainer #base_model-google/vivit-b-16x2-kinetics400 #license-mit #endpoints_compatible #region-us
|
# ViVitTrained-ABAW
This model is a fine-tuned version of google/vivit-b-16x2-kinetics400 on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# ViVitTrained-ABAW\n\nThis model is a fine-tuned version of google/vivit-b-16x2-kinetics400 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #vivit #video-classification #generated_from_trainer #base_model-google/vivit-b-16x2-kinetics400 #license-mit #endpoints_compatible #region-us \n",
"# ViVitTrained-ABAW\n\nThis model is a fine-tuned version of google/vivit-b-16x2-kinetics400 on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0002\n- train_batch_size: 3\n- eval_batch_size: 3\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 1",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [unsloth/codegemma-7b](https://huggingface.co/unsloth/codegemma-7b)
* [cloudyu/google-gemma-7b-chinese-sft-v1](https://huggingface.co/cloudyu/google-gemma-7b-chinese-sft-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cloudyu/google-gemma-7b-chinese-sft-v1
- model: unsloth/codegemma-7b
merge_method: slerp
base_model: unsloth/codegemma-7b
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: codegemma-7b (the base) for the input & output layers, the Chinese SFT model in the middle layers
```
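To reproduce the merge, one would typically save the configuration above as a file and run the mergekit CLI; the output directory name here is an arbitrary choice:
```bash
# Assumes mergekit is installed, e.g. via pip install mergekit.
mergekit-yaml config.yaml ./mergekit-slerp-ynceepa --cuda
```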
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["unsloth/codegemma-7b", "cloudyu/google-gemma-7b-chinese-sft-v1"]}
|
mergekit-community/mergekit-slerp-ynceepa
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"base_model:unsloth/codegemma-7b",
"base_model:cloudyu/google-gemma-7b-chinese-sft-v1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:10:19+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #mergekit #merge #base_model-unsloth/codegemma-7b #base_model-cloudyu/google-gemma-7b-chinese-sft-v1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* unsloth/codegemma-7b
* cloudyu/google-gemma-7b-chinese-sft-v1
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* unsloth/codegemma-7b\n* cloudyu/google-gemma-7b-chinese-sft-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #mergekit #merge #base_model-unsloth/codegemma-7b #base_model-cloudyu/google-gemma-7b-chinese-sft-v1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* unsloth/codegemma-7b\n* cloudyu/google-gemma-7b-chinese-sft-v1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-to-audio
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_kazakh_tts2_1
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the KazakhTTS2 dataset (Mussakhojayeva, S., Khassanov, Y., & Varol, H.A. (2022). KazakhTTS2: Extending the Open-Source Kazakh TTS Corpus With More Data, Speakers, and Topics. International Conference on Language Resources and Evaluation).
It achieves the following results on the evaluation set:
- Loss: 0.4600
## Model description
More information needed
## Intended uses & limitations
More information needed
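A minimal text-to-speech sketch with the transformers SpeechT5 API; the zero speaker embedding is a placeholder, and real use needs a 512-dimensional x-vector for a KazakhTTS2 speaker:
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("zizzimars/speecht5_finetuned_kazakh_tts2_1")
model = SpeechT5ForTextToSpeech.from_pretrained("zizzimars/speecht5_finetuned_kazakh_tts2_1")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="...", return_tensors="pt")  # Kazakh input text
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```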
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.725 | 0.06 | 100 | 0.6639 |
| 0.6132 | 0.11 | 200 | 0.5466 |
| 0.571 | 0.17 | 300 | 0.5207 |
| 0.5647 | 0.22 | 400 | 0.5120 |
| 0.5556 | 0.28 | 500 | 0.5047 |
| 0.5475 | 0.34 | 600 | 0.5003 |
| 0.5432 | 0.39 | 700 | 0.4975 |
| 0.5366 | 0.45 | 800 | 0.4944 |
| 0.5376 | 0.5 | 900 | 0.4913 |
| 0.5325 | 0.56 | 1000 | 0.4868 |
| 0.5281 | 0.62 | 1100 | 0.4861 |
| 0.5288 | 0.67 | 1200 | 0.4848 |
| 0.5251 | 0.73 | 1300 | 0.4825 |
| 0.5213 | 0.78 | 1400 | 0.4818 |
| 0.5225 | 0.84 | 1500 | 0.4823 |
| 0.5199 | 0.9 | 1600 | 0.4812 |
| 0.5211 | 0.95 | 1700 | 0.4816 |
| 0.5194 | 1.01 | 1800 | 0.4826 |
| 0.5224 | 1.06 | 1900 | 0.4798 |
| 0.5213 | 1.12 | 2000 | 0.4800 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "microsoft/speecht5_tts", "model-index": [{"name": "speecht5_finetuned_kazakh_tts2_1", "results": []}]}
|
zizzimars/speecht5_finetuned_kazakh_tts2_1
| null |
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T12:11:28+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us
|
speecht5\_finetuned\_kazakh\_tts2\_1
====================================
This model is a fine-tuned version of microsoft/speecht5\_tts on the KazakhTTS2 dataset (Mussakhojayeva, S., Khassanov, Y., & Varol, H.A. (2022). KazakhTTS2: Extending the Open-Source Kazakh TTS Corpus With More Data, Speakers, and Topics. International Conference on Language Resources and Evaluation).
It achieves the following results on the evaluation set:
* Loss: 0.4600
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 1e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* gradient\_accumulation\_steps: 16
* total\_train\_batch\_size: 64
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 200
* training\_steps: 2000
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.38.1
* Pytorch 2.2.1+cu118
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* training\\_steps: 2000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.1\n* Pytorch 2.2.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #speecht5 #text-to-audio #generated_from_trainer #base_model-microsoft/speecht5_tts #license-mit #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 1e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* gradient\\_accumulation\\_steps: 16\n* total\\_train\\_batch\\_size: 64\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 200\n* training\\_steps: 2000\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.1\n* Pytorch 2.2.1+cu118\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text2text-generation
|
transformers
|
# Model Card for Model ID
GAI Project 2.b Text summarization
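A minimal usage sketch with the transformers summarization pipeline; the generation settings are illustrative assumptions:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="FelixChao/T5-Chinese-Summarization")
article = "..."  # Chinese source text to summarize
print(summarizer(article, max_length=64, do_sample=False)[0]["summary_text"])
```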
|
{"library_name": "transformers", "tags": []}
|
FelixChao/T5-Chinese-Summarization
| null |
[
"transformers",
"safetensors",
"mt5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:15:22+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
GAI Project 2.b Text summarization
|
[
"# Model Card for Model ID\n\nGAI Project 2.b Text summarization"
] |
[
"TAGS\n#transformers #safetensors #mt5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID\n\nGAI Project 2.b Text summarization"
] |
text-generation
|
transformers
|
# Japanese-Starling-ChatV-7B
このモデルは"chatntq-ja-7b-v1.0"をベースにした7Bパラメータの日本語チャットモデルです。
<br>"Japanese-Starling-ChatV-7B" is a Japanese chat model built on top of "chatntq-ja-7b-v1.0", originally based on Mistral-7B-v0.1.
### <a href="https://huggingface.co/TFMC/Japanese-Starling-ChatV-7B-GGUF">詳細とGGUF版はこちら。Details and GGUFs are here. </a>
|
{"language": ["ja"], "license": "apache-2.0", "tags": ["Mistral"], "pipeline_tag": "text-generation"}
|
TFMC/Japanese-Starling-ChatV-7B
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"Mistral",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:18:31+00:00
|
[] |
[
"ja"
] |
TAGS
#transformers #safetensors #mistral #text-generation #Mistral #ja #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Japanese-Starling-ChatV-7B
このモデルは"chatntq-ja-7b-v1.0"をベースにした7Bパラメータの日本語チャットモデルです。
<br>"Japanese-Starling-ChatV-7B" is a Japanese chat model built on top of "chatntq-ja-7b-v1.0", originally based on Mistral-7B-v0.1.
### <a href="URL>詳細とGGUF版はこちら。Details and GGUFs are here. </a>
|
[
"# Japanese-Starling-ChatV-7B\n\nこのモデルは\"chatntq-ja-7b-v1.0\"をベースにした7Bパラメータの日本語チャットモデルです。\n<br>\"Japanese-Starling-ChatV-7B\" is a Japanese chat model built on top of \"chatntq-ja-7b-v1.0\", originally based on Mistral-7B-v0.1.",
"### <a href=\"URL>詳細とGGUF版はこちら。Details and GGUFs are here. </a>"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #Mistral #ja #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Japanese-Starling-ChatV-7B\n\nこのモデルは\"chatntq-ja-7b-v1.0\"をベースにした7Bパラメータの日本語チャットモデルです。\n<br>\"Japanese-Starling-ChatV-7B\" is a Japanese chat model built on top of \"chatntq-ja-7b-v1.0\", originally based on Mistral-7B-v0.1.",
"### <a href=\"URL>詳細とGGUF版はこちら。Details and GGUFs are here. </a>"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
gotchachurchkhela/SN6-23
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T12:21:49+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null | null |
# AGCobra/OpenHermes-Emojitron-001-Q8_0-GGUF
This model was converted to GGUF format from [`movaxbx/OpenHermes-Emojitron-001`](https://huggingface.co/movaxbx/OpenHermes-Emojitron-001) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/movaxbx/OpenHermes-Emojitron-001) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo AGCobra/OpenHermes-Emojitron-001-Q8_0-GGUF --model openhermes-emojitron-001.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo AGCobra/OpenHermes-Emojitron-001-Q8_0-GGUF --model openhermes-emojitron-001.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m openhermes-emojitron-001.Q8_0.gguf -n 128
```
|
{"language": ["en"], "license": "apache-2.0", "tags": ["mistral", "instruct", "finetune", "chatml", "llama-cpp", "gguf-my-repo"], "base_model": "teknium/OpenHermes-2.5-Mistral-7B", "model-index": [{"name": "OpenHermes-Emojitron-001", "results": []}]}
|
AGCobra/OpenHermes-Emojitron-001-Q8_0-GGUF
| null |
[
"gguf",
"mistral",
"instruct",
"finetune",
"chatml",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null |
2024-04-14T12:23:58+00:00
|
[] |
[
"en"
] |
TAGS
#gguf #mistral #instruct #finetune #chatml #llama-cpp #gguf-my-repo #en #base_model-teknium/OpenHermes-2.5-Mistral-7B #license-apache-2.0 #region-us
|
# AGCobra/OpenHermes-Emojitron-001-Q8_0-GGUF
This model was converted to GGUF format from 'movaxbx/OpenHermes-Emojitron-001' using URL via the URL's GGUF-my-repo space.
Refer to the original model card for more details on the model.
## Use with URL
Install URL through brew.
Invoke the URL server or the CLI.
CLI:
Server:
Note: You can also use this checkpoint directly through the usage steps listed in the URL repo as well.
|
[
"# AGCobra/OpenHermes-Emojitron-001-Q8_0-GGUF\nThis model was converted to GGUF format from 'movaxbx/OpenHermes-Emojitron-001' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
[
"TAGS\n#gguf #mistral #instruct #finetune #chatml #llama-cpp #gguf-my-repo #en #base_model-teknium/OpenHermes-2.5-Mistral-7B #license-apache-2.0 #region-us \n",
"# AGCobra/OpenHermes-Emojitron-001-Q8_0-GGUF\nThis model was converted to GGUF format from 'movaxbx/OpenHermes-Emojitron-001' using URL via the URL's GGUF-my-repo space.\nRefer to the original model card for more details on the model.",
"## Use with URL\n\nInstall URL through brew.\n\n\nInvoke the URL server or the CLI.\n\nCLI:\n\n\n\nServer:\n\n\n\nNote: You can also use this checkpoint directly through the usage steps listed in the URL repo as well."
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biomistral-7b-wo-medication_qa-sft
This model is a fine-tuned version of [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6409
## Model description
More information needed
## Intended uses & limitations
More information needed
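A minimal chat sketch, assuming the SFT checkpoint ships a chat template (the card itself does not document the prompt format):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Minbyul/biomistral-7b-wo-medication_qa-sft")
model = AutoModelForCausalLM.from_pretrained(
    "Minbyul/biomistral-7b-wo-medication_qa-sft", torch_dtype=torch.bfloat16, device_map="auto"
)
messages = [{"role": "user", "content": "What are common side effects of metformin?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```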
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3711 | 1.0 | 6 | 1.7329 |
| 1.0734 | 2.0 | 12 | 1.6324 |
| 0.8291 | 3.0 | 18 | 1.6409 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft"], "base_model": "BioMistral/BioMistral-7B", "model-index": [{"name": "biomistral-7b-wo-medication_qa-sft", "results": []}]}
|
Minbyul/biomistral-7b-wo-medication_qa-sft
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:BioMistral/BioMistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:26:28+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-BioMistral/BioMistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
biomistral-7b-wo-medication\_qa-sft
===================================
This model is a fine-tuned version of BioMistral/BioMistral-7B on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
* Loss: 1.6409
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #conversational #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-BioMistral/BioMistral-7B #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
token-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 14apr-bert-uncased
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1141
- Precision: 0.9797
- Recall: 0.9796
- F1: 0.9797
- Accuracy: 0.9774
## Model description
More information needed
## Intended uses & limitations
More information needed
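A minimal inference sketch with the token-classification pipeline; the entity label set of this checkpoint is not documented, so the output labels are whatever the fine-tune defined:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sandeepmaddu/14apr-bert-cased",
    aggregation_strategy="simple",
)
print(ner("Hugging Face was founded in New York City."))
```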
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1405 | 1.0 | 2500 | 0.1016 | 0.9731 | 0.9761 | 0.9746 | 0.9721 |
| 0.0994 | 2.0 | 5000 | 0.0939 | 0.9776 | 0.9774 | 0.9775 | 0.9750 |
| 0.0731 | 3.0 | 7500 | 0.0968 | 0.9783 | 0.9790 | 0.9787 | 0.9767 |
| 0.045 | 4.0 | 10000 | 0.1075 | 0.9790 | 0.9798 | 0.9794 | 0.9773 |
| 0.035 | 5.0 | 12500 | 0.1141 | 0.9797 | 0.9796 | 0.9797 | 0.9774 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "base_model": "bert-base-cased", "model-index": [{"name": "14apr-bert-uncased", "results": []}]}
|
sandeepmaddu/14apr-bert-cased
| null |
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T12:26:53+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
14apr-bert-uncased
==================
This model is a fine-tuned version of bert-base-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.1141
* Precision: 0.9797
* Recall: 0.9796
* F1: 0.9797
* Accuracy: 0.9774
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 5
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #bert #token-classification #generated_from_trainer #base_model-bert-base-cased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 5",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
wookyungseo/qlora-koalpaca-polyglot-12.8b-500step
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T12:30:34+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"license": "afl-3.0", "library_name": "transformers"}
|
metterian/gemma-pro-ko-10b
| null |
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:31:18+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #license-afl-3.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gemma #text-generation #arxiv-1910.09700 #license-afl-3.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_weigths_2
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3984
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8026 | 1.0 | 5194 | 0.4062 |
| 0.817 | 2.0 | 10388 | 0.3952 |
| 0.6804 | 3.0 | 15582 | 0.3953 |
| 0.725 | 4.0 | 20776 | 0.3984 |
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
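Since this repository holds a PEFT (LoRA) adapter rather than full model weights, the adapter has to be attached to the base model at load time. A minimal sketch, assuming `transformers` and `peft` are installed and the adapter is on the Hub under this repo id:

```python
# Minimal sketch: attach the LoRA adapter to the Llama-2 chat base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", torch_dtype=torch.float16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = PeftModel.from_pretrained(base, "Yan777/trained_weigths_2")  # adapter repo id

prompt = "[INST] Hello, who are you? [/INST]"  # simplified Llama-2 chat prompt
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```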
|
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-7b-chat-hf", "model-index": [{"name": "trained_weigths_2", "results": []}]}
|
Yan777/trained_weigths_2
| null |
[
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null |
2024-04-14T12:35:56+00:00
|
[] |
[] |
TAGS
#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us
|
trained\_weigths\_2
===================
This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3984
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0002
* train\_batch\_size: 4
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.03
* num\_epochs: 4
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* PEFT 0.7.2.dev0
* Transformers 4.36.2
* Pytorch 2.2.1
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.2.dev0\n* Transformers 4.36.2\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#peft #tensorboard #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-7b-chat-hf #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0002\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.03\n* num\\_epochs: 4\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* PEFT 0.7.2.dev0\n* Transformers 4.36.2\n* Pytorch 2.2.1\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-cybersecurity_readme
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9861
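Assuming the reported loss is the standard mean token-level cross-entropy (in nats) of a causal LM, it implies a validation perplexity of roughly exp(2.9861) ≈ 19.8:

```python
# Perplexity implied by the reported validation loss (assumes mean cross-entropy in nats).
import math
print(math.exp(2.9861))  # ≈ 19.81
```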
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 125 | 3.0330 |
| No log | 2.0 | 250 | 2.9910 |
| No log | 3.0 | 375 | 2.9861 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilgpt2", "model-index": [{"name": "distilgpt2-finetuned-cybersecurity_readme", "results": []}]}
|
LDDon/distilgpt2-finetuned-cybersecurity_readme
| null |
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:distilgpt2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:37:15+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
distilgpt2-finetuned-cybersecurity\_readme
==========================================
This model is a fine-tuned version of distilgpt2 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 2.9861
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #gpt2 #text-generation #generated_from_trainer #base_model-distilgpt2 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
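A minimal inference sketch, assuming the fine-tuned checkpoint is available on the Hub under this repo id:

```python
# Minimal sketch: extractive question answering with the fine-tuned checkpoint.
from transformers import pipeline

qa = pipeline("question-answering", model="Manishonly/roberta-finetuned-subjqa-movies_2")
result = qa(
    question="Who directed the film?",
    context="The film was directed by Jane Doe and released in 2020.",  # toy example
)
print(result["answer"], result["score"])
```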
|
{"license": "cc-by-4.0", "tags": ["generated_from_trainer"], "base_model": "deepset/roberta-base-squad2", "model-index": [{"name": "roberta-finetuned-subjqa-movies_2", "results": []}]}
|
Manishonly/roberta-finetuned-subjqa-movies_2
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T12:40:05+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #roberta #question-answering #generated_from_trainer #base_model-deepset/roberta-base-squad2 #license-cc-by-4.0 #endpoints_compatible #region-us
|
# roberta-finetuned-subjqa-movies_2
This model is a fine-tuned version of deepset/roberta-base-squad2 on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
[
"# roberta-finetuned-subjqa-movies_2\n\nThis model is a fine-tuned version of deepset/roberta-base-squad2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #roberta #question-answering #generated_from_trainer #base_model-deepset/roberta-base-squad2 #license-cc-by-4.0 #endpoints_compatible #region-us \n",
"# roberta-finetuned-subjqa-movies_2\n\nThis model is a fine-tuned version of deepset/roberta-base-squad2 on the None dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.38.2\n- Pytorch 2.2.1+cu121\n- Datasets 2.18.0\n- Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-7b-wo-medication_qa-sft
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5099
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
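The total train batch size above follows directly from the per-device batch size, the gradient-accumulation steps, and the device count; a one-line check:

```python
# Effective (total) train batch size = per-device batch * accumulation steps * num devices.
print(4 * 4 * 4)  # 64, matching total_train_batch_size above
```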
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3958 | 1.0 | 6 | 1.6723 |
| 1.0573 | 2.0 | 12 | 1.5254 |
| 0.8462 | 3.0 | 18 | 1.5099 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft"], "base_model": "mistralai/Mistral-7B-v0.1", "model-index": [{"name": "mistral-7b-wo-medication_qa-sft", "results": []}]}
|
Minbyul/mistral-7b-wo-medication_qa-sft
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:40:44+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
mistral-7b-wo-medication\_qa-sft
================================
This model is a fine-tuned version of mistralai/Mistral-7B-v0.1 on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5099
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-mistralai/Mistral-7B-v0.1 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text2text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
todayboard/trevor
| null |
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:41:13+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #tensorboard #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# OGSQL-7B

### Model Description
OGSQL-7B was fine-tuned for the task of converting natural language text into SQL queries.
- **Model type**: Transformer
- **Language(s) (NLP)**: SQL (target language for generation)
- **Finetuned from model**: codellama 7b
## Use Case
OGSQL-7B is designed to facilitate the conversion of natural language queries into structured SQL commands, aiding in database querying without the need for manual SQL knowledge.
## How to Get Started with the Model
```python
# Example code to load and use the model
# Note: codellama is a decoder-only (causal) LM, so AutoModelForCausalLM is the right class here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OGSQL-7B"  # e.g. the Hub id "OneGate/OG-SQL-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def generate_sql(query):
    inputs = tokenizer.encode(query, return_tensors="pt")
    outputs = model.generate(inputs, max_new_tokens=256)  # cap the generated length
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
# Example use
query = """
using this context:
-- Create Customers Table
CREATE TABLE Customers (
customer_id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
email TEXT,
join_date DATE
);
-- Create Products Table
CREATE TABLE Products (
product_id INTEGER PRIMARY KEY,
name TEXT NOT NULL,
price DECIMAL(10, 2)
);
-- Create Orders Table
CREATE TABLE Orders (
order_id INTEGER PRIMARY KEY,
customer_id INTEGER,
product_id INTEGER,
order_date DATE,
quantity INTEGER,
total_price DECIMAL(10, 2),
FOREIGN KEY (customer_id) REFERENCES Customers(customer_id),
FOREIGN KEY (product_id) REFERENCES Products(product_id)
);
show me all the orders from last month , sort by date
"""
print(generate_sql(query))
```
## alternatively you can use this notebook:
[](https://colab.research.google.com/drive/1zfuzV3R1GQflHV_va03WArb8vhwPh_2T)
|
{"language": ["en"], "license": "cc-by-4.0", "tags": ["Text-to-sql"]}
|
OneGate/OG-SQL-7B
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"Text-to-sql",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:41:38+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #llama #text-generation #Text-to-sql #en #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# OGSQL-7B
!image/png
### Model Description
OGSQL-7B was fine-tuned for the task of converting natural language text into SQL queries.
- Model type: Transformer
- Language(s) (NLP): SQL (target language for generation)
- Finetuned from model: codellama 7b
## Use Case
OGSQL-7B is designed to facilitate the conversion of natural language queries into structured SQL commands, aiding in database querying without the need for manual SQL knowledge.
## How to Get Started with the Model
## alternatively you can use this notebook: 
!Open In Colab
|
[
	"# OGSQL-7B\n!image/png",
	"### Model Description\n\nOGSQL-7B was fine-tuned for the task of converting natural language text into SQL queries.\n\n- Model type: Transformer \n- Language(s) (NLP): SQL (target language for generation)\n- Finetuned from model: codellama 7b",
	"## Use Case\nOGSQL-7B is designed to facilitate the conversion of natural language queries into structured SQL commands, aiding in database querying without the need for manual SQL knowledge.",
	"## How to Get Started with the Model",
	"## alternatively you can use this notebook: \n!Open In Colab"
] |
[
	"TAGS\n#transformers #safetensors #llama #text-generation #Text-to-sql #en #license-cc-by-4.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
	"# OGSQL-7B\n!image/png",
	"### Model Description\n\nOGSQL-7B was fine-tuned for the task of converting natural language text into SQL queries.\n\n- Model type: Transformer \n- Language(s) (NLP): SQL (target language for generation)\n- Finetuned from model: codellama 7b",
	"## Use Case\nOGSQL-7B is designed to facilitate the conversion of natural language queries into structured SQL commands, aiding in database querying without the need for manual SQL knowledge.",
	"## How to Get Started with the Model",
	"## alternatively you can use this notebook: \n!Open In Colab"
] |
text-generation
| null |
# Japanese-Starling-ChatV-7B-GGUF
GGUF conversion of "Japanese-Starling-ChatV-7B".
"Japanese-Starling-ChatV-7B" is a Japanese chat model built on top of "chatntq-ja-7b-v1.0", originally based on Mistral-7B-v0.1.
I applied the chat vector acquired by subtracting the weights of Mistral-7B-v0.1 from the weights of "Starling-LM-7B-beta" to this model.
このモデルはchatntq-ja-7b-v1.0をベースにした7Bパラメータの日本語チャットモデルです。高性能の英語モデルであるStarling-LM-7B-betaの重みからMistral-7B-v0.1の重みを差し引くことで得たchat vectorを適用しています(ブログ記事)。
### Performance
<table>
<tr>
<th>Model<br>(Q8_0 quant)</th>
<th><a href="https://huggingface.co/andrewcanis/c4ai-command-r-v01-GGUF">c4ai-command-r-v01-GGUF</a></th>
<th>JA-Starling-ChatV-7B-GGUF (This model)</th>
<th><a href="https://huggingface.co/TFMC/ChatNTQ-JA-7b-v1.0-GGUF">ChatNTQ-JA-7b-v1.0-GGUF</a></th>
<th><a href="https://huggingface.co/mmnga/RakutenAI-7B-chat-gguf">RakutenAI-7B-chat-gguf</a></th>
<th><a href="https://huggingface.co/mmnga/ELYZA-japanese-Llama-2-7b-instruct-gguf">ELYZA-japanese-Llama-2-7b-instruct-gguf</a></th>
</tr>
<tr>
<td>Parameters</td>
<td>35B</td>
<td>7B(Mistral)</td>
<td>7B(Mistral)</td>
<td>7B(Mistral)</td>
<td>7B(Llama-2)</td>
</tr>
<tr>
<td>ELYZAtasks100<br>average score</td>
<td>3.42</td>
<td>3.42</td>
<td>3.06</td>
<td>2.82</td>
<td>2.46</td>
</tr>
</table>
Scores on "<a href="https://huggingface.co/datasets/elyza/ELYZA-tasks-100">ELYZA-tasks-100</a>" benchmark for the instruction-tuned Japanese models evaluated by GPT-4-0125-preview. Please note that this is a simplified evaluation using the Q8 quantized models.
このスコアはinstruction-tuningを行った日本語モデルのベンチマーク「ELYZA-tasks-100」を使い、GPT-4-0125-previewにより評価させたものです。Q8量子化モデルを用いた簡易的な評価であることにご留意ください。
### Prompt Template
<pre><code>[INST] <<SYS>>\nあなたは役に立つアシスタントです。\n<</SYS>>\n\n{prompt} [/INST]</code></pre>
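A minimal sketch of running a downloaded GGUF quant with this template via llama-cpp-python (the file name below is an assumption; use whichever quant you fetched):

```python
# Minimal sketch: run a local GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="./japanese-starling-chatv-7b.Q8_0.gguf", n_ctx=2048)  # hypothetical file name
prompt = "[INST] <<SYS>>\nあなたは役に立つアシスタントです。\n<</SYS>>\n\n日本の首都はどこですか? [/INST]"
out = llm(prompt, max_tokens=256, stop=["</s>"])
print(out["choices"][0]["text"])
```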
|
{"language": ["ja"], "license": "apache-2.0", "tags": ["Mistral"], "pipeline_tag": "text-generation"}
|
TFMC/Japanese-Starling-ChatV-7B-GGUF
| null |
[
"gguf",
"Mistral",
"text-generation",
"ja",
"license:apache-2.0",
"region:us"
] | null |
2024-04-14T12:42:00+00:00
|
[] |
[
"ja"
] |
TAGS
#gguf #Mistral #text-generation #ja #license-apache-2.0 #region-us
|
Japanese-Starling-ChatV-7B-GGUF
===============================
GGUF conversion of "<a href="URL
"Japanese-Starling-ChatV-7B" is a Japanese chat model built on top of "<a href="URL originally based on Mistral-7B-v0.1.
I applied the chat vector acquired by subtracting the weights of Mistral-7B-v0.1 from the weights of "<a href="URL to this model.
このモデルはchatntq-ja-7b-v1.0をベースにした7Bパラメータの日本語チャットモデルです。高性能の英語モデルであるStarling-LM-7B-betaの重みからMistral-7B-v0.1の重みを差し引くことで得たchat vectorを適用しています(<a href="URL>ブログ記事)。
### Performance
Scores on "<a href="URL benchmark for the instruction-tuned Japanese models evaluated by GPT-4-0125-preview. Please note that this is a simplified evaluation using the Q8 quantized models.
このスコアはinstruction-tuningを行った日本語モデルのベンチマーク「ELYZA-tasks-100」を使い、GPT-4-0125-previewにより評価させたものです。Q8量子化モデルを用いた簡易的な評価であることにご留意ください。
### Prompt Template
```
[INST] <<SYS>>\nあなたは役に立つアシスタントです。\n<</SYS>>\n\n{prompt} [/INST]
```
|
[
"### Performance\n\n\n\nScores on \"<a href=\"URL benchmark for the instruction-tuned Japanese models evaluated by GPT-4-0125-preview. Please note that this is a simplified evaluation using the Q8 quantized models.\n\n\nこのスコアはinstruction-tuningを行った日本語モデルのベンチマーク「ELYZA-tasks-100」を使い、GPT-4-0125-previewにより評価させたものです。Q8量子化モデルを用いた簡易的な評価であることにご留意ください。",
"### Prompt Template\n\n\n\n```\n[INST] <<SYS>>\\nあなたは役に立つアシスタントです。\\n<</SYS>>\\n\\n{prompt} [/INST]\n```"
] |
[
"TAGS\n#gguf #Mistral #text-generation #ja #license-apache-2.0 #region-us \n",
"### Performance\n\n\n\nScores on \"<a href=\"URL benchmark for the instruction-tuned Japanese models evaluated by GPT-4-0125-preview. Please note that this is a simplified evaluation using the Q8 quantized models.\n\n\nこのスコアはinstruction-tuningを行った日本語モデルのベンチマーク「ELYZA-tasks-100」を使い、GPT-4-0125-previewにより評価させたものです。Q8量子化モデルを用いた簡易的な評価であることにご留意ください。",
"### Prompt Template\n\n\n\n```\n[INST] <<SYS>>\\nあなたは役に立つアシスタントです。\\n<</SYS>>\\n\\n{prompt} [/INST]\n```"
] |
text-generation
|
transformers
|
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
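For reference, SLERP interpolates along the great circle between two weight tensors rather than the straight line used by plain averaging; a minimal NumPy sketch of the idea (illustrative only, not mergekit's exact implementation):

```python
# Illustrative spherical linear interpolation between two flattened weight vectors.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    a_unit = a / np.linalg.norm(a)
    b_unit = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))  # angle between the vectors
    if omega < eps:  # nearly colinear: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```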
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-2-Pro-Mistral-7B
- model: WizardLM/WizardMath-7B-V1.1
merge_method: slerp
base_model: NousResearch/Hermes-2-Pro-Mistral-7B
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
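To reproduce the merge, the YAML above can be saved (as `config.yml`, a name chosen here for illustration) and passed to mergekit's command-line entry point; a sketch assuming `pip install mergekit` and an available GPU:

```python
# Sketch: invoke mergekit's CLI from Python (assumes mergekit is installed).
import subprocess

subprocess.run(["mergekit-yaml", "config.yml", "./merged-model", "--cuda"], check=True)
```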
|
{"library_name": "transformers", "tags": ["mergekit", "merge"], "base_model": ["NousResearch/Hermes-2-Pro-Mistral-7B", "WizardLM/WizardMath-7B-V1.1"]}
|
mergekit-community/mergekit-slerp-llfrpky
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-2-Pro-Mistral-7B",
"base_model:WizardLM/WizardMath-7B-V1.1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:48:49+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* NousResearch/Hermes-2-Pro-Mistral-7B
* WizardLM/WizardMath-7B-V1.1
### Configuration
The following YAML configuration was used to produce this model:
|
[
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #mergekit #merge #conversational #base_model-NousResearch/Hermes-2-Pro-Mistral-7B #base_model-WizardLM/WizardMath-7B-V1.1 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# merge\n\nThis is a merge of pre-trained language models created using mergekit.",
"## Merge Details",
"### Merge Method\n\nThis model was merged using the SLERP merge method.",
"### Models Merged\n\nThe following models were included in the merge:\n* NousResearch/Hermes-2-Pro-Mistral-7B\n* WizardLM/WizardMath-7B-V1.1",
"### Configuration\n\nThe following YAML configuration was used to produce this model:"
] |
text-generation
|
transformers
|
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: OpenPipe/mistral-ft-optimized-1218
layer_range: [0, 32]
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: OpenPipe/mistral-ft-optimized-1218
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "JoPmt/NeuralPipe-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
{"tags": ["merge", "mergekit", "lazymergekit", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B"], "base_model": ["OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B"]}
|
JoPmt/NeuralPipe-7B-slerp
| null |
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"OpenPipe/mistral-ft-optimized-1218",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"base_model:OpenPipe/mistral-ft-optimized-1218",
"base_model:mlabonne/NeuralHermes-2.5-Mistral-7B",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:48:50+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #OpenPipe/mistral-ft-optimized-1218 #mlabonne/NeuralHermes-2.5-Mistral-7B #base_model-OpenPipe/mistral-ft-optimized-1218 #base_model-mlabonne/NeuralHermes-2.5-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# NeuralPipe-7B-slerp
NeuralPipe-7B-slerp is a merge of the following models using LazyMergekit:
* OpenPipe/mistral-ft-optimized-1218
* mlabonne/NeuralHermes-2.5-Mistral-7B
## Configuration
## Usage
|
[
"# NeuralPipe-7B-slerp\n\nNeuralPipe-7B-slerp is a merge of the following models using LazyMergekit:\n* OpenPipe/mistral-ft-optimized-1218\n* mlabonne/NeuralHermes-2.5-Mistral-7B",
"## Configuration",
"## Usage"
] |
[
"TAGS\n#transformers #safetensors #mistral #text-generation #merge #mergekit #lazymergekit #OpenPipe/mistral-ft-optimized-1218 #mlabonne/NeuralHermes-2.5-Mistral-7B #base_model-OpenPipe/mistral-ft-optimized-1218 #base_model-mlabonne/NeuralHermes-2.5-Mistral-7B #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# NeuralPipe-7B-slerp\n\nNeuralPipe-7B-slerp is a merge of the following models using LazyMergekit:\n* OpenPipe/mistral-ft-optimized-1218\n* mlabonne/NeuralHermes-2.5-Mistral-7B",
"## Configuration",
"## Usage"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# en_vi_envit5-base_doc_train
This model is a fine-tuned version of [VietAI/envit5-base](https://huggingface.co/VietAI/envit5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.37.2
- Pytorch 1.12.1+cu116
- Datasets 2.18.0
- Tokenizers 0.15.1
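A minimal inference sketch, assuming the checkpoint follows envit5's convention of language-prefixed inputs (`"en: ..."` for English source text):

```python
# Minimal sketch: English -> Vietnamese translation with the fine-tuned checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "yuufong/en_vi_envit5-base_doc_train"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("en: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```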
|
{"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/envit5-base", "model-index": [{"name": "en_vi_envit5-base_doc_train", "results": []}]}
|
yuufong/en_vi_envit5-base_doc_train
| null |
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/envit5-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:50:39+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/envit5-base #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# en_vi_envit5-base_doc_train
This model is a fine-tuned version of VietAI/envit5-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Framework versions
- Transformers 4.37.2
- Pytorch 1.12.1+cu116
- Datasets 2.18.0
- Tokenizers 0.15.1
|
[
"# en_vi_envit5-base_doc_train\n\nThis model is a fine-tuned version of VietAI/envit5-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20",
"### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 1.12.1+cu116\n- Datasets 2.18.0\n- Tokenizers 0.15.1"
] |
[
"TAGS\n#transformers #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-VietAI/envit5-base #license-mit #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# en_vi_envit5-base_doc_train\n\nThis model is a fine-tuned version of VietAI/envit5-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 16\n- eval_batch_size: 16\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 256\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 20",
"### Framework versions\n\n- Transformers 4.37.2\n- Pytorch 1.12.1+cu116\n- Datasets 2.18.0\n- Tokenizers 0.15.1"
] |
null | null |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-xl-absa-multitask-laptops
This model is a fine-tuned version of [ybelkada/flan-t5-xl-sharded-bf16](https://huggingface.co/ybelkada/flan-t5-xl-sharded-bf16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.7611 | 0.32 | 200 | 3.2781 |
| 1.4428 | 0.63 | 400 | 0.4469 |
| 0.4548 | 0.95 | 600 | 0.2874 |
| 0.3146 | 1.26 | 800 | 0.2316 |
| 0.2675 | 1.58 | 1000 | 0.2096 |
| 0.2438 | 1.9 | 1200 | 0.1935 |
| 0.2244 | 2.21 | 1400 | 0.1714 |
| 0.2127 | 2.53 | 1600 | 0.1587 |
| 0.1927 | 2.84 | 1800 | 0.1541 |
| 0.1787 | 3.16 | 2000 | 0.1467 |
| 0.1715 | 3.48 | 2200 | 0.1350 |
| 0.1625 | 3.79 | 2400 | 0.1357 |
| 0.1579 | 4.11 | 2600 | 0.1304 |
| 0.1522 | 4.42 | 2800 | 0.1222 |
| 0.1417 | 4.74 | 3000 | 0.1204 |
| 0.1399 | 5.06 | 3200 | 0.1234 |
| 0.1303 | 5.37 | 3400 | 0.1211 |
| 0.1326 | 5.69 | 3600 | 0.1093 |
| 0.1241 | 6.0 | 3800 | 0.1090 |
| 0.1212 | 6.32 | 4000 | 0.1127 |
| 0.1189 | 6.64 | 4200 | 0.1045 |
| 0.124 | 6.95 | 4400 | 0.1077 |
| 0.1152 | 7.27 | 4600 | 0.1024 |
| 0.1141 | 7.58 | 4800 | 0.1008 |
| 0.1072 | 7.9 | 5000 | 0.1043 |
| 0.1146 | 8.21 | 5200 | 0.1011 |
| 0.1071 | 8.53 | 5400 | 0.0996 |
| 0.1149 | 8.85 | 5600 | 0.0990 |
| 0.1088 | 9.16 | 5800 | 0.1003 |
| 0.1064 | 9.48 | 6000 | 0.0988 |
| 0.1049 | 9.79 | 6200 | 0.0986 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"tags": ["generated_from_trainer"], "base_model": "ybelkada/flan-t5-xl-sharded-bf16", "model-index": [{"name": "flan-t5-xl-absa-multitask-laptops", "results": []}]}
|
Shakhovak/flan-t5-xl-absa-multitask-laptops
| null |
[
"generated_from_trainer",
"base_model:ybelkada/flan-t5-xl-sharded-bf16",
"region:us"
] | null |
2024-04-14T12:54:34+00:00
|
[] |
[] |
TAGS
#generated_from_trainer #base_model-ybelkada/flan-t5-xl-sharded-bf16 #region-us
|
flan-t5-xl-absa-multitask-laptops
=================================
This model is a fine-tuned version of ybelkada/flan-t5-xl-sharded-bf16 on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 0.0986
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 5e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 10
### Training results
### Framework versions
* Transformers 4.38.2
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#generated_from_trainer #base_model-ybelkada/flan-t5-xl-sharded-bf16 #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 5e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 10",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.38.2\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama2-7b-wo-medication_qa-sft
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
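A minimal sketch of the same configuration in code; the output directory is assumed, and the 4-GPU launch is what reconciles the per-device batch size with the reported totals.

```python
# Hedged sketch: the listed hyperparameters as TrainingArguments.
# With 4 GPUs, per-device batch 4 and gradient accumulation 4 give the
# reported total_train_batch_size of 4 * 4 * 4 = 64.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2-7b-wo-medication_qa-sft",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
# Launch across 4 GPUs, e.g. with `accelerate launch` or `torchrun`.
```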
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1052 | 0.92 | 6 | 1.2976 |
| 0.9691 | 2.0 | 13 | 1.2458 |
| 0.871 | 2.77 | 18 | 1.2333 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft"], "base_model": "meta-llama/Llama-2-7b-hf", "model-index": [{"name": "llama2-7b-wo-medication_qa-sft", "results": []}]}
|
Minbyul/llama2-7b-wo-medication_qa-sft
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:meta-llama/Llama-2-7b-hf",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:54:42+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-meta-llama/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
llama2-7b-wo-medication\_qa-sft
===============================
This model is a fine-tuned version of meta-llama/Llama-2-7b-hf on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2333
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-meta-llama/Llama-2-7b-hf #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
text2text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
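Until the authors fill this section in, a generic loading sketch for a T5 text2text checkpoint is given below; the repository id comes from this record's metadata, and the prompt format is purely illustrative.

```python
# Hedged sketch: generic loading for a T5 text2text checkpoint.
# The "summarize:" prompt is illustrative; the card does not document the task.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "OvrK12/t5large521109"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("summarize: An example input sentence.", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```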
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
OvrK12/t5large521109
| null |
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:55:33+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #t5 #text2text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
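For orientation only, a standard 🤗 Transformers load is sketched below; note this is an assumption — the low-bit quantized weights may require the green-bit-llm toolkit from the linked repository rather than a vanilla load.

```python
# Hedged sketch, not a documented path: the quantized weights may need the
# green-bit-llm toolkit from the linked repo instead of plain transformers.
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "GreenBitAI/Qwen-1.5-1.8B-channel-mix-bpw-2.2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```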
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-1.8B-channel-mix-bpw-2.2
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:56:50+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
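Until the authors fill this in, a generic causal-LM loading sketch is shown below; the repository id comes from this record's metadata and the prompt is purely illustrative.

```python
# Hedged sketch: generic causal-LM loading; the card documents no usage,
# so the prompt and generation settings here are illustrative only.
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "tom-brady/sn6_248"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Once upon a time", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```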
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tom-brady/sn6_248
| null |
[
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T12:56:57+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #stablelm #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-1.8B-channel-mix-bpw-2.5
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:57:07+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-1.8B-channel-mix-bpw-3.0
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:57:17+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-4B-channel-mix-bpw-2.2
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:57:26+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-4B-channel-mix-bpw-2.5
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:57:32+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-4B-channel-mix-bpw-3.0
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:57:43+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-7B-channel-mix-bpw-2.2
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:57:54+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DISTILBERT-IMDB-HUGGINGFACE
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set (a quick-check snippet follows the list):
- Loss: 0.3762
- Accuracy: 0.912
- F1: 0.9113
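For a quick sanity check of these numbers, the checkpoint can be queried through the text-classification pipeline; this is a hedged sketch, and the label names returned depend on the model's config on the Hub.

```python
# Hedged sketch: querying the uploaded checkpoint. Returned label names
# and scores depend on the actual config shipped with the model.
from transformers import pipeline

clf = pipeline("text-classification", model="cyh002/DISTILBERT-IMDB-HUGGINGFACE")
print(clf("A surprisingly moving film with terrific performances."))
```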
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged code sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
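A minimal sketch of these settings as 🤗 Transformers training arguments; the output directory is assumed, as the card does not state it.

```python
# Hedged sketch: the listed hyperparameters as TrainingArguments.
# output_dir is assumed; dataset and metric wiring are not documented here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="DISTILBERT-IMDB-HUGGINGFACE",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```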
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "DISTILBERT-IMDB-HUGGINGFACE", "results": []}]}
|
cyh002/DISTILBERT-IMDB-HUGGINGFACE
| null |
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T12:57:54+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# DISTILBERT-IMDB-HUGGINGFACE
This model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3762
- Accuracy: 0.912
- F1: 0.9113
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
[
"# DISTILBERT-IMDB-HUGGINGFACE\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3762\n- Accuracy: 0.912\n- F1: 0.9113",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #distilbert #text-classification #generated_from_trainer #base_model-distilbert-base-uncased #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# DISTILBERT-IMDB-HUGGINGFACE\n\nThis model is a fine-tuned version of distilbert-base-uncased on an unknown dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 0.3762\n- Accuracy: 0.912\n- F1: 0.9113",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.40.0\n- Pytorch 2.2.1+cu121\n- Datasets 2.19.0\n- Tokenizers 0.19.1"
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-7B-channel-mix-bpw-2.5
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:58:00+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-7B-channel-mix-bpw-3.0
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:58:07+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-14B-channel-mix-bpw-2.2
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:58:16+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-14B-channel-mix-bpw-2.5
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:58:21+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
text-generation
|
transformers
|
# GreenBit LLMs
These are GreenBitAI's pretrained **low-bit** LLMs, achieving extreme compression while retaining strong performance.
Please refer to our [Github page](https://github.com/GreenBitAI/green-bit-llm) for the code to run the model and more information.
|
{"license": "apache-2.0"}
|
GreenBitAI/Qwen-1.5-14B-channel-mix-bpw-3.0
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T12:58:28+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GreenBit LLMs
These are GreenBitAI's pretrained low-bit LLMs, achieving extreme compression while retaining strong performance.
Please refer to our Github page for the code to run the model and more information.
|
[
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GreenBit LLMs\n\nThis is GreenBitAI's pretrained low-bit LLMs with extreme compression yet still strong performance.\n\nPlease refer to our Github page for the code to run the model and more information."
] |
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
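Until the authors fill this in, a loading sketch grounded in this record's metadata (base model meta-llama/Llama-2-7b-hf, PEFT 0.10.0) is given below; access to the gated Llama-2 weights is assumed.

```python
# Hedged sketch: attaching this PEFT adapter to its base model, per the
# record's metadata. Access to the gated Llama-2 base weights is assumed.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "cgihlstorf/llama27b-finetuned_32_1_0.0003_alternate_no_output_random_train_nonrandom_val"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)
```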
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.10.0
|
{"library_name": "peft", "base_model": "meta-llama/Llama-2-7b-hf"}
|
cgihlstorf/llama27b-finetuned_32_1_0.0003_alternate_no_output_random_train_nonrandom_val
| null |
[
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"region:us"
] | null |
2024-04-14T13:01:46+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-hf #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.10.0
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
[
"TAGS\n#peft #arxiv-1910.09700 #base_model-meta-llama/Llama-2-7b-hf #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.10.0"
] |
null | null |
iMatrix gguf quants of a newer finetune of Mixtral-8x22B
EdgeQuants are still underway; the IQ4XS version is recommended. Make sure to combine/merge the parts back together before using:
```
cat tessIQ4XS.gguf.part* > tessIQ4XS.gguf
```
Then use it with a llama.cpp build from April 12 or older; the April 13 release introduced massive changes and broke inference for MoE models.
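For reference, a minimal invocation sketch (assuming a `main` binary built from an April 12 llama.cpp checkout; the prompt and token count are placeholders):
```
./main -m tessIQ4XS.gguf -p "Describe Mixtral-8x22B." -n 256
```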
|
{"license": "apache-2.0", "base_model": "migtissera/Tess-2.0-Mixtral-8x22B"}
|
nisten/Tess-Mixtral-8x22B-imatrix-gguf
| null |
[
"gguf",
"base_model:migtissera/Tess-2.0-Mixtral-8x22B",
"license:apache-2.0",
"region:us"
] | null |
2024-04-14T13:02:00+00:00
|
[] |
[] |
TAGS
#gguf #base_model-migtissera/Tess-2.0-Mixtral-8x22B #license-apache-2.0 #region-us
|
iMatrix gguf quants of a newer finetune of Mixtral-8x22B
EdgeQuants are still underway; the IQ4XS version is recommended. Make sure to combine/merge the parts back together before using.
Then use it with the URL version from April 12 or older; the April 13 release introduced massive changes and broke inference for MoE models.
|
[] |
[
"TAGS\n#gguf #base_model-migtissera/Tess-2.0-Mixtral-8x22B #license-apache-2.0 #region-us \n"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
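A minimal, hedged sketch (assuming a standard Hub-hosted causal LM; the model id is taken from this repo, and no further configuration is documented in this card):

```python
# Sketch only: generic text-generation loading via transformers Auto classes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tomaszki/mistral-32"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello,", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```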
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tomaszki/mistral-32
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T13:04:08+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
XsoraS/outputs3
| null |
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T13:06:24+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tomaszki/mistral-32-a
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T13:06:44+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
XsoraS/outputs2
| null |
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T13:07:09+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #gpt_neox #text-generation #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# meditron-7b-wo-medication_qa-sft
This model is a fine-tuned version of [epfl-llm/meditron-7b](https://huggingface.co/epfl-llm/meditron-7b) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
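For illustration, a hedged `TrainingArguments` sketch mirroring the values above (batch sizes are per device, so 4 devices x batch 4 x grad-accum 4 = 64 total; the output directory name is an assumption):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="meditron-7b-wo-medication_qa-sft",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=3,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
)
```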
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1713 | 0.92 | 6 | 1.3683 |
| 1.0185 | 2.0 | 13 | 1.3435 |
| 0.9011 | 2.77 | 18 | 1.3274 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"license": "llama2", "tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft"], "base_model": "epfl-llm/meditron-7b", "model-index": [{"name": "meditron-7b-wo-medication_qa-sft", "results": []}]}
|
Minbyul/meditron-7b-wo-medication_qa-sft
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:epfl-llm/meditron-7b",
"license:llama2",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T13:08:36+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-epfl-llm/meditron-7b #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
meditron-7b-wo-medication\_qa-sft
=================================
This model is a fine-tuned version of epfl-llm/meditron-7b on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
* Loss: 1.3274
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-epfl-llm/meditron-7b #license-llama2 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/jondurbin/bagel-7b-v0.5
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
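As a quick, hedged sketch (not taken from this card), a single-file quant can usually be loaded with the `llama-cpp-python` bindings; the file name below comes from the table that follows, while the context size and prompt are illustrative assumptions.

```python
from llama_cpp import Llama

# Illustrative only: load one of the static quants listed below.
llm = Llama(
    model_path="bagel-7b-v0.5.Q4_K_S.gguf",  # file from the table below
    n_ctx=4096,  # assumed context window; check the base model card
)
out = llm("Write a haiku about bagels.", max_tokens=64)
print(out["choices"][0]["text"])
```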
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/bagel-7b-v0.5-GGUF/resolve/main/bagel-7b-v0.5.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "datasets": ["ai2_arc", "allenai/ultrafeedback_binarized_cleaned", "argilla/distilabel-intel-orca-dpo-pairs", "jondurbin/airoboros-3.2", "codeparrot/apps", "facebook/belebele", "bluemoon-fandom-1-1-rp-cleaned", "boolq", "camel-ai/biology", "camel-ai/chemistry", "camel-ai/math", "camel-ai/physics", "jondurbin/contextual-dpo-v0.1", "jondurbin/gutenberg-dpo-v0.1", "jondurbin/py-dpo-v0.1", "jondurbin/truthy-dpo-v0.1", "LDJnr/Capybara", "jondurbin/cinematika-v0.1", "WizardLM/WizardLM_evol_instruct_70k", "glaiveai/glaive-function-calling-v2", "jondurbin/gutenberg-dpo-v0.1", "grimulkan/LimaRP-augmented", "lmsys/lmsys-chat-1m", "ParisNeo/lollms_aware_dataset", "TIGER-Lab/MathInstruct", "Muennighoff/natural-instructions", "openbookqa", "kingbri/PIPPA-shareGPT", "piqa", "Vezora/Tested-22k-Python-Alpaca", "ropes", "cakiki/rosetta-code", "Open-Orca/SlimOrca", "b-mc2/sql-create-context", "squad_v2", "mattpscott/airoboros-summarization", "migtissera/Synthia-v1.3", "unalignment/toxic-dpo-v0.2", "WhiteRabbitNeo/WRN-Chapter-1", "WhiteRabbitNeo/WRN-Chapter-2", "winogrande"], "base_model": "jondurbin/bagel-7b-v0.5", "quantized_by": "mradermacher"}
|
mradermacher/bagel-7b-v0.5-GGUF
| null |
[
"transformers",
"gguf",
"en",
"dataset:ai2_arc",
"dataset:allenai/ultrafeedback_binarized_cleaned",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"dataset:jondurbin/airoboros-3.2",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:bluemoon-fandom-1-1-rp-cleaned",
"dataset:boolq",
"dataset:camel-ai/biology",
"dataset:camel-ai/chemistry",
"dataset:camel-ai/math",
"dataset:camel-ai/physics",
"dataset:jondurbin/contextual-dpo-v0.1",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:jondurbin/py-dpo-v0.1",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:LDJnr/Capybara",
"dataset:jondurbin/cinematika-v0.1",
"dataset:WizardLM/WizardLM_evol_instruct_70k",
"dataset:glaiveai/glaive-function-calling-v2",
"dataset:grimulkan/LimaRP-augmented",
"dataset:lmsys/lmsys-chat-1m",
"dataset:ParisNeo/lollms_aware_dataset",
"dataset:TIGER-Lab/MathInstruct",
"dataset:Muennighoff/natural-instructions",
"dataset:openbookqa",
"dataset:kingbri/PIPPA-shareGPT",
"dataset:piqa",
"dataset:Vezora/Tested-22k-Python-Alpaca",
"dataset:ropes",
"dataset:cakiki/rosetta-code",
"dataset:Open-Orca/SlimOrca",
"dataset:b-mc2/sql-create-context",
"dataset:squad_v2",
"dataset:mattpscott/airoboros-summarization",
"dataset:migtissera/Synthia-v1.3",
"dataset:unalignment/toxic-dpo-v0.2",
"dataset:WhiteRabbitNeo/WRN-Chapter-1",
"dataset:WhiteRabbitNeo/WRN-Chapter-2",
"dataset:winogrande",
"base_model:jondurbin/bagel-7b-v0.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T13:09:41+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #en #dataset-ai2_arc #dataset-allenai/ultrafeedback_binarized_cleaned #dataset-argilla/distilabel-intel-orca-dpo-pairs #dataset-jondurbin/airoboros-3.2 #dataset-codeparrot/apps #dataset-facebook/belebele #dataset-bluemoon-fandom-1-1-rp-cleaned #dataset-boolq #dataset-camel-ai/biology #dataset-camel-ai/chemistry #dataset-camel-ai/math #dataset-camel-ai/physics #dataset-jondurbin/contextual-dpo-v0.1 #dataset-jondurbin/gutenberg-dpo-v0.1 #dataset-jondurbin/py-dpo-v0.1 #dataset-jondurbin/truthy-dpo-v0.1 #dataset-LDJnr/Capybara #dataset-jondurbin/cinematika-v0.1 #dataset-WizardLM/WizardLM_evol_instruct_70k #dataset-glaiveai/glaive-function-calling-v2 #dataset-grimulkan/LimaRP-augmented #dataset-lmsys/lmsys-chat-1m #dataset-ParisNeo/lollms_aware_dataset #dataset-TIGER-Lab/MathInstruct #dataset-Muennighoff/natural-instructions #dataset-openbookqa #dataset-kingbri/PIPPA-shareGPT #dataset-piqa #dataset-Vezora/Tested-22k-Python-Alpaca #dataset-ropes #dataset-cakiki/rosetta-code #dataset-Open-Orca/SlimOrca #dataset-b-mc2/sql-create-context #dataset-squad_v2 #dataset-mattpscott/airoboros-summarization #dataset-migtissera/Synthia-v1.3 #dataset-unalignment/toxic-dpo-v0.2 #dataset-WhiteRabbitNeo/WRN-Chapter-1 #dataset-WhiteRabbitNeo/WRN-Chapter-2 #dataset-winogrande #base_model-jondurbin/bagel-7b-v0.5 #license-apache-2.0 #endpoints_compatible #region-us
|
About
-----
static quants of URL
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #en #dataset-ai2_arc #dataset-allenai/ultrafeedback_binarized_cleaned #dataset-argilla/distilabel-intel-orca-dpo-pairs #dataset-jondurbin/airoboros-3.2 #dataset-codeparrot/apps #dataset-facebook/belebele #dataset-bluemoon-fandom-1-1-rp-cleaned #dataset-boolq #dataset-camel-ai/biology #dataset-camel-ai/chemistry #dataset-camel-ai/math #dataset-camel-ai/physics #dataset-jondurbin/contextual-dpo-v0.1 #dataset-jondurbin/gutenberg-dpo-v0.1 #dataset-jondurbin/py-dpo-v0.1 #dataset-jondurbin/truthy-dpo-v0.1 #dataset-LDJnr/Capybara #dataset-jondurbin/cinematika-v0.1 #dataset-WizardLM/WizardLM_evol_instruct_70k #dataset-glaiveai/glaive-function-calling-v2 #dataset-grimulkan/LimaRP-augmented #dataset-lmsys/lmsys-chat-1m #dataset-ParisNeo/lollms_aware_dataset #dataset-TIGER-Lab/MathInstruct #dataset-Muennighoff/natural-instructions #dataset-openbookqa #dataset-kingbri/PIPPA-shareGPT #dataset-piqa #dataset-Vezora/Tested-22k-Python-Alpaca #dataset-ropes #dataset-cakiki/rosetta-code #dataset-Open-Orca/SlimOrca #dataset-b-mc2/sql-create-context #dataset-squad_v2 #dataset-mattpscott/airoboros-summarization #dataset-migtissera/Synthia-v1.3 #dataset-unalignment/toxic-dpo-v0.2 #dataset-WhiteRabbitNeo/WRN-Chapter-1 #dataset-WhiteRabbitNeo/WRN-Chapter-2 #dataset-winogrande #base_model-jondurbin/bagel-7b-v0.5 #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null |
peft
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-13b-hf-platypus-lamini-vxxiii-chat
This model is a fine-tuned version of [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
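Because this repository holds a PEFT adapter rather than full model weights, inference would typically attach the adapter to the base model. A minimal sketch, assuming fp16 and automatic device placement (neither is stated on this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    torch_dtype=torch.float16,  # assumption
    device_map="auto",          # assumption
)
model = PeftModel.from_pretrained(base, "NassimB/llama-13b-hf-platypus-lamini-vxxiii-chat")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")
```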
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.1
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
|
{"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "meta-llama/Llama-2-13b-hf", "model-index": [{"name": "llama-13b-hf-platypus-lamini-vxxiii-chat", "results": []}]}
|
NassimB/llama-13b-hf-platypus-lamini-vxxiii-chat
| null |
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-13b-hf",
"region:us"
] | null |
2024-04-14T13:11:08+00:00
|
[] |
[] |
TAGS
#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-13b-hf #region-us
|
# llama-13b-hf-platypus-lamini-vxxiii-chat
This model is a fine-tuned version of meta-llama/Llama-2-13b-hf on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.1
- Pytorch 2.2.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.1
|
[
"# llama-13b-hf-platypus-lamini-vxxiii-chat\n\nThis model is a fine-tuned version of meta-llama/Llama-2-13b-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.1\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1"
] |
[
"TAGS\n#peft #safetensors #trl #sft #generated_from_trainer #base_model-meta-llama/Llama-2-13b-hf #region-us \n",
"# llama-13b-hf-platypus-lamini-vxxiii-chat\n\nThis model is a fine-tuned version of meta-llama/Llama-2-13b-hf on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0003\n- train_batch_size: 1\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 16\n- total_train_batch_size: 16\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: cosine\n- lr_scheduler_warmup_steps: 100\n- num_epochs: 1\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- PEFT 0.8.2\n- Transformers 4.37.1\n- Pytorch 2.2.0+cu121\n- Datasets 2.14.6\n- Tokenizers 0.15.1"
] |
null |
transformers
|
## About
<!-- ### quantize_version: 1 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/LeroyDyer/Mixtral_AI_CyberTron_Ultra
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF/resolve/main/Mixtral_AI_CyberTron_Ultra.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
{"language": ["en"], "license": "apache-2.0", "library_name": "transformers", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "code", "medical ", "farmer", "doctor", "Mega-Series", "Cyber-Series", "Role-Play", "Self-Rag", "ThinkingBot"], "datasets": ["gretelai/synthetic_text_to_sql", "HuggingFaceTB/cosmopedia", "teknium/OpenHermes-2.5", "Open-Orca/SlimOrca", "Open-Orca/OpenOrca", "cognitivecomputations/dolphin-coder", "databricks/databricks-dolly-15k", "yahma/alpaca-cleaned", "uonlp/CulturaX", "mwitiderrick/SwahiliPlatypus", "swahili", "Rogendo/English-Swahili-Sentence-Pairs", "ise-uiuc/Magicoder-Evol-Instruct-110K", "meta-math/MetaMathQA"], "base_model": "LeroyDyer/Mixtral_AI_CyberTron_Ultra", "quantized_by": "mradermacher"}
|
mradermacher/Mixtral_AI_CyberTron_Ultra-GGUF
| null |
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"code",
"medical ",
"farmer",
"doctor",
"Mega-Series",
"Cyber-Series",
"Role-Play",
"Self-Rag",
"ThinkingBot",
"en",
"dataset:gretelai/synthetic_text_to_sql",
"dataset:HuggingFaceTB/cosmopedia",
"dataset:teknium/OpenHermes-2.5",
"dataset:Open-Orca/SlimOrca",
"dataset:Open-Orca/OpenOrca",
"dataset:cognitivecomputations/dolphin-coder",
"dataset:databricks/databricks-dolly-15k",
"dataset:yahma/alpaca-cleaned",
"dataset:uonlp/CulturaX",
"dataset:mwitiderrick/SwahiliPlatypus",
"dataset:swahili",
"dataset:Rogendo/English-Swahili-Sentence-Pairs",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"dataset:meta-math/MetaMathQA",
"base_model:LeroyDyer/Mixtral_AI_CyberTron_Ultra",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T13:13:04+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #text-generation-inference #unsloth #mistral #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #endpoints_compatible #region-us
|
About
-----
static quants of URL
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned them; feel free to request them by opening a Community Discussion.
Usage
-----
If you are unsure how to use GGUF files, refer to one of TheBloke's
READMEs for
more details, including how to concatenate multi-part files.
Provided Quants
---------------
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
!URL
And here are Artefact2's thoughts on the matter:
URL
FAQ / Model Request
-------------------
See URL for some answers to
questions you might have and/or if you want some other model quantized.
Thanks
------
I thank my company, nethype GmbH, for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
|
[] |
[
"TAGS\n#transformers #gguf #text-generation-inference #unsloth #mistral #trl #code #medical #farmer #doctor #Mega-Series #Cyber-Series #Role-Play #Self-Rag #ThinkingBot #en #dataset-gretelai/synthetic_text_to_sql #dataset-HuggingFaceTB/cosmopedia #dataset-teknium/OpenHermes-2.5 #dataset-Open-Orca/SlimOrca #dataset-Open-Orca/OpenOrca #dataset-cognitivecomputations/dolphin-coder #dataset-databricks/databricks-dolly-15k #dataset-yahma/alpaca-cleaned #dataset-uonlp/CulturaX #dataset-mwitiderrick/SwahiliPlatypus #dataset-swahili #dataset-Rogendo/English-Swahili-Sentence-Pairs #dataset-ise-uiuc/Magicoder-Evol-Instruct-110K #dataset-meta-math/MetaMathQA #base_model-LeroyDyer/Mixtral_AI_CyberTron_Ultra #license-apache-2.0 #endpoints_compatible #region-us \n"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
NouRed/BioMed-Gemma-2b
| null |
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T13:15:09+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
fill-mask
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-cybersecurity_readme2
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7293
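The card does not include example code; below is a minimal, hypothetical usage sketch with the `fill-mask` pipeline (the masked sentence is illustrative only):

```python
from transformers import pipeline

# RoBERTa-style models use the <mask> token.
fill = pipeline("fill-mask", model="LDDon/distilroberta-base-finetuned-cybersecurity_readme2")
print(fill("This repository contains a <mask> scanner for open ports."))
```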
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 125 | 1.8151 |
| No log | 2.0 | 250 | 1.7528 |
| No log | 3.0 | 375 | 1.7018 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "distilroberta-base", "model-index": [{"name": "distilroberta-base-finetuned-cybersecurity_readme2", "results": []}]}
|
LDDon/distilroberta-base-finetuned-cybersecurity_readme2
| null |
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:distilroberta-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T13:15:35+00:00
|
[] |
[] |
TAGS
#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
distilroberta-base-finetuned-cybersecurity\_readme2
===================================================
This model is a fine-tuned version of distilroberta-base on an unknown dataset.
It achieves the following results on the evaluation set:
* Loss: 1.7293
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 8
* eval\_batch\_size: 8
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3.0
### Training results
### Framework versions
* Transformers 4.40.0.dev0
* Pytorch 2.2.1+cu121
* Datasets 2.18.0
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #tensorboard #safetensors #roberta #fill-mask #generated_from_trainer #base_model-distilroberta-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 8\n* eval\\_batch\\_size: 8\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3.0",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.40.0.dev0\n* Pytorch 2.2.1+cu121\n* Datasets 2.18.0\n* Tokenizers 0.15.2"
] |
text-generation
|
transformers
|
## Model Introduction:
CTIQwen is a fine-tuned version of the Qwen 1.5 model, optimized for generating knowledge graphs from cybersecurity text. It is designed to extend the capabilities of CTIKG.
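The card does not document a prompt format; as a hedged sketch, the model can presumably be prompted like any Qwen 1.5 chat model. The triple-extraction instruction and sample report below are illustrative assumptions, and loading the AWQ weights requires the `autoawq` package:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "revealcti/cti-qwen1.5-70b-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

report = "APT29 used spear-phishing emails to deliver the WellMess malware."
messages = [{"role": "user",
             "content": f"Extract (entity, relation, entity) triples from: {report}"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```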
## Model Progress:
In April 2024, we committed the first version of CTIQwen.
|
{"language": ["en"], "license": "other"}
|
revealcti/cti-qwen1.5-70b-awq
| null |
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T13:15:54+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #safetensors #qwen2 #text-generation #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Model Introduction:
CTIQwen is a fine-tuned version of the Qwen 1.5 model, optimized for generating knowledge graphs from cybersecurity text. It is designed to extend the capabilities of CTIKG.
## Model Progress:
In April 2024, we committed the first version of CTIQwen.
|
[
"## Model Introduction:\nCTIQwen is a fine-tuned model of the Qwen 1.5 model, specifically optimized for knowledge graph generation from cybersecurity text. It is designed to extend the capabilities of CTIKG.",
"## Model Progress:\nApril 2024, we committed the first version of CTIQwen."
] |
[
"TAGS\n#transformers #safetensors #qwen2 #text-generation #conversational #en #license-other #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Model Introduction:\nCTIQwen is a fine-tuned model of the Qwen 1.5 model, specifically optimized for knowledge graph generation from cybersecurity text. It is designed to extend the capabilities of CTIKG.",
"## Model Progress:\nApril 2024, we committed the first version of CTIQwen."
] |
text-generation
|
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
tomaszki/mistral-32-b
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T13:16:44+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #conversational #arxiv-1910.09700 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
null |
transformers
|
# Uploaded model
- **Developed by:** darshit0503
- **License:** apache-2.0
- **Finetuned from model:** Open-Orca/Mistral-7B-OpenOrca
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
{"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "gguf"], "datasets": ["Amod/mental_health_counseling_conversations"], "base_model": "Open-Orca/Mistral-7B-OpenOrca"}
|
darshit0503/openorca_Ft_mental_health_counselling_GGUF
| null |
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"dataset:Amod/mental_health_counseling_conversations",
"base_model:Open-Orca/Mistral-7B-OpenOrca",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T13:18:39+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #gguf #mistral #text-generation-inference #unsloth #en #dataset-Amod/mental_health_counseling_conversations #base_model-Open-Orca/Mistral-7B-OpenOrca #license-apache-2.0 #endpoints_compatible #region-us
|
# Uploaded model
- Developed by: darshit0503
- License: apache-2.0
- Finetuned from model: Open-Orca/Mistral-7B-OpenOrca
This mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
<img src="URL width="200"/>
|
[
"# Uploaded model\n\n- Developed by: darshit0503\n- License: apache-2.0\n- Finetuned from model : Open-Orca/Mistral-7B-OpenOrca\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
[
"TAGS\n#transformers #gguf #mistral #text-generation-inference #unsloth #en #dataset-Amod/mental_health_counseling_conversations #base_model-Open-Orca/Mistral-7B-OpenOrca #license-apache-2.0 #endpoints_compatible #region-us \n",
"# Uploaded model\n\n- Developed by: darshit0503\n- License: apache-2.0\n- Finetuned from model : Open-Orca/Mistral-7B-OpenOrca\n\nThis mistral model was trained 2x faster with Unsloth and Huggingface's TRL library.\n\n<img src=\"URL width=\"200\"/>"
] |
null |
transformers
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
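This section is left empty on the card; below is a hedged sketch, assuming the RT-DETR classes available in recent `transformers` releases and an illustrative input image:

```python
import torch
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor

model_id = "danelcsb/rtdetr-finetuned-balloon"
processor = RTDetrImageProcessor.from_pretrained(model_id)
model = RTDetrForObjectDetection.from_pretrained(model_id)

image = Image.open("balloons.jpg")  # illustrative input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits/boxes into scored detections at the original image size.
results = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.5
)
print(results)
```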
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
{"library_name": "transformers", "tags": []}
|
danelcsb/rtdetr-finetuned-balloon
| null |
[
"transformers",
"safetensors",
"rt_detr",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null |
2024-04-14T13:20:54+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#transformers #safetensors #rt_detr #arxiv-1910.09700 #endpoints_compatible #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
This is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
[
"TAGS\n#transformers #safetensors #rt_detr #arxiv-1910.09700 #endpoints_compatible #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\nThis is the model card of a transformers model that has been pushed on the Hub. This model card has been automatically generated.\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact"
] |
text-generation
|
transformers
|
## Matter 8x22B - 0.2 (Mixtral 8x22B 0.2 Finetune)
Matter 8x22B 0.2 is finetuned on the **Matter 0.2 dataset**, which is curated from over 35 datasets analyzing >6B tokens.
### Recommended Usage for best results
System Prompt - `You are a helpful assistant`
### Training
Prompt format: This model uses ChatML prompt format.
```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
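As a hedged sketch (assuming the repository ships a ChatML chat template and that your hardware can hold an 8x22B model), the format above can be produced with the tokenizer's built-in chat template; the generation settings here are illustrative, not the authors':

```python
# Illustrative usage sketch for the ChatML prompt format described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "0-hero/Matter-0.2-8x22B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Summarize ChatML in one sentence."},
]
# apply_chat_template renders the <|im_start|>/<|im_end|> structure shown above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```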
### Function Calling
The model also supports function calling, using additional special tokens.

Function call tokens
- `<tool_call>` - Function call start token
- `</tool_call>` - Function call end token
Function call response tokens
- `<tool_response>` - Function response start token
- `</tool_response>` - Function response end token
Example
```
<|im_start|>system
You are a helpful assistant with access to the following functions. Use them if required -
{ "name": "get_news_headlines",
"description": "Get the latest news headlines",
"parameters":
{ "type": "object",
"properties":
{ "country":
{ "type": "string",
"description": "The country for which to fetch news"
}
},
"required": [ "country" ]
}
}
<|im_end|>
<|im_start|>user
Can you tell me the latest news headlines for the United States?<|im_end|>
<|im_start|>assistant
<tool_call>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}</tool_call><|im_end|>
<|im_start|>user
<tool_response>{
"headlines":
[
"Biden announces new vaccine mandates",
"Hurricane Ida devastates Louisiana",
"Apple unveils new iPhone",
"NASA's Perseverance rover collects first Mars rock sample"
]
}</tool_response>
<|im_end|>
<|im_start|>assistant
Here are the latest news headlines for the United States:
1. Biden announces new vaccine mandates
2. Hurricane Ida devastates Louisiana
3. Apple unveils new iPhone
4. NASA's Perseverance rover collects first Mars rock sample
<|im_end|>
```
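In application code, the assistant's `<tool_call>` span is typically extracted and parsed before dispatching to the named function. A minimal illustrative parser (assumed, not provided by the model authors; note the example payload single-quotes its `arguments` string, so `ast.literal_eval` is more forgiving than `json.loads` here):

```python
# Illustrative parser for the tool-call span emitted by the model.
import ast
import re

def extract_tool_call(text: str):
    # Return the parsed payload of the first <tool_call>...</tool_call> span.
    match = re.search(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL)
    if match is None:
        return None
    return ast.literal_eval(match.group(1))

sample = """<tool_call>{"name": "get_news_headlines", "arguments": '{"country": "United States"}'}</tool_call>"""
call = extract_tool_call(sample)
print(call["name"], call["arguments"])  # get_news_headlines {"country": "United States"}
```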
|
{"language": ["en"], "license": "apache-2.0", "datasets": ["0-hero/Matter-0.2-alpha-Slim-A"]}
|
0-hero/Matter-0.2-8x22B
| null |
[
"transformers",
"pytorch",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:0-hero/Matter-0.2-alpha-Slim-A",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T13:20:56+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #mixtral #text-generation #conversational #en #dataset-0-hero/Matter-0.2-alpha-Slim-A #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## Matter 8x22B - 0.2 (Mixtral 8x22B 0.2 Finetune)
Matter 8x22B 0.2 is finetuned on the Matter 0.2 dataset, which is curated from over 35 datasets analyzing >6B tokens.
### Recommended Usage for best results
System Prompt - 'You are a helpful assistant'
### Training
Prompt format: This model uses ChatML prompt format.
### Function Calling
The model also supports function calling, using additional special tokens.

Function call tokens
- '<tool_call>' - Function call start token
- '</tool_call>' - Function call end token
Function call response tokens
- '<tool_response>' - Function response start token
- '</tool_response>' - Function response end token
Example
|
[
"## Matter 8x22B - 0.2 (Mixtral 8x22B 0.2 Finetune)\n\nMatter 8x22B 0.2 is finetune on the Matter 0.2 dataset, which is curated from over 35 datsets analyzing >6B tokens",
"### Recommended Usage for best results\nSystem Prompt - 'You are a helpful assistant'",
"### Training\n\nPrompt format: This model uses ChatML prompt format.",
"### Function Calling\n\nModel also supports function calling. Additional tokens for function calling \n\nModel function call tokens\n- '<tool_call>' - Function call start token\n- '</tool_call>' - Function call end token\n\nFunction call response tokens\n- '<tool_response>' - Function response start token\n- '</tool_response>' - Function response end token\n\nExample"
] |
[
"TAGS\n#transformers #pytorch #safetensors #mixtral #text-generation #conversational #en #dataset-0-hero/Matter-0.2-alpha-Slim-A #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## Matter 8x22B - 0.2 (Mixtral 8x22B 0.2 Finetune)\n\nMatter 8x22B 0.2 is finetune on the Matter 0.2 dataset, which is curated from over 35 datsets analyzing >6B tokens",
"### Recommended Usage for best results\nSystem Prompt - 'You are a helpful assistant'",
"### Training\n\nPrompt format: This model uses ChatML prompt format.",
"### Function Calling\n\nModel also supports function calling. Additional tokens for function calling \n\nModel function call tokens\n- '<tool_call>' - Function call start token\n- '</tool_call>' - Function call end token\n\nFunction call response tokens\n- '<tool_response>' - Function response start token\n- '</tool_response>' - Function response end token\n\nExample"
] |
text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# selfbiorag-7b-wo-medication_qa-sft
This model is a fine-tuned version of [dmis-lab/selfbiorag_7b](https://huggingface.co/dmis-lab/selfbiorag_7b) on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5396
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
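For reference only, these settings map onto `transformers.TrainingArguments` roughly as follows (a sketch, not the authors' launch script; the 4-GPU distributed setup is assumed to come from `torchrun`/`accelerate` and is not shown):

```python
# Rough reconstruction of the reported hyperparameters as TrainingArguments.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="selfbiorag-7b-wo-medication_qa-sft",
    learning_rate=2e-5,
    per_device_train_batch_size=4,   # x 4 GPUs x 4 accumulation steps = 64 total
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,              # Adam betas/epsilon are the defaults listed above
)
```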
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.5074 | 0.92 | 6 | 1.5828 |
| 1.2223 | 2.0 | 13 | 1.5458 |
| 1.1253 | 2.77 | 18 | 1.5396 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.1.2
- Datasets 2.14.6
- Tokenizers 0.15.2
|
{"tags": ["alignment-handbook", "trl", "sft", "generated_from_trainer", "trl", "sft", "generated_from_trainer"], "datasets": ["HuggingFaceH4/deita-10k-v0-sft"], "base_model": "dmis-lab/selfbiorag_7b", "model-index": [{"name": "selfbiorag-7b-wo-medication_qa-sft", "results": []}]}
|
Minbyul/selfbiorag-7b-wo-medication_qa-sft
| null |
[
"transformers",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"dataset:HuggingFaceH4/deita-10k-v0-sft",
"base_model:dmis-lab/selfbiorag_7b",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2024-04-14T13:22:57+00:00
|
[] |
[] |
TAGS
#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-dmis-lab/selfbiorag_7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
selfbiorag-7b-wo-medication\_qa-sft
===================================
This model is a fine-tuned version of dmis-lab/selfbiorag\_7b on the HuggingFaceH4/deita-10k-v0-sft dataset.
It achieves the following results on the evaluation set:
* Loss: 1.5396
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 4
* eval\_batch\_size: 4
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 4
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 64
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.1
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.39.0.dev0
* Pytorch 2.1.2
* Datasets 2.14.6
* Tokenizers 0.15.2
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
[
"TAGS\n#transformers #safetensors #llama #text-generation #alignment-handbook #trl #sft #generated_from_trainer #dataset-HuggingFaceH4/deita-10k-v0-sft #base_model-dmis-lab/selfbiorag_7b #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 4\n* eval\\_batch\\_size: 4\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 4\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 64\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.1\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.39.0.dev0\n* Pytorch 2.1.2\n* Datasets 2.14.6\n* Tokenizers 0.15.2"
] |
null |
peft
|
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
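One plausible loading path, sketched under assumptions (the repo contains a standard PEFT adapter for the 4-bit-quantized Mistral-7B base named in this card; the prompt is illustrative):

```python
# Hypothetical loading sketch -- not verified against this adapter.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb, device_map="auto"
)
model = PeftModel.from_pretrained(base, "alexgrigoras/mistral_7b_finetuned_ts_sdg_1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(base.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```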
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
{"library_name": "peft", "base_model": "mistralai/Mistral-7B-v0.1"}
|
alexgrigoras/mistral_7b_finetuned_ts_sdg_1
| null |
[
"peft",
"safetensors",
"mistral",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"4-bit",
"region:us"
] | null |
2024-04-14T13:23:39+00:00
|
[
"1910.09700"
] |
[] |
TAGS
#peft #safetensors #mistral #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #4-bit #region-us
|
# Model Card for Model ID
## Model Details
### Model Description
- Developed by:
- Funded by [optional]:
- Shared by [optional]:
- Model type:
- Language(s) (NLP):
- License:
- Finetuned from model [optional]:
### Model Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Downstream Use [optional]
### Out-of-Scope Use
## Bias, Risks, and Limitations
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
## Training Details
### Training Data
### Training Procedure
#### Preprocessing [optional]
#### Training Hyperparameters
- Training regime:
#### Speeds, Sizes, Times [optional]
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
#### Factors
#### Metrics
### Results
#### Summary
## Model Examination [optional]
## Environmental Impact
Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
- Hardware Type:
- Hours used:
- Cloud Provider:
- Compute Region:
- Carbon Emitted:
## Technical Specifications [optional]
### Model Architecture and Objective
### Compute Infrastructure
#### Hardware
#### Software
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Model Card Authors [optional]
## Model Card Contact
### Framework versions
- PEFT 0.7.1
|
[
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.1"
] |
[
"TAGS\n#peft #safetensors #mistral #arxiv-1910.09700 #base_model-mistralai/Mistral-7B-v0.1 #4-bit #region-us \n",
"# Model Card for Model ID",
"## Model Details",
"### Model Description\n\n\n\n\n\n- Developed by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Model type: \n- Language(s) (NLP): \n- License: \n- Finetuned from model [optional]:",
"### Model Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Downstream Use [optional]",
"### Out-of-Scope Use",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.",
"## How to Get Started with the Model\n\nUse the code below to get started with the model.",
"## Training Details",
"### Training Data",
"### Training Procedure",
"#### Preprocessing [optional]",
"#### Training Hyperparameters\n\n- Training regime:",
"#### Speeds, Sizes, Times [optional]",
"## Evaluation",
"### Testing Data, Factors & Metrics",
"#### Testing Data",
"#### Factors",
"#### Metrics",
"### Results",
"#### Summary",
"## Model Examination [optional]",
"## Environmental Impact\n\n\n\nCarbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).\n\n- Hardware Type: \n- Hours used: \n- Cloud Provider: \n- Compute Region: \n- Carbon Emitted:",
"## Technical Specifications [optional]",
"### Model Architecture and Objective",
"### Compute Infrastructure",
"#### Hardware",
"#### Software\n\n\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Model Card Authors [optional]",
"## Model Card Contact",
"### Framework versions\n\n- PEFT 0.7.1"
] |
unconditional-image-generation
|
diffusers
|
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline

# Load the trained denoising diffusion pipeline from the Hub.
pipeline = DDPMPipeline.from_pretrained('Alorel/0414-sd-class-butterflies-64')

# Sampling is unconditional: each call denoises random noise into a new butterfly.
image = pipeline().images[0]
image
```
|
{"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]}
|
Alorel/0414-sd-class-butterflies-64
| null |
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null |
2024-04-14T13:23:55+00:00
|
[] |
[] |
TAGS
#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us
|
# Model Card for Unit 1 of the Diffusion Models Class
This model is a diffusion model for unconditional image generation of cute .
## Usage
|
[
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] |
[
"TAGS\n#diffusers #safetensors #pytorch #unconditional-image-generation #diffusion-models-class #license-mit #diffusers-DDPMPipeline #region-us \n",
"# Model Card for Unit 1 of the Diffusion Models Class \n\nThis model is a diffusion model for unconditional image generation of cute .",
"## Usage"
] |