| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
manojbaniya/final_v1 | manojbaniya | "2025-02-25T20:10:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:unsloth/gemma-2-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-02-25T20:10:20Z" | ---
base_model: unsloth/gemma-2-2b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** manojbaniya
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-2b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
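The card includes no usage snippet; the following minimal inference sketch is an addition, assuming the repository holds full merged weights loadable with plain transformers.

```python
# Hedged sketch: basic text generation with the uploaded checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "manojbaniya/final_v1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```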
|
havinash-ai/bae5b9bb-e8df-434d-b3dd-bd327674554e | havinash-ai | "2025-01-24T08:06:46Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"region:us"
] | null | "2025-01-24T07:57:28Z" | ---
library_name: peft
license: other
base_model: huggyllama/llama-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bae5b9bb-e8df-434d-b3dd-bd327674554e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: huggyllama/llama-7b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 39b56ccbbe49fd54_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/39b56ccbbe49fd54_train_data.json
type:
field_instruction: instruction
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/bae5b9bb-e8df-434d-b3dd-bd327674554e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/39b56ccbbe49fd54_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d0ab2b29-eb50-4068-82e6-4c60f93eb42d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d0ab2b29-eb50-4068-82e6-4c60f93eb42d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# bae5b9bb-e8df-434d-b3dd-bd327674554e
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
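Since `library_name` is `peft`, this repository most likely stores only the LoRA adapter weights. A minimal loading sketch under that assumption (not part of the original card):

```python
# Hedged sketch: attach the LoRA adapter from this repo to the base model named in the config above.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-7b")
model = PeftModel.from_pretrained(base, "havinash-ai/bae5b9bb-e8df-434d-b3dd-bd327674554e")
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
```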
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0001 | 1 | nan |
| 0.0 | 0.0004 | 3 | nan |
| 0.0 | 0.0007 | 6 | nan |
| 0.0 | 0.0011 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Amarillys/Sakura-1B8-Qwen2beta-v0.9.1-GGUF | Amarillys | "2024-06-07T18:06:55Z" | 0 | 0 | null | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2024-06-07T18:06:26Z" | ---
license: cc-by-nc-sa-4.0
---
quantized from https://huggingface.co/SakuraLLM/Sakura-1B8-Qwen2beta-v0.9.1-GGUF |
whathefish/new_version123 | whathefish | "2023-02-14T19:02:04Z" | 4 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-02-14T12:24:07Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: whathefish/new_version123
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# whathefish/new_version123
This model is a fine-tuned version of [distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7433
- Validation Loss: 0.7673
- Train Accuracy: 0.3333
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 0.00032, 'decay_steps': 12, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.7433 | 0.7673 | 0.3333 | 0 |
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.9.0
- Datasets 2.9.0
- Tokenizers 0.13.2
|
arcee-ai/Clown-DPO-Extended | arcee-ai | "2024-03-18T21:37:51Z" | 119 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"CorticalStack/pastiche-crown-clown-7b-dare-dpo",
"base_model:CorticalStack/pastiche-crown-clown-7b-dare-dpo",
"base_model:finetune:CorticalStack/pastiche-crown-clown-7b-dare-dpo",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-13T12:47:49Z" | ---
license: apache-2.0
base_model:
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
library_name: transformers
tags:
- mergekit
- merge
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
---
# Extended Model
This is an extension of a pre-trained language model created using [mergekit](https://github.com/arcee-ai/mergekit).

# Merge Details
### Merge Method
This method employs mergekit's passthrough method to expand blocks within the "CorticalStack/pastiche-crown-clown-7b-dare-dpo" model. For every 5th layer,
a new layer is added, with the `o_proj` and `down_proj` parameters of these added layers initialized to zero, mirroring the approach used in LLaMA Pro.
**Important:** this configuration has not undergone fine-tuning. Therefore, when fine-tuning, ensure that only every 5th layer is trainable, while all other layers remain frozen.
### Models Merged
The following models were included in the merge:
* [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare-dpo)
## Evaluation
### OpenLLM
CorticalStack/pastiche-crown-clown-7b-dare-dpo OpenLLM benchmark suite
| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|---|---:|---:|---:|---:|---:|---:|---:|
| [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/arcee-ai/Clown-DPO-Extended/) | 76.93 | 72.18 | 88.90 | 63.45 | 79.15 | 85.71 | 72.18 |
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 0
- 4
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 3
- 4
parameters:
scale:
- filter: o_proj
value: 0
- filter: down_proj
value: 0
- value: 1
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 4
- 8
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 7
- 8
parameters:
scale:
- filter: o_proj
value: 0
- filter: down_proj
value: 0
- value: 1
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 8
- 12
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 11
- 12
parameters:
scale:
- filter: o_proj
value: 0
- filter: down_proj
value: 0
- value: 1
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 12
- 16
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 15
- 16
parameters:
scale:
- filter: o_proj
value: 0
- filter: down_proj
value: 0
- value: 1
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 16
- 20
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 19
- 20
parameters:
scale:
- filter: o_proj
value: 0
- filter: down_proj
value: 0
- value: 1
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 20
- 24
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 23
- 24
parameters:
scale:
- filter: o_proj
value: 0
- filter: down_proj
value: 0
- value: 1
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 24
- 28
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 27
- 28
parameters:
scale:
- filter: o_proj
value: 0
- filter: down_proj
value: 0
- value: 1
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 28
- 32
- sources:
- model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
layer_range:
- 31
- 32
parameters:
scale:
- filter: o_proj
value: 0
- filter: down_proj
value: 0
- value: 1
merge_method: passthrough
dtype: bfloat16
```
# Function to freeze layers
```python
from transformers import AutoModelForCausalLM


def enable_grad_only_every_nth(model, n):
    """
    This function configures the specified model to enable gradient calculations exclusively for every nth layer, starting
    from the first layer (0-indexed), to accommodate newly added blocks for training. Concurrently, it freezes the gradients
    for all other components of the model, including the embedding layers and the model's head. This setup is particularly
    useful for fine-tuning processes where only a subset of layers are targeted for updates, ensuring efficient training and
    adaptation of newly integrated layers while maintaining the pre-trained behavior of other model components.
    """
    # Freeze embeddings.
    for param in model.model.embed_tokens.parameters():
        param.requires_grad = False
    # Freeze lm_head.
    for param in model.lm_head.parameters():
        param.requires_grad = False
    # Enable gradients for every nth layer only.
    layers = model.model.layers  # Access the ModuleList containing the layers
    for index, layer in enumerate(layers):
        if (index + 1) % n == 0:  # Enables gradients for every nth layer, starting from the layer after the 0th
            for param in layer.parameters():
                param.requires_grad = True
        else:
            for param in layer.parameters():
                param.requires_grad = False


model = AutoModelForCausalLM.from_pretrained(
    "arcee-ai/Mistral-7B-Instruct-v0.2-expanded"
)

# Update layer gradients; specify the correct value for n based on your model's architecture.
n = 5
enable_grad_only_every_nth(model, n)
```
|
netrer/distilbert-base-uncased-finetuned-emotion | netrer | "2024-09-22T10:02:11Z" | 90 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-09-22T08:16:11Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2283
- Accuracy: 0.921
- F1: 0.9210
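A short usage sketch (not part of the original card); the label names it returns depend on the undocumented training data:

```python
# Hedged sketch: run the fine-tuned classifier with the text-classification pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="netrer/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am so happy today!"))
```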
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8368 | 1.0 | 250 | 0.3275 | 0.907 | 0.9057 |
| 0.2517 | 2.0 | 500 | 0.2283 | 0.921 | 0.9210 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
emilykang/Gemma_medquad-symptoms_lora | emilykang | "2024-05-16T14:33:33Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-05-16T11:56:39Z" | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: Gemma_medquad-symptoms_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma_medquad-symptoms_lora
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.1
- Pytorch 2.0.1+cu117
- Datasets 2.19.0
- Tokenizers 0.19.1 |
VinayHajare/dqn-SpaceInvadersNoFrameskip-v4 | VinayHajare | "2023-11-11T07:03:01Z" | 0 | 1 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-11T07:02:18Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 460.50 +/- 94.56
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VinayHajare -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga VinayHajare -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga VinayHajare
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
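As an alternative to the RL Zoo CLI shown above, the checkpoint can also be loaded directly in Python. This sketch is an addition; the filename follows the usual RL Zoo naming convention and is an assumption:

```python
# Hedged sketch: load the trained DQN agent without going through the RL Zoo CLI.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

checkpoint = load_from_hub(
    repo_id="VinayHajare/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed RL Zoo filename
)
model = DQN.load(checkpoint)
```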
|
pdmct/q-FrozenLake-v1-4x4-noSlippery | pdmct | "2023-01-06T02:16:23Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-06T02:16:18Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym  # the `load_from_hub` helper below is the one defined in the Hugging Face Deep RL course notebook

model = load_from_hub(repo_id="pdmct/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
erkam/sg2im-256-bs-16x2-cc-snr-depth-const | erkam | "2023-10-07T00:19:47Z" | 0 | 0 | diffusers | [
"diffusers",
"sg-to-image",
"scene-graph",
"stable-diffusion",
"stable-diffusion-diffusers",
"lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-10-01T02:01:43Z" |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
tags:
- sg-to-image
- scene-graph
- stable-diffusion
- stable-diffusion-diffusers
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - erkam/sg2im-256-bs-16x2-cc-snr-depth-const
These are LoRA adaptation weights for stabilityai/stable-diffusion-2. The weights were fine-tuned on the erkam/clevr-full-v5 dataset. You can find some example images in the following.
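No usage snippet is included in the card; a minimal sketch for applying these LoRA weights with diffusers follows (compatibility of the stored weights with `load_lora_weights`, and the prompt, are assumptions):

```python
# Hedged sketch: load the base Stable Diffusion 2 pipeline and apply the LoRA weights from this repo.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
)
pipe.load_lora_weights("erkam/sg2im-256-bs-16x2-cc-snr-depth-const")
pipe = pipe.to("cuda")

image = pipe("a scene with a red cube next to a blue sphere").images[0]  # example prompt, not from the card
image.save("sample.png")
```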
|
amazingvince/cryptid_full_tune | amazingvince | "2024-06-25T08:19:33Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.3",
"base_model:finetune:mistralai/Mistral-7B-v0.3",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-25T04:28:41Z" | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: cryptid_full_tune
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# cryptid_full_tune
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.1.2+cu118
- Datasets 2.19.1
- Tokenizers 0.19.1
|
ingeol/dpo_test_1000 | ingeol | "2023-10-03T07:25:16Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-10-03T07:24:31Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
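For reference, the settings listed above can be expressed as a `transformers` `BitsAndBytesConfig` when reloading the base model; this reconstruction is an addition, not part of the original card:

```python
# Hedged sketch: the quantization config above, rebuilt as a BitsAndBytesConfig object.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```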
### Framework versions
- PEFT 0.5.0
|
atitaarora/segformer-b0-scene-parse-150 | atitaarora | "2023-04-27T20:33:35Z" | 32 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"segformer",
"generated_from_trainer",
"dataset:scene_parse_150",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2023-04-27T20:32:49Z" | ---
license: other
tags:
- generated_from_trainer
datasets:
- scene_parse_150
model-index:
- name: segformer-b0-scene-parse-150
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-scene-parse-150
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the scene_parse_150 dataset.
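The card does not show inference code; a hedged sketch using the image-segmentation pipeline is below (it assumes the repository includes the matching image processor config, and the image path is hypothetical):

```python
# Hedged sketch: semantic segmentation with the fine-tuned SegFormer checkpoint.
from transformers import pipeline

segmenter = pipeline("image-segmentation", model="atitaarora/segformer-b0-scene-parse-150")
results = segmenter("room.jpg")  # hypothetical local image path
for segment in results:
    print(segment["label"])
```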
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
|
andrr/setfit_healthcare | andrr | "2023-05-31T11:36:43Z" | 3 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-05-25T13:26:18Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# andrr/setfit_healthcare
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("andrr/setfit_healthcare")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
chujiezheng/Llama3-70B-Chinese-Chat-ExPO | chujiezheng | "2024-05-27T18:24:32Z" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"en",
"zh",
"arxiv:2404.16792",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-25T06:59:42Z" | ---
license: llama3
language:
- en
- zh
---
# Llama3-70B-Chinese-Chat-ExPO
The extrapolated (ExPO) model based on [`shenzhi-wang/Llama3-70B-Chinese-Chat`](https://huggingface.co/shenzhi-wang/Llama3-70B-Chinese-Chat) and [`meta-llama/Meta-Llama-3-70B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct), as in the "[Weak-to-Strong Extrapolation Expedites Alignment](https://arxiv.org/abs/2404.16792)" paper.
Specifically, we obtain this model by extrapolating **(alpha = 0.3)** from the weights of the SFT and DPO/RLHF checkpoints, achieving superior alignment with human preference.
**Note:** This is an experimental model, as I have not comprehensively evaluated its Chinese ability. **Unexpected issues may occur when we apply extrapolation to the DPO/RLHF alignment training for new languages (e.g., Chinese).**
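For clarity, the extrapolation described above amounts to a simple per-parameter operation; the sketch below merely restates it and is not taken from the model repository:

```python
# Hedged sketch of the ExPO update: theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft), here with alpha = 0.3.
def extrapolate_weights(sft_state, dpo_state, alpha=0.3):
    """Extrapolate beyond the DPO/RLHF checkpoint, moving away from the SFT checkpoint."""
    return {
        name: dpo_state[name] + alpha * (dpo_state[name] - sft_state[name])
        for name in dpo_state
    }
```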
## Evaluation Results
Evaluation results on the **AlpacaEval 2.0** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_alpaca)):
| | Win Rate (Ori) | LC Win Rate (Ori) | Win Rate (+ ExPO) | LC Win Rate (+ ExPO) |
| ------------------------------------ | -------------- | ----------------- | ----------------- | -------------------- |
| `HuggingFaceH4/zephyr-7b-alpha` | 6.7% | 10.0% | **10.6%** | **13.6%** |
| `HuggingFaceH4/zephyr-7b-beta` | 10.2% | 13.2% | **11.1%** | **14.0%** |
| `berkeley-nest/Starling-LM-7B-alpha` | 15.0% | 18.3% | **18.2%** | **19.5%** |
| `Nexusflow/Starling-LM-7B-beta` | 26.6% | 25.8% | **29.6%** | **26.4%** |
| `snorkelai/Snorkel-Mistral-PairRM` | 24.7% | 24.0% | **28.8%** | **26.4%** |
| `RLHFlow/LLaMA3-iterative-DPO-final` | 29.2% | 36.0% | **32.7%** | **37.8%** |
| `internlm/internlm2-chat-1.8b` | 3.8% | 4.0% | **5.2%** | **4.3%** |
| `internlm/internlm2-chat-7b` | 20.5% | 18.3% | **28.1%** | **22.7%** |
| `internlm/internlm2-chat-20b` | 36.1% | 24.9% | **46.2%** | **27.2%** |
| `allenai/tulu-2-dpo-7b` | 8.5% | 10.2% | **11.5%** | **11.7%** |
| `allenai/tulu-2-dpo-13b` | 11.2% | 15.5% | **15.6%** | **17.6%** |
| `allenai/tulu-2-dpo-70b` | 15.4% | 21.2% | **23.0%** | **25.7%** |
Evaluation results on the **MT-Bench** benchmark (you can find the evaluation outputs on the [official GitHub repo](https://github.com/chujiezheng/LLM-Extrapolation/tree/main/results_mtbench)):
| | Original | + ExPO |
| ------------------------------------ | -------- | -------- |
| `HuggingFaceH4/zephyr-7b-alpha` | 6.85 | **6.87** |
| `HuggingFaceH4/zephyr-7b-beta` | 7.02 | **7.06** |
| `berkeley-nest/Starling-LM-7B-alpha` | 7.82 | **7.91** |
| `Nexusflow/Starling-LM-7B-beta` | 8.10 | **8.18** |
| `snorkelai/Snorkel-Mistral-PairRM` | 7.63 | **7.69** |
| `RLHFlow/LLaMA3-iterative-DPO-final` | 8.08 | **8.45** |
| `internlm/internlm2-chat-1.8b` | 5.17 | **5.26** |
| `internlm/internlm2-chat-7b` | 7.72 | **7.80** |
| `internlm/internlm2-chat-20b` | 8.13 | **8.26** |
| `allenai/tulu-2-dpo-7b` | 6.35 | **6.38** |
| `allenai/tulu-2-dpo-13b` | 7.00 | **7.26** |
| `allenai/tulu-2-dpo-70b` | 7.79 | **8.03** |
|
kar-saaragh/a2c-PandaReachDense-v3 | kar-saaragh | "2024-01-09T06:39:43Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-01-09T06:35:02Z" | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.18 +/- 0.09
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
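Until the official snippet is filled in, a hedged loading sketch is shown below; the checkpoint filename follows common huggingface_sb3 naming and is an assumption:

```python
# Hedged sketch: download and load the trained A2C agent.
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(
    repo_id="kar-saaragh/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",  # assumed filename
)
model = A2C.load(checkpoint)
```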
|
HumanoidTeam/act_aloha_multimalteser_pick_one_lerobot_fix_55k | HumanoidTeam | "2025-02-26T11:18:36Z" | 0 | 0 | null | [
"safetensors",
"dataset:HumanoidTeam/many_maltesers_task",
"region:us"
] | null | "2025-02-26T09:29:18Z" | ---
datasets:
- HumanoidTeam/many_maltesers_task
--- |
philip-hightech/373e6607-6205-4b81-80f5-d8383ca90f49 | philip-hightech | "2025-01-28T13:35:13Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M",
"base_model:adapter:unsloth/SmolLM2-360M",
"license:apache-2.0",
"region:us"
] | null | "2025-01-28T13:34:15Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 373e6607-6205-4b81-80f5-d8383ca90f49
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3c8ad6e90bcfaf6a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3c8ad6e90bcfaf6a_train_data.json
type:
field_instruction: prompt
field_output: ground_truth_chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: philip-hightech/373e6607-6205-4b81-80f5-d8383ca90f49
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/3c8ad6e90bcfaf6a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66cd6a43-a890-4d2e-aaf1-709c8084f2d1
wandb_project: Mine-SN56-21-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 66cd6a43-a890-4d2e-aaf1-709c8084f2d1
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 373e6607-6205-4b81-80f5-d8383ca90f49
This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0010 | 1 | nan |
| 0.0 | 0.0131 | 13 | nan |
| 0.0 | 0.0262 | 26 | nan |
| 0.0 | 0.0393 | 39 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
great0001/910390c8-0578-4e21-837d-162c868341d1 | great0001 | "2025-01-29T01:54:45Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"gemma2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/gemma-2-2b-it",
"base_model:adapter:unsloth/gemma-2-2b-it",
"license:gemma",
"region:us"
] | null | "2025-01-29T01:49:09Z" | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-2-2b-it
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 910390c8-0578-4e21-837d-162c868341d1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/gemma-2-2b-it
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5a772dda137685e1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/5a772dda137685e1_train_data.json
type:
field_input: Hist
field_instruction: Text
field_output: Pred
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: great0001/910390c8-0578-4e21-837d-162c868341d1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/5a772dda137685e1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5a7089d0-f943-4a05-b1ed-0497aa1d2982
wandb_project: Birthday-SN56-33-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5a7089d0-f943-4a05-b1ed-0497aa1d2982
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 910390c8-0578-4e21-837d-162c868341d1
This model is a fine-tuned version of [unsloth/gemma-2-2b-it](https://huggingface.co/unsloth/gemma-2-2b-it) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6874
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB (8-bit AdamW via bitsandbytes) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8584 | 0.0003 | 1 | 0.8700 |
| 0.7673 | 0.0033 | 13 | 0.7328 |
| 0.616 | 0.0065 | 26 | 0.6996 |
| 0.6871 | 0.0098 | 39 | 0.6874 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
zzzdonut/cs224s-ascend-finetuned | zzzdonut | "2024-05-29T22:20:39Z" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-23T01:12:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
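No snippet is provided in the card; given the automatic-speech-recognition tag, one hedged starting point might be the following (the repository contents and the audio path are assumptions):

```python
# Hedged sketch: transcribe an audio file with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="zzzdonut/cs224s-ascend-finetuned")
print(asr("sample.wav")["text"])  # hypothetical audio path
```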
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_relevance | teven | "2022-09-21T15:46:24Z" | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2022-09-21T15:46:16Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_relevance
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_relevance')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_relevance')
model = AutoModel.from_pretrained('teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_relevance')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=teven/cross_all-mpnet-base-v2_finetuned_WebNLG2020_relevance)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
Irina-Igmm/chat-immo-test1 | Irina-Igmm | "2024-06-16T09:40:41Z" | 108 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-06-16T09:40:02Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
thang1943/bge-base-financial-matryoshka | thang1943 | "2025-02-17T10:21:17Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6300",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-17T10:20:57Z" | ---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:6300
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: Termination of the Arm Share Purchase Agreement In February 2022,
NVIDIA and SoftBank Group Corp., or SoftBank, announced the termination of the
Share Purchase Agreement whereby NVIDIA would have acquired Arm Limited, or Arm,
from SoftBank. The parties agreed to terminate because of significant regulatory
challenges preventing the completion of the transaction.
sentences:
- How did eBay's net revenues from the first quarter of 2023 compare to the last
quarter of 2022?
- Why did NVIDIA and SoftBank terminate their Share Purchase Agreement for acquiring
Arm Limited?
- What effects did the implementation of the Reinvention Plan have on the company's
financial statements in fiscal years 2022 and 2023?
- source_sentence: In the fiscal year 2023, it was disclosed that $1,963 million of
certain accumulated foreign earnings continue to be indefinitely reinvested.
sentences:
- What does the company imply about the severity of the lawsuits and regulatory
proceedings they are involved in?
- How much has been indefinitely reinvested from accumulated foreign earnings as
of fiscal year 2023?
- Are the consolidated financial statements and notes included directly in Item
8 of the Annual Report on Form 10-K?
- source_sentence: The November 2029 fixed-to-floating rate Senior Notes bear interest
at a fixed rate of 6.196%, payable semi-annually, until the interest reset date
on November 17, 2028.
sentences:
- What is the fixed interest rate for the November 2029 fixed-to-floating rate Senior
Notes before the reset date?
- What is the weighted-average remaining term of the financing obligations as of
December 31, 2023?
- How long has Humana participated in the Medicare program for private health plans?
- source_sentence: Our material cash requirements include debt repayment obligations
of $1.9 billion.
sentences:
- What percentage is the initial preferred distribution for the April preferreds
issued by AT&T in 2023?
- What are the two main service segments of The Charles Schwab Corporation?
- What is the total debt repayment obligation mentioned in the financial outline?
- source_sentence: New stores | 131 | | 333 | | 464 | | 311 | | 225 | 536
sentences:
- How many new stores did the Dollar Tree segment open in the fiscal year ending
January 28, 2023?
- How is the discount rate for the Family Dollar goodwill impairment evaluation
determined?
  - What does IBM's 2023 Annual Report to Stockholders include?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: BGE base Financial Matryoshka
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.6628571428571428
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8128571428571428
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8385714285714285
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8871428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6628571428571428
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.270952380952381
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16771428571428568
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0887142857142857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6628571428571428
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8128571428571428
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8385714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8871428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7771376992897233
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7417278911564624
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7459340014094423
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.66
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8114285714285714
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.84
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8871428571428571
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.66
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2704761904761904
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16799999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0887142857142857
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.66
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8114285714285714
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.84
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8871428571428571
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7738952698065006
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7376156462585033
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7416047303260471
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.6671428571428571
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8057142857142857
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8371428571428572
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.88
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6671428571428571
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26857142857142857
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1674285714285714
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.088
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6671428571428571
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8057142857142857
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8371428571428572
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.88
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7749410226388818
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7410992063492059
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.745220616023529
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.6342857142857142
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.79
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8314285714285714
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8728571428571429
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6342857142857142
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2633333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1662857142857143
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08728571428571427
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6342857142857142
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.79
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8314285714285714
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8728571428571429
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7567972995851519
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7192930839002263
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7237935936286254
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.6285714285714286
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7671428571428571
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8142857142857143
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8728571428571429
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.6285714285714286
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2557142857142857
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.16285714285714287
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.08728571428571427
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.6285714285714286
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7671428571428571
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.8142857142857143
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8728571428571429
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7483704138772564
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7087936507936506
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7127238799035323
name: Cosine Map@100
---
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
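
If you want to assemble the same stack yourself (for example to swap the pooling strategy), it can be rebuilt from the standard `sentence_transformers.models` helpers. A minimal sketch, assuming the base checkpoint listed above; this is not required for normal use of this model:

```python
from sentence_transformers import SentenceTransformer, models

# Same three-module stack as listed above: BERT encoder -> CLS pooling -> L2 normalization.
word_embedding = models.Transformer("BAAI/bge-base-en-v1.5", max_seq_length=512, do_lower_case=True)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), pooling_mode="cls")
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding, pooling, normalize])
print(model)  # prints an architecture summary matching the one shown above
```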
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("thang1943/bge-base-financial-matryoshka")
# Run inference
sentences = [
'New stores | 131 | | 333 | | 464 | | 311 | | 225 | 536',
'How many new stores did the Dollar Tree segment open in the fiscal year ending January 28, 2023?',
'How is the discount rate for the Family Dollar goodwill impairment evaluation determined?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
```
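
Because the model was trained with a Matryoshka objective over several output dimensions (768, 512, 256, 128 and 64; see the evaluation below), you can also truncate the embeddings for cheaper storage and retrieval. A minimal sketch, assuming a Sentence Transformers version that supports the `truncate_dim` argument (v2.7 or later):

```python
from sentence_transformers import SentenceTransformer

# Load with embeddings truncated to one of the trained Matryoshka dimensions.
model = SentenceTransformer("thang1943/bge-base-financial-matryoshka", truncate_dim=256)

sentences = [
    "What is the purpose of Visa Direct?",
    "What is the total debt repayment obligation mentioned in the financial outline?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (2, 256)

# Cosine similarity still works on the truncated embeddings.
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([2, 2])
```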
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.6629 | 0.66 | 0.6671 | 0.6343 | 0.6286 |
| cosine_accuracy@3 | 0.8129 | 0.8114 | 0.8057 | 0.79 | 0.7671 |
| cosine_accuracy@5 | 0.8386 | 0.84 | 0.8371 | 0.8314 | 0.8143 |
| cosine_accuracy@10 | 0.8871 | 0.8871 | 0.88 | 0.8729 | 0.8729 |
| cosine_precision@1 | 0.6629 | 0.66 | 0.6671 | 0.6343 | 0.6286 |
| cosine_precision@3 | 0.271 | 0.2705 | 0.2686 | 0.2633 | 0.2557 |
| cosine_precision@5 | 0.1677 | 0.168 | 0.1674 | 0.1663 | 0.1629 |
| cosine_precision@10 | 0.0887 | 0.0887 | 0.088 | 0.0873 | 0.0873 |
| cosine_recall@1 | 0.6629 | 0.66 | 0.6671 | 0.6343 | 0.6286 |
| cosine_recall@3 | 0.8129 | 0.8114 | 0.8057 | 0.79 | 0.7671 |
| cosine_recall@5 | 0.8386 | 0.84 | 0.8371 | 0.8314 | 0.8143 |
| cosine_recall@10 | 0.8871 | 0.8871 | 0.88 | 0.8729 | 0.8729 |
| **cosine_ndcg@10** | **0.7771** | **0.7739** | **0.7749** | **0.7568** | **0.7484** |
| cosine_mrr@10 | 0.7417 | 0.7376 | 0.7411 | 0.7193 | 0.7088 |
| cosine_map@100 | 0.7459 | 0.7416 | 0.7452 | 0.7238 | 0.7127 |
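
A minimal sketch of how such a retrieval evaluation can be run, with a toy corpus standing in for the held-out split (the query and document texts below are illustrative assumptions); in recent Sentence Transformers versions the evaluator returns a dict containing the metrics listed above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("thang1943/bge-base-financial-matryoshka")

# Toy stand-ins for the evaluation split: query id -> text, corpus id -> text.
queries = {"q1": "What is the purpose of Visa Direct?"}
corpus = {
    "d1": "Visa Direct helps facilitate the delivery of funds to eligible cards, "
    "deposit accounts and digital wallets across more than 190 countries and territories.",
    "d2": "The November 2029 Senior Notes bear interest at a fixed rate of 6.196%.",
}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
results = evaluator(model)
print(results)  # e.g. cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, cosine_map@100
```

Loading the model with `truncate_dim=512` (or 256, 128, 64) and re-running the evaluator reproduces the smaller-dimension columns.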
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 6,300 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 45.49 tokens</li><li>max: 371 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 20.33 tokens</li><li>max: 41 tokens</li></ul> |
* Samples:
| positive | anchor |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>In their 2023 forward-looking statements, Goldman Sachs mentioned that results, financial condition, liquidity, and capital actions may differ, possibly materially, from the anticipated results. Important factors include those described in "Risk Factors" in Part I, Item 1A and "Forward-Looking Statements" in Part I, Item 1.</code> | <code>What factors could potentially alter Goldman Sachs' anticipated financial outcomes according to their 2023 forward-looking statements?</code> |
  | <code>Visa Direct is part of Visa's strategy beyond C2B payments and helps facilitate the delivery of funds to eligible cards, deposit accounts and digital wallets across more than 190 countries and territories. Visa Direct supports multiple use cases, such as P2P payments and account-to-account transfers, business and government payouts to individuals or small businesses, merchant settlements and refunds.</code> | <code>What is the purpose of Visa Direct?</code> |
| <code>The Company's international operations are subject to different, and sometimes more stringent, legal and regulatory requirements, which vary widely by jurisdiction, including anti-corruption laws; economic sanctions laws; various privacy, insurance, tax, tariff and trade laws and regulations; corporate governance, privacy, data protection (including the EU's General Data Protection Regulation which began to apply across the EU during 2018), data mining, data transfer, labor and employment, intellectual property, consumer protection and investment laws and regulations; discriminatory licensing procedures; compulsory cessions of reinsurance; required localization of records and funds; higher premium and income taxes; limitations on dividends and repatriation of capital; and requirements for local participation in an insurer's ownership.</code> | <code>What types of laws and regulations govern the international operations of a company?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
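
Expressed in code, this configuration corresponds roughly to the following; a minimal sketch, assuming the Sentence Transformers v3-style loss classes:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# In-batch negatives ranking loss over the (anchor, positive) pairs.
inner_loss = MultipleNegativesRankingLoss(model)

# Apply the same ranking objective at every truncated dimension, mirroring the parameters above.
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,
)
```

The wrapped loss is then passed to the trainer together with the hyperparameters listed below.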
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 1
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2
- `per_device_eval_batch_size`: 1
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:-------:|:--------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.0032 | 10 | 0.271 | - | - | - | - | - |
| 0.0063 | 20 | 0.0452 | - | - | - | - | - |
| 0.0095 | 30 | 0.2152 | - | - | - | - | - |
| 0.0127 | 40 | 0.0658 | - | - | - | - | - |
| 0.0159 | 50 | 0.5701 | - | - | - | - | - |
| 0.0190 | 60 | 0.0882 | - | - | - | - | - |
| 0.0222 | 70 | 0.0902 | - | - | - | - | - |
| 0.0254 | 80 | 0.8865 | - | - | - | - | - |
| 0.0286 | 90 | 0.1985 | - | - | - | - | - |
| 0.0317 | 100 | 0.2853 | - | - | - | - | - |
| 0.0349 | 110 | 0.2637 | - | - | - | - | - |
| 0.0381 | 120 | 0.007 | - | - | - | - | - |
| 0.0413 | 130 | 0.0432 | - | - | - | - | - |
| 0.0444 | 140 | 0.0126 | - | - | - | - | - |
| 0.0476 | 150 | 0.0174 | - | - | - | - | - |
| 0.0508 | 160 | 0.2123 | - | - | - | - | - |
| 0.0540 | 170 | 0.0489 | - | - | - | - | - |
| 0.0571 | 180 | 0.0306 | - | - | - | - | - |
| 0.0603 | 190 | 0.0032 | - | - | - | - | - |
| 0.0635 | 200 | 0.027 | - | - | - | - | - |
| 0.0667 | 210 | 0.0131 | - | - | - | - | - |
| 0.0698 | 220 | 0.0164 | - | - | - | - | - |
| 0.0730 | 230 | 0.0044 | - | - | - | - | - |
| 0.0762 | 240 | 0.0119 | - | - | - | - | - |
| 0.0794 | 250 | 0.0539 | - | - | - | - | - |
| 0.0825 | 260 | 0.0425 | - | - | - | - | - |
| 0.0857 | 270 | 0.0213 | - | - | - | - | - |
| 0.0889 | 280 | 0.0676 | - | - | - | - | - |
| 0.0921 | 290 | 0.029 | - | - | - | - | - |
| 0.0952 | 300 | 0.0147 | - | - | - | - | - |
| 0.0984 | 310 | 0.0201 | - | - | - | - | - |
| 0.1016 | 320 | 0.0112 | - | - | - | - | - |
| 0.1048 | 330 | 0.0236 | - | - | - | - | - |
| 0.1079 | 340 | 0.0619 | - | - | - | - | - |
| 0.1111 | 350 | 0.0521 | - | - | - | - | - |
| 0.1143 | 360 | 0.034 | - | - | - | - | - |
| 0.1175 | 370 | 0.0729 | - | - | - | - | - |
| 0.1206 | 380 | 0.6353 | - | - | - | - | - |
| 0.1238 | 390 | 0.0053 | - | - | - | - | - |
| 0.1270 | 400 | 0.0047 | - | - | - | - | - |
| 0.1302 | 410 | 0.0038 | - | - | - | - | - |
| 0.1333 | 420 | 0.1795 | - | - | - | - | - |
| 0.1365 | 430 | 0.0715 | - | - | - | - | - |
| 0.1397 | 440 | 0.0328 | - | - | - | - | - |
| 0.1429 | 450 | 0.0301 | - | - | - | - | - |
| 0.1460 | 460 | 0.0163 | - | - | - | - | - |
| 0.1492 | 470 | 0.0515 | - | - | - | - | - |
| 0.1524 | 480 | 0.0009 | - | - | - | - | - |
| 0.1556 | 490 | 0.0645 | - | - | - | - | - |
| 0.1587 | 500 | 0.0024 | - | - | - | - | - |
| 0.1619 | 510 | 0.0833 | - | - | - | - | - |
| 0.1651 | 520 | 0.0052 | - | - | - | - | - |
| 0.1683 | 530 | 0.0056 | - | - | - | - | - |
| 0.1714 | 540 | 0.164 | - | - | - | - | - |
| 0.1746 | 550 | 0.0054 | - | - | - | - | - |
| 0.1778 | 560 | 0.0446 | - | - | - | - | - |
| 0.1810 | 570 | 0.001 | - | - | - | - | - |
| 0.1841 | 580 | 0.0869 | - | - | - | - | - |
| 0.1873 | 590 | 0.0036 | - | - | - | - | - |
| 0.1905 | 600 | 0.022 | - | - | - | - | - |
| 0.1937 | 610 | 0.0025 | - | - | - | - | - |
| 0.1968 | 620 | 0.0112 | - | - | - | - | - |
| 0.2 | 630 | 0.0005 | - | - | - | - | - |
| 0.2032 | 640 | 0.0047 | - | - | - | - | - |
| 0.2063 | 650 | 0.0003 | - | - | - | - | - |
| 0.2095 | 660 | 0.089 | - | - | - | - | - |
| 0.2127 | 670 | 0.0009 | - | - | - | - | - |
| 0.2159 | 680 | 0.0012 | - | - | - | - | - |
| 0.2190 | 690 | 0.0278 | - | - | - | - | - |
| 0.2222 | 700 | 0.0013 | - | - | - | - | - |
| 0.2254 | 710 | 0.0017 | - | - | - | - | - |
| 0.2286 | 720 | 0.0137 | - | - | - | - | - |
| 0.2317 | 730 | 0.2628 | - | - | - | - | - |
| 0.2349 | 740 | 0.011 | - | - | - | - | - |
| 0.2381 | 750 | 0.9877 | - | - | - | - | - |
| 0.2413 | 760 | 0.0166 | - | - | - | - | - |
| 0.2444 | 770 | 0.03 | - | - | - | - | - |
| 0.2476 | 780 | 0.5091 | - | - | - | - | - |
| 0.2508 | 790 | 0.0057 | - | - | - | - | - |
| 0.2540 | 800 | 0.0003 | - | - | - | - | - |
| 0.2571 | 810 | 0.0002 | - | - | - | - | - |
| 0.2603 | 820 | 0.0515 | - | - | - | - | - |
| 0.2635 | 830 | 0.134 | - | - | - | - | - |
| 0.2667 | 840 | 0.0033 | - | - | - | - | - |
| 0.2698 | 850 | 0.0046 | - | - | - | - | - |
| 0.2730 | 860 | 0.004 | - | - | - | - | - |
| 0.2762 | 870 | 0.0017 | - | - | - | - | - |
| 0.2794 | 880 | 0.0027 | - | - | - | - | - |
| 0.2825 | 890 | 0.0946 | - | - | - | - | - |
| 0.2857 | 900 | 0.0016 | - | - | - | - | - |
| 0.2889 | 910 | 0.0057 | - | - | - | - | - |
| 0.2921 | 920 | 0.0005 | - | - | - | - | - |
| 0.2952 | 930 | 0.0145 | - | - | - | - | - |
| 0.2984 | 940 | 0.0049 | - | - | - | - | - |
| 0.3016 | 950 | 0.0008 | - | - | - | - | - |
| 0.3048 | 960 | 0.0013 | - | - | - | - | - |
| 0.3079 | 970 | 0.0245 | - | - | - | - | - |
| 0.3111 | 980 | 0.0012 | - | - | - | - | - |
| 0.3143 | 990 | 0.0051 | - | - | - | - | - |
| 0.3175 | 1000 | 0.0016 | - | - | - | - | - |
| 0.3206 | 1010 | 0.0014 | - | - | - | - | - |
| 0.3238 | 1020 | 0.0002 | - | - | - | - | - |
| 0.3270 | 1030 | 0.0021 | - | - | - | - | - |
| 0.3302 | 1040 | 0.0038 | - | - | - | - | - |
| 0.3333 | 1050 | 0.0084 | - | - | - | - | - |
| 0.3365 | 1060 | 0.0044 | - | - | - | - | - |
| 0.3397 | 1070 | 0.0002 | - | - | - | - | - |
| 0.3429 | 1080 | 0.0058 | - | - | - | - | - |
| 0.3460 | 1090 | 0.008 | - | - | - | - | - |
| 0.3492 | 1100 | 0.0008 | - | - | - | - | - |
| 0.3524 | 1110 | 0.0043 | - | - | - | - | - |
| 0.3556 | 1120 | 0.1245 | - | - | - | - | - |
| 0.3587 | 1130 | 0.0037 | - | - | - | - | - |
| 0.3619 | 1140 | 0.581 | - | - | - | - | - |
| 0.3651 | 1150 | 0.0011 | - | - | - | - | - |
| 0.3683 | 1160 | 0.0061 | - | - | - | - | - |
| 0.3714 | 1170 | 0.0292 | - | - | - | - | - |
| 0.3746 | 1180 | 0.005 | - | - | - | - | - |
| 0.3778 | 1190 | 0.003 | - | - | - | - | - |
| 0.3810 | 1200 | 0.0003 | - | - | - | - | - |
| 0.3841 | 1210 | 0.0007 | - | - | - | - | - |
| 0.3873 | 1220 | 0.5248 | - | - | - | - | - |
| 0.3905 | 1230 | 0.3122 | - | - | - | - | - |
| 0.3937 | 1240 | 0.0079 | - | - | - | - | - |
| 0.3968 | 1250 | 0.014 | - | - | - | - | - |
| 0.4 | 1260 | 0.0271 | - | - | - | - | - |
| 0.4032 | 1270 | 0.0043 | - | - | - | - | - |
| 0.4063 | 1280 | 0.0005 | - | - | - | - | - |
| 0.4095 | 1290 | 0.0012 | - | - | - | - | - |
| 0.4127 | 1300 | 0.0179 | - | - | - | - | - |
| 0.4159 | 1310 | 0.0011 | - | - | - | - | - |
| 0.4190 | 1320 | 0.0048 | - | - | - | - | - |
| 0.4222 | 1330 | 0.002 | - | - | - | - | - |
| 0.4254 | 1340 | 0.0002 | - | - | - | - | - |
| 0.4286 | 1350 | 0.0091 | - | - | - | - | - |
| 0.4317 | 1360 | 0.0002 | - | - | - | - | - |
| 0.4349 | 1370 | 0.0137 | - | - | - | - | - |
| 0.4381 | 1380 | 0.017 | - | - | - | - | - |
| 0.4413 | 1390 | 0.0007 | - | - | - | - | - |
| 0.4444 | 1400 | 0.001 | - | - | - | - | - |
| 0.4476 | 1410 | 0.0015 | - | - | - | - | - |
| 0.4508 | 1420 | 0.0015 | - | - | - | - | - |
| 0.4540 | 1430 | 0.0002 | - | - | - | - | - |
| 0.4571 | 1440 | 0.125 | - | - | - | - | - |
| 0.4603 | 1450 | 0.0014 | - | - | - | - | - |
| 0.4635 | 1460 | 0.0019 | - | - | - | - | - |
| 0.4667 | 1470 | 0.0061 | - | - | - | - | - |
| 0.4698 | 1480 | 0.0019 | - | - | - | - | - |
| 0.4730 | 1490 | 0.0045 | - | - | - | - | - |
| 0.4762 | 1500 | 0.004 | - | - | - | - | - |
| 0.4794 | 1510 | 0.0003 | - | - | - | - | - |
| 0.4825 | 1520 | 0.0002 | - | - | - | - | - |
| 0.4857 | 1530 | 0.0053 | - | - | - | - | - |
| 0.4889 | 1540 | 0.0042 | - | - | - | - | - |
| 0.4921 | 1550 | 0.0005 | - | - | - | - | - |
| 0.4952 | 1560 | 0.0026 | - | - | - | - | - |
| 0.4984 | 1570 | 0.0081 | - | - | - | - | - |
| 0.5016 | 1580 | 0.0094 | - | - | - | - | - |
| 0.5048 | 1590 | 0.0003 | - | - | - | - | - |
| 0.5079 | 1600 | 0.0075 | - | - | - | - | - |
| 0.5111 | 1610 | 0.0002 | - | - | - | - | - |
| 0.5143 | 1620 | 0.001 | - | - | - | - | - |
| 0.5175 | 1630 | 0.0015 | - | - | - | - | - |
| 0.5206 | 1640 | 0.0015 | - | - | - | - | - |
| 0.5238 | 1650 | 0.3041 | - | - | - | - | - |
| 0.5270 | 1660 | 0.0328 | - | - | - | - | - |
| 0.5302 | 1670 | 0.0138 | - | - | - | - | - |
| 0.5333 | 1680 | 0.0007 | - | - | - | - | - |
| 0.5365 | 1690 | 0.0008 | - | - | - | - | - |
| 0.5397 | 1700 | 0.0011 | - | - | - | - | - |
| 0.5429 | 1710 | 0.0013 | - | - | - | - | - |
| 0.5460 | 1720 | 0.0011 | - | - | - | - | - |
| 0.5492 | 1730 | 0.2332 | - | - | - | - | - |
| 0.5524 | 1740 | 0.0021 | - | - | - | - | - |
| 0.5556 | 1750 | 0.8243 | - | - | - | - | - |
| 0.5587 | 1760 | 0.0199 | - | - | - | - | - |
| 0.5619 | 1770 | 0.0118 | - | - | - | - | - |
| 0.5651 | 1780 | 0.0425 | - | - | - | - | - |
| 0.5683 | 1790 | 0.003 | - | - | - | - | - |
| 0.5714 | 1800 | 0.0024 | - | - | - | - | - |
| 0.5746 | 1810 | 0.0002 | - | - | - | - | - |
| 0.5778 | 1820 | 0.0459 | - | - | - | - | - |
| 0.5810 | 1830 | 0.0018 | - | - | - | - | - |
| 0.5841 | 1840 | 0.0009 | - | - | - | - | - |
| 0.5873 | 1850 | 0.0007 | - | - | - | - | - |
| 0.5905 | 1860 | 0.0112 | - | - | - | - | - |
| 0.5937 | 1870 | 0.0302 | - | - | - | - | - |
| 0.5968 | 1880 | 0.0101 | - | - | - | - | - |
| 0.6 | 1890 | 0.0098 | - | - | - | - | - |
| 0.6032 | 1900 | 0.0332 | - | - | - | - | - |
| 0.6063 | 1910 | 0.0017 | - | - | - | - | - |
| 0.6095 | 1920 | 0.007 | - | - | - | - | - |
| 0.6127 | 1930 | 0.0012 | - | - | - | - | - |
| 0.6159 | 1940 | 0.0971 | - | - | - | - | - |
| 0.6190 | 1950 | 0.0009 | - | - | - | - | - |
| 0.6222 | 1960 | 0.0001 | - | - | - | - | - |
| 0.6254 | 1970 | 0.0041 | - | - | - | - | - |
| 0.6286 | 1980 | 0.0021 | - | - | - | - | - |
| 0.6317 | 1990 | 0.0044 | - | - | - | - | - |
| 0.6349 | 2000 | 0.0004 | - | - | - | - | - |
| 0.6381 | 2010 | 0.0077 | - | - | - | - | - |
| 0.6413 | 2020 | 0.0002 | - | - | - | - | - |
| 0.6444 | 2030 | 0.0006 | - | - | - | - | - |
| 0.6476 | 2040 | 0.0008 | - | - | - | - | - |
| 0.6508 | 2050 | 0.0004 | - | - | - | - | - |
| 0.6540 | 2060 | 0.0013 | - | - | - | - | - |
| 0.6571 | 2070 | 0.0009 | - | - | - | - | - |
| 0.6603 | 2080 | 0.0015 | - | - | - | - | - |
| 0.6635 | 2090 | 0.0002 | - | - | - | - | - |
| 0.6667 | 2100 | 0.0028 | - | - | - | - | - |
| 0.6698 | 2110 | 0.0008 | - | - | - | - | - |
| 0.6730 | 2120 | 0.0094 | - | - | - | - | - |
| 0.6762 | 2130 | 0.5743 | - | - | - | - | - |
| 0.6794 | 2140 | 0.0002 | - | - | - | - | - |
| 0.6825 | 2150 | 0.0006 | - | - | - | - | - |
| 0.6857 | 2160 | 0.0005 | - | - | - | - | - |
| 0.6889 | 2170 | 0.0002 | - | - | - | - | - |
| 0.6921 | 2180 | 0.0032 | - | - | - | - | - |
| 0.6952 | 2190 | 0.0006 | - | - | - | - | - |
| 0.6984 | 2200 | 0.0012 | - | - | - | - | - |
| 0.7016 | 2210 | 0.0598 | - | - | - | - | - |
| 0.7048 | 2220 | 0.0 | - | - | - | - | - |
| 0.7079 | 2230 | 0.0001 | - | - | - | - | - |
| 0.7111 | 2240 | 0.0001 | - | - | - | - | - |
| 0.7143 | 2250 | 0.0082 | - | - | - | - | - |
| 0.7175 | 2260 | 0.0033 | - | - | - | - | - |
| 0.7206 | 2270 | 0.0004 | - | - | - | - | - |
| 0.7238 | 2280 | 0.0132 | - | - | - | - | - |
| 0.7270 | 2290 | 0.0004 | - | - | - | - | - |
| 0.7302 | 2300 | 0.0107 | - | - | - | - | - |
| 0.7333 | 2310 | 0.0018 | - | - | - | - | - |
| 0.7365 | 2320 | 0.0255 | - | - | - | - | - |
| 0.7397 | 2330 | 0.0001 | - | - | - | - | - |
| 0.7429 | 2340 | 0.0025 | - | - | - | - | - |
| 0.7460 | 2350 | 0.3299 | - | - | - | - | - |
| 0.7492 | 2360 | 0.0039 | - | - | - | - | - |
| 0.7524 | 2370 | 0.0511 | - | - | - | - | - |
| 0.7556 | 2380 | 0.0001 | - | - | - | - | - |
| 0.7587 | 2390 | 0.0002 | - | - | - | - | - |
| 0.7619 | 2400 | 0.0001 | - | - | - | - | - |
| 0.7651 | 2410 | 0.0002 | - | - | - | - | - |
| 0.7683 | 2420 | 0.0072 | - | - | - | - | - |
| 0.7714 | 2430 | 0.0453 | - | - | - | - | - |
| 0.7746 | 2440 | 0.0003 | - | - | - | - | - |
| 0.7778 | 2450 | 0.0224 | - | - | - | - | - |
| 0.7810 | 2460 | 0.0035 | - | - | - | - | - |
| 0.7841 | 2470 | 0.001 | - | - | - | - | - |
| 0.7873 | 2480 | 0.0003 | - | - | - | - | - |
| 0.7905 | 2490 | 0.0001 | - | - | - | - | - |
| 0.7937 | 2500 | 0.0002 | - | - | - | - | - |
| 0.7968 | 2510 | 0.0489 | - | - | - | - | - |
| 0.8 | 2520 | 0.0001 | - | - | - | - | - |
| 0.8032 | 2530 | 0.0128 | - | - | - | - | - |
| 0.8063 | 2540 | 0.0009 | - | - | - | - | - |
| 0.8095 | 2550 | 0.0022 | - | - | - | - | - |
| 0.8127 | 2560 | 0.0002 | - | - | - | - | - |
| 0.8159 | 2570 | 0.0525 | - | - | - | - | - |
| 0.8190 | 2580 | 0.0005 | - | - | - | - | - |
| 0.8222 | 2590 | 0.2441 | - | - | - | - | - |
| 0.8254 | 2600 | 0.0002 | - | - | - | - | - |
| 0.8286 | 2610 | 0.0002 | - | - | - | - | - |
| 0.8317 | 2620 | 0.0004 | - | - | - | - | - |
| 0.8349 | 2630 | 0.0007 | - | - | - | - | - |
| 0.8381 | 2640 | 0.01 | - | - | - | - | - |
| 0.8413 | 2650 | 1.0383 | - | - | - | - | - |
| 0.8444 | 2660 | 0.2035 | - | - | - | - | - |
| 0.8476 | 2670 | 0.0246 | - | - | - | - | - |
| 0.8508 | 2680 | 0.056 | - | - | - | - | - |
| 0.8540 | 2690 | 0.0 | - | - | - | - | - |
| 0.8571 | 2700 | 0.0 | - | - | - | - | - |
| 0.8603 | 2710 | 0.378 | - | - | - | - | - |
| 0.8635 | 2720 | 0.0076 | - | - | - | - | - |
| 0.8667 | 2730 | 0.0108 | - | - | - | - | - |
| 0.8698 | 2740 | 0.0066 | - | - | - | - | - |
| 0.8730 | 2750 | 0.0146 | - | - | - | - | - |
| 0.8762 | 2760 | 0.0002 | - | - | - | - | - |
| 0.8794 | 2770 | 0.0005 | - | - | - | - | - |
| 0.8825 | 2780 | 0.0001 | - | - | - | - | - |
| 0.8857 | 2790 | 0.0001 | - | - | - | - | - |
| 0.8889 | 2800 | 0.006 | - | - | - | - | - |
| 0.8921 | 2810 | 0.0021 | - | - | - | - | - |
| 0.8952 | 2820 | 0.0314 | - | - | - | - | - |
| 0.8984 | 2830 | 0.0008 | - | - | - | - | - |
| 0.9016 | 2840 | 0.0004 | - | - | - | - | - |
| 0.9048 | 2850 | 0.0024 | - | - | - | - | - |
| 0.9079 | 2860 | 0.0004 | - | - | - | - | - |
| 0.9111 | 2870 | 0.0004 | - | - | - | - | - |
| 0.9143 | 2880 | 0.0001 | - | - | - | - | - |
| 0.9175 | 2890 | 0.0017 | - | - | - | - | - |
| 0.9206 | 2900 | 0.0004 | - | - | - | - | - |
| 0.9238 | 2910 | 0.0016 | - | - | - | - | - |
| 0.9270 | 2920 | 0.0004 | - | - | - | - | - |
| 0.9302 | 2930 | 0.0029 | - | - | - | - | - |
| 0.9333 | 2940 | 0.0011 | - | - | - | - | - |
| 0.9365 | 2950 | 0.0015 | - | - | - | - | - |
| 0.9397 | 2960 | 0.0128 | - | - | - | - | - |
| 0.9429 | 2970 | 0.311 | - | - | - | - | - |
| 0.9460 | 2980 | 0.0244 | - | - | - | - | - |
| 0.9492 | 2990 | 0.0278 | - | - | - | - | - |
| 0.9524 | 3000 | 0.0016 | - | - | - | - | - |
| 0.9556 | 3010 | 0.0005 | - | - | - | - | - |
| 0.9587 | 3020 | 0.0008 | - | - | - | - | - |
| 0.9619 | 3030 | 0.0005 | - | - | - | - | - |
| 0.9651 | 3040 | 0.0 | - | - | - | - | - |
| 0.9683 | 3050 | 0.0103 | - | - | - | - | - |
| 0.9714 | 3060 | 0.0019 | - | - | - | - | - |
| 0.9746 | 3070 | 0.0011 | - | - | - | - | - |
| 0.9778 | 3080 | 0.0005 | - | - | - | - | - |
| 0.9810 | 3090 | 0.0377 | - | - | - | - | - |
| 0.9841 | 3100 | 0.0006 | - | - | - | - | - |
| 0.9873 | 3110 | 0.7692 | - | - | - | - | - |
| 0.9905 | 3120 | 0.0005 | - | - | - | - | - |
| 0.9937 | 3130 | 0.0006 | - | - | - | - | - |
| 0.9968 | 3140 | 0.0062 | - | - | - | - | - |
| 1.0 | 3150 | 0.0161 | 0.7705 | 0.7679 | 0.7597 | 0.7425 | 0.7233 |
| 1.0032 | 3160 | 0.0032 | - | - | - | - | - |
| 1.0063 | 3170 | 0.0 | - | - | - | - | - |
| 1.0095 | 3180 | 0.0016 | - | - | - | - | - |
| 1.0127 | 3190 | 0.0001 | - | - | - | - | - |
| 1.0159 | 3200 | 0.0221 | - | - | - | - | - |
| 1.0190 | 3210 | 0.0004 | - | - | - | - | - |
| 1.0222 | 3220 | 0.0008 | - | - | - | - | - |
| 1.0254 | 3230 | 0.0001 | - | - | - | - | - |
| 1.0286 | 3240 | 0.0004 | - | - | - | - | - |
| 1.0317 | 3250 | 0.0004 | - | - | - | - | - |
| 1.0349 | 3260 | 0.0004 | - | - | - | - | - |
| 1.0381 | 3270 | 0.0 | - | - | - | - | - |
| 1.0413 | 3280 | 0.0001 | - | - | - | - | - |
| 1.0444 | 3290 | 0.2183 | - | - | - | - | - |
| 1.0476 | 3300 | 0.045 | - | - | - | - | - |
| 1.0508 | 3310 | 0.0002 | - | - | - | - | - |
| 1.0540 | 3320 | 0.0001 | - | - | - | - | - |
| 1.0571 | 3330 | 0.0167 | - | - | - | - | - |
| 1.0603 | 3340 | 0.0043 | - | - | - | - | - |
| 1.0635 | 3350 | 0.0012 | - | - | - | - | - |
| 1.0667 | 3360 | 0.0006 | - | - | - | - | - |
| 1.0698 | 3370 | 0.0029 | - | - | - | - | - |
| 1.0730 | 3380 | 0.0004 | - | - | - | - | - |
| 1.0762 | 3390 | 0.0024 | - | - | - | - | - |
| 1.0794 | 3400 | 0.0019 | - | - | - | - | - |
| 1.0825 | 3410 | 0.2129 | - | - | - | - | - |
| 1.0857 | 3420 | 0.06 | - | - | - | - | - |
| 1.0889 | 3430 | 0.0001 | - | - | - | - | - |
| 1.0921 | 3440 | 0.0008 | - | - | - | - | - |
| 1.0952 | 3450 | 0.0 | - | - | - | - | - |
| 1.0984 | 3460 | 0.0006 | - | - | - | - | - |
| 1.1016 | 3470 | 0.0001 | - | - | - | - | - |
| 1.1048 | 3480 | 0.0009 | - | - | - | - | - |
| 1.1079 | 3490 | 0.0016 | - | - | - | - | - |
| 1.1111 | 3500 | 0.0002 | - | - | - | - | - |
| 1.1143 | 3510 | 0.0001 | - | - | - | - | - |
| 1.1175 | 3520 | 0.0198 | - | - | - | - | - |
| 1.1206 | 3530 | 0.0018 | - | - | - | - | - |
| 1.1238 | 3540 | 0.0 | - | - | - | - | - |
| 1.1270 | 3550 | 0.0001 | - | - | - | - | - |
| 1.1302 | 3560 | 0.0003 | - | - | - | - | - |
| 1.1333 | 3570 | 0.0021 | - | - | - | - | - |
| 1.1365 | 3580 | 0.0 | - | - | - | - | - |
| 1.1397 | 3590 | 0.0007 | - | - | - | - | - |
| 1.1429 | 3600 | 0.0 | - | - | - | - | - |
| 1.1460 | 3610 | 0.0016 | - | - | - | - | - |
| 1.1492 | 3620 | 0.0005 | - | - | - | - | - |
| 1.1524 | 3630 | 0.001 | - | - | - | - | - |
| 1.1556 | 3640 | 0.0042 | - | - | - | - | - |
| 1.1587 | 3650 | 0.0008 | - | - | - | - | - |
| 1.1619 | 3660 | 0.0002 | - | - | - | - | - |
| 1.1651 | 3670 | 0.0004 | - | - | - | - | - |
| 1.1683 | 3680 | 0.1335 | - | - | - | - | - |
| 1.1714 | 3690 | 0.0014 | - | - | - | - | - |
| 1.1746 | 3700 | 0.0009 | - | - | - | - | - |
| 1.1778 | 3710 | 0.0017 | - | - | - | - | - |
| 1.1810 | 3720 | 0.0088 | - | - | - | - | - |
| 1.1841 | 3730 | 0.0002 | - | - | - | - | - |
| 1.1873 | 3740 | 0.0122 | - | - | - | - | - |
| 1.1905 | 3750 | 0.0001 | - | - | - | - | - |
| 1.1937 | 3760 | 0.0 | - | - | - | - | - |
| 1.1968 | 3770 | 0.0017 | - | - | - | - | - |
| 1.2 | 3780 | 0.0031 | - | - | - | - | - |
| 1.2032 | 3790 | 0.0026 | - | - | - | - | - |
| 1.2063 | 3800 | 0.0001 | - | - | - | - | - |
| 1.2095 | 3810 | 0.026 | - | - | - | - | - |
| 1.2127 | 3820 | 0.0002 | - | - | - | - | - |
| 1.2159 | 3830 | 0.0053 | - | - | - | - | - |
| 1.2190 | 3840 | 0.0004 | - | - | - | - | - |
| 1.2222 | 3850 | 0.2406 | - | - | - | - | - |
| 1.2254 | 3860 | 0.0069 | - | - | - | - | - |
| 1.2286 | 3870 | 0.0098 | - | - | - | - | - |
| 1.2317 | 3880 | 0.0005 | - | - | - | - | - |
| 1.2349 | 3890 | 0.0056 | - | - | - | - | - |
| 1.2381 | 3900 | 0.0 | - | - | - | - | - |
| 1.2413 | 3910 | 0.0001 | - | - | - | - | - |
| 1.2444 | 3920 | 0.0003 | - | - | - | - | - |
| 1.2476 | 3930 | 0.0007 | - | - | - | - | - |
| 1.2508 | 3940 | 0.0029 | - | - | - | - | - |
| 1.2540 | 3950 | 0.0001 | - | - | - | - | - |
| 1.2571 | 3960 | 0.0022 | - | - | - | - | - |
| 1.2603 | 3970 | 0.0021 | - | - | - | - | - |
| 1.2635 | 3980 | 0.0001 | - | - | - | - | - |
| 1.2667 | 3990 | 0.0006 | - | - | - | - | - |
| 1.2698 | 4000 | 0.0 | - | - | - | - | - |
| 1.2730 | 4010 | 0.0 | - | - | - | - | - |
| 1.2762 | 4020 | 0.0003 | - | - | - | - | - |
| 1.2794 | 4030 | 0.525 | - | - | - | - | - |
| 1.2825 | 4040 | 0.0001 | - | - | - | - | - |
| 1.2857 | 4050 | 0.0001 | - | - | - | - | - |
| 1.2889 | 4060 | 0.0003 | - | - | - | - | - |
| 1.2921 | 4070 | 0.0001 | - | - | - | - | - |
| 1.2952 | 4080 | 0.0002 | - | - | - | - | - |
| 1.2984 | 4090 | 0.0001 | - | - | - | - | - |
| 1.3016 | 4100 | 0.0006 | - | - | - | - | - |
| 1.3048 | 4110 | 0.0003 | - | - | - | - | - |
| 1.3079 | 4120 | 0.0162 | - | - | - | - | - |
| 1.3111 | 4130 | 0.0002 | - | - | - | - | - |
| 1.3143 | 4140 | 0.008 | - | - | - | - | - |
| 1.3175 | 4150 | 0.6283 | - | - | - | - | - |
| 1.3206 | 4160 | 0.0 | - | - | - | - | - |
| 1.3238 | 4170 | 0.0004 | - | - | - | - | - |
| 1.3270 | 4180 | 0.0002 | - | - | - | - | - |
| 1.3302 | 4190 | 0.0 | - | - | - | - | - |
| 1.3333 | 4200 | 0.0002 | - | - | - | - | - |
| 1.3365 | 4210 | 0.0002 | - | - | - | - | - |
| 1.3397 | 4220 | 0.0001 | - | - | - | - | - |
| 1.3429 | 4230 | 0.0023 | - | - | - | - | - |
| 1.3460 | 4240 | 0.0002 | - | - | - | - | - |
| 1.3492 | 4250 | 0.0 | - | - | - | - | - |
| 1.3524 | 4260 | 0.0 | - | - | - | - | - |
| 1.3556 | 4270 | 0.0 | - | - | - | - | - |
| 1.3587 | 4280 | 0.002 | - | - | - | - | - |
| 1.3619 | 4290 | 0.0019 | - | - | - | - | - |
| 1.3651 | 4300 | 0.0012 | - | - | - | - | - |
| 1.3683 | 4310 | 0.0061 | - | - | - | - | - |
| 1.3714 | 4320 | 0.0677 | - | - | - | - | - |
| 1.3746 | 4330 | 0.0 | - | - | - | - | - |
| 1.3778 | 4340 | 0.0 | - | - | - | - | - |
| 1.3810 | 4350 | 0.0784 | - | - | - | - | - |
| 1.3841 | 4360 | 0.0001 | - | - | - | - | - |
| 1.3873 | 4370 | 0.0097 | - | - | - | - | - |
| 1.3905 | 4380 | 0.0004 | - | - | - | - | - |
| 1.3937 | 4390 | 0.0001 | - | - | - | - | - |
| 1.3968 | 4400 | 0.0065 | - | - | - | - | - |
| 1.4 | 4410 | 0.0002 | - | - | - | - | - |
| 1.4032 | 4420 | 0.0128 | - | - | - | - | - |
| 1.4063 | 4430 | 0.0001 | - | - | - | - | - |
| 1.4095 | 4440 | 0.0006 | - | - | - | - | - |
| 1.4127 | 4450 | 0.0002 | - | - | - | - | - |
| 1.4159 | 4460 | 0.0008 | - | - | - | - | - |
| 1.4190 | 4470 | 0.0001 | - | - | - | - | - |
| 1.4222 | 4480 | 0.0001 | - | - | - | - | - |
| 1.4254 | 4490 | 0.0001 | - | - | - | - | - |
| 1.4286 | 4500 | 0.0511 | - | - | - | - | - |
| 1.4317 | 4510 | 0.0001 | - | - | - | - | - |
| 1.4349 | 4520 | 0.0001 | - | - | - | - | - |
| 1.4381 | 4530 | 0.0044 | - | - | - | - | - |
| 1.4413 | 4540 | 0.0025 | - | - | - | - | - |
| 1.4444 | 4550 | 0.0001 | - | - | - | - | - |
| 1.4476 | 4560 | 0.0001 | - | - | - | - | - |
| 1.4508 | 4570 | 0.015 | - | - | - | - | - |
| 1.4540 | 4580 | 0.0002 | - | - | - | - | - |
| 1.4571 | 4590 | 0.0001 | - | - | - | - | - |
| 1.4603 | 4600 | 0.0308 | - | - | - | - | - |
| 1.4635 | 4610 | 0.0005 | - | - | - | - | - |
| 1.4667 | 4620 | 0.0101 | - | - | - | - | - |
| 1.4698 | 4630 | 0.0012 | - | - | - | - | - |
| 1.4730 | 4640 | 0.0023 | - | - | - | - | - |
| 1.4762 | 4650 | 0.0003 | - | - | - | - | - |
| 1.4794 | 4660 | 0.0313 | - | - | - | - | - |
| 1.4825 | 4670 | 0.0048 | - | - | - | - | - |
| 1.4857 | 4680 | 0.0013 | - | - | - | - | - |
| 1.4889 | 4690 | 0.0008 | - | - | - | - | - |
| 1.4921 | 4700 | 0.0001 | - | - | - | - | - |
| 1.4952 | 4710 | 0.0007 | - | - | - | - | - |
| 1.4984 | 4720 | 0.0 | - | - | - | - | - |
| 1.5016 | 4730 | 0.0002 | - | - | - | - | - |
| 1.5048 | 4740 | 0.0019 | - | - | - | - | - |
| 1.5079 | 4750 | 0.0491 | - | - | - | - | - |
| 1.5111 | 4760 | 0.0272 | - | - | - | - | - |
| 1.5143 | 4770 | 0.0003 | - | - | - | - | - |
| 1.5175 | 4780 | 0.0003 | - | - | - | - | - |
| 1.5206 | 4790 | 0.0 | - | - | - | - | - |
| 1.5238 | 4800 | 0.0001 | - | - | - | - | - |
| 1.5270 | 4810 | 0.0006 | - | - | - | - | - |
| 1.5302 | 4820 | 0.0001 | - | - | - | - | - |
| 1.5333 | 4830 | 0.0011 | - | - | - | - | - |
| 1.5365 | 4840 | 0.0001 | - | - | - | - | - |
| 1.5397 | 4850 | 0.0004 | - | - | - | - | - |
| 1.5429 | 4860 | 0.002 | - | - | - | - | - |
| 1.5460 | 4870 | 0.8482 | - | - | - | - | - |
| 1.5492 | 4880 | 0.0001 | - | - | - | - | - |
| 1.5524 | 4890 | 0.0001 | - | - | - | - | - |
| 1.5556 | 4900 | 0.0004 | - | - | - | - | - |
| 1.5587 | 4910 | 0.0084 | - | - | - | - | - |
| 1.5619 | 4920 | 0.0006 | - | - | - | - | - |
| 1.5651 | 4930 | 0.3809 | - | - | - | - | - |
| 1.5683 | 4940 | 0.0007 | - | - | - | - | - |
| 1.5714 | 4950 | 0.0 | - | - | - | - | - |
| 1.5746 | 4960 | 0.002 | - | - | - | - | - |
| 1.5778 | 4970 | 0.0021 | - | - | - | - | - |
| 1.5810 | 4980 | 0.3699 | - | - | - | - | - |
| 1.5841 | 4990 | 0.0022 | - | - | - | - | - |
| 1.5873 | 5000 | 0.0022 | - | - | - | - | - |
| 1.5905 | 5010 | 0.0043 | - | - | - | - | - |
| 1.5937 | 5020 | 0.0001 | - | - | - | - | - |
| 1.5968 | 5030 | 0.0001 | - | - | - | - | - |
| 1.6 | 5040 | 0.0016 | - | - | - | - | - |
| 1.6032 | 5050 | 0.0004 | - | - | - | - | - |
| 1.6063 | 5060 | 0.0003 | - | - | - | - | - |
| 1.6095 | 5070 | 0.0017 | - | - | - | - | - |
| 1.6127 | 5080 | 0.0016 | - | - | - | - | - |
| 1.6159 | 5090 | 0.0001 | - | - | - | - | - |
| 1.6190 | 5100 | 0.0051 | - | - | - | - | - |
| 1.6222 | 5110 | 0.0 | - | - | - | - | - |
| 1.6254 | 5120 | 0.0214 | - | - | - | - | - |
| 1.6286 | 5130 | 0.0031 | - | - | - | - | - |
| 1.6317 | 5140 | 0.0011 | - | - | - | - | - |
| 1.6349 | 5150 | 0.0 | - | - | - | - | - |
| 1.6381 | 5160 | 0.0001 | - | - | - | - | - |
| 1.6413 | 5170 | 0.0001 | - | - | - | - | - |
| 1.6444 | 5180 | 0.0015 | - | - | - | - | - |
| 1.6476 | 5190 | 0.0002 | - | - | - | - | - |
| 1.6508 | 5200 | 0.0001 | - | - | - | - | - |
| 1.6540 | 5210 | 0.0023 | - | - | - | - | - |
| 1.6571 | 5220 | 0.2279 | - | - | - | - | - |
| 1.6603 | 5230 | 0.0787 | - | - | - | - | - |
| 1.6635 | 5240 | 0.0002 | - | - | - | - | - |
| 1.6667 | 5250 | 0.0015 | - | - | - | - | - |
| 1.6698 | 5260 | 0.0 | - | - | - | - | - |
| 1.6730 | 5270 | 0.0004 | - | - | - | - | - |
| 1.6762 | 5280 | 0.0011 | - | - | - | - | - |
| 1.6794 | 5290 | 0.0003 | - | - | - | - | - |
| 1.6825 | 5300 | 0.0017 | - | - | - | - | - |
| 1.6857 | 5310 | 0.0002 | - | - | - | - | - |
| 1.6889 | 5320 | 0.0 | - | - | - | - | - |
| 1.6921 | 5330 | 0.001 | - | - | - | - | - |
| 1.6952 | 5340 | 0.0003 | - | - | - | - | - |
| 1.6984 | 5350 | 0.0004 | - | - | - | - | - |
| 1.7016 | 5360 | 0.0294 | - | - | - | - | - |
| 1.7048 | 5370 | 0.0005 | - | - | - | - | - |
| 1.7079 | 5380 | 0.0123 | - | - | - | - | - |
| 1.7111 | 5390 | 0.0053 | - | - | - | - | - |
| 1.7143 | 5400 | 0.2908 | - | - | - | - | - |
| 1.7175 | 5410 | 0.0001 | - | - | - | - | - |
| 1.7206 | 5420 | 0.0005 | - | - | - | - | - |
| 1.7238 | 5430 | 0.0004 | - | - | - | - | - |
| 1.7270 | 5440 | 0.0384 | - | - | - | - | - |
| 1.7302 | 5450 | 0.2805 | - | - | - | - | - |
| 1.7333 | 5460 | 0.0004 | - | - | - | - | - |
| 1.7365 | 5470 | 0.0013 | - | - | - | - | - |
| 1.7397 | 5480 | 0.0002 | - | - | - | - | - |
| 1.7429 | 5490 | 1.5794 | - | - | - | - | - |
| 1.7460 | 5500 | 0.0125 | - | - | - | - | - |
| 1.7492 | 5510 | 0.0029 | - | - | - | - | - |
| 1.7524 | 5520 | 0.0 | - | - | - | - | - |
| 1.7556 | 5530 | 0.0001 | - | - | - | - | - |
| 1.7587 | 5540 | 0.0025 | - | - | - | - | - |
| 1.7619 | 5550 | 0.0446 | - | - | - | - | - |
| 1.7651 | 5560 | 0.0023 | - | - | - | - | - |
| 1.7683 | 5570 | 0.0001 | - | - | - | - | - |
| 1.7714 | 5580 | 0.0004 | - | - | - | - | - |
| 1.7746 | 5590 | 0.0003 | - | - | - | - | - |
| 1.7778 | 5600 | 0.0002 | - | - | - | - | - |
| 1.7810 | 5610 | 0.0002 | - | - | - | - | - |
| 1.7841 | 5620 | 0.1482 | - | - | - | - | - |
| 1.7873 | 5630 | 0.0632 | - | - | - | - | - |
| 1.7905 | 5640 | 0.0009 | - | - | - | - | - |
| 1.7937 | 5650 | 0.0027 | - | - | - | - | - |
| 1.7968 | 5660 | 0.0011 | - | - | - | - | - |
| 1.8 | 5670 | 0.0001 | - | - | - | - | - |
| 1.8032 | 5680 | 0.0 | - | - | - | - | - |
| 1.8063 | 5690 | 0.0029 | - | - | - | - | - |
| 1.8095 | 5700 | 0.0004 | - | - | - | - | - |
| 1.8127 | 5710 | 0.0019 | - | - | - | - | - |
| 1.8159 | 5720 | 0.1265 | - | - | - | - | - |
| 1.8190 | 5730 | 0.0004 | - | - | - | - | - |
| 1.8222 | 5740 | 0.0012 | - | - | - | - | - |
| 1.8254 | 5750 | 0.0001 | - | - | - | - | - |
| 1.8286 | 5760 | 0.0047 | - | - | - | - | - |
| 1.8317 | 5770 | 0.0227 | - | - | - | - | - |
| 1.8349 | 5780 | 0.0003 | - | - | - | - | - |
| 1.8381 | 5790 | 0.0001 | - | - | - | - | - |
| 1.8413 | 5800 | 0.0044 | - | - | - | - | - |
| 1.8444 | 5810 | 0.0001 | - | - | - | - | - |
| 1.8476 | 5820 | 0.0004 | - | - | - | - | - |
| 1.8508 | 5830 | 0.0005 | - | - | - | - | - |
| 1.8540 | 5840 | 0.0009 | - | - | - | - | - |
| 1.8571 | 5850 | 0.0027 | - | - | - | - | - |
| 1.8603 | 5860 | 0.0003 | - | - | - | - | - |
| 1.8635 | 5870 | 0.0 | - | - | - | - | - |
| 1.8667 | 5880 | 0.0001 | - | - | - | - | - |
| 1.8698 | 5890 | 0.0002 | - | - | - | - | - |
| 1.8730 | 5900 | 0.0 | - | - | - | - | - |
| 1.8762 | 5910 | 0.0002 | - | - | - | - | - |
| 1.8794 | 5920 | 0.001 | - | - | - | - | - |
| 1.8825 | 5930 | 0.0001 | - | - | - | - | - |
| 1.8857 | 5940 | 0.0001 | - | - | - | - | - |
| 1.8889 | 5950 | 0.0049 | - | - | - | - | - |
| 1.8921 | 5960 | 0.0 | - | - | - | - | - |
| 1.8952 | 5970 | 0.0023 | - | - | - | - | - |
| 1.8984 | 5980 | 0.0001 | - | - | - | - | - |
| 1.9016 | 5990 | 0.0002 | - | - | - | - | - |
| 1.9048 | 6000 | 0.0371 | - | - | - | - | - |
| 1.9079 | 6010 | 0.0 | - | - | - | - | - |
| 1.9111 | 6020 | 0.0001 | - | - | - | - | - |
| 1.9143 | 6030 | 0.0116 | - | - | - | - | - |
| 1.9175 | 6040 | 0.0 | - | - | - | - | - |
| 1.9206 | 6050 | 0.0 | - | - | - | - | - |
| 1.9238 | 6060 | 0.0 | - | - | - | - | - |
| 1.9270 | 6070 | 0.0001 | - | - | - | - | - |
| 1.9302 | 6080 | 0.0001 | - | - | - | - | - |
| 1.9333 | 6090 | 0.0002 | - | - | - | - | - |
| 1.9365 | 6100 | 0.4081 | - | - | - | - | - |
| 1.9397 | 6110 | 0.0309 | - | - | - | - | - |
| 1.9429 | 6120 | 0.0009 | - | - | - | - | - |
| 1.9460 | 6130 | 0.0018 | - | - | - | - | - |
| 1.9492 | 6140 | 0.0005 | - | - | - | - | - |
| 1.9524 | 6150 | 0.0058 | - | - | - | - | - |
| 1.9556 | 6160 | 0.0 | - | - | - | - | - |
| 1.9587 | 6170 | 0.0215 | - | - | - | - | - |
| 1.9619 | 6180 | 0.0007 | - | - | - | - | - |
| 1.9651 | 6190 | 0.0072 | - | - | - | - | - |
| 1.9683 | 6200 | 0.0002 | - | - | - | - | - |
| 1.9714 | 6210 | 0.0001 | - | - | - | - | - |
| 1.9746 | 6220 | 0.0002 | - | - | - | - | - |
| 1.9778 | 6230 | 0.0001 | - | - | - | - | - |
| 1.9810 | 6240 | 0.0005 | - | - | - | - | - |
| 1.9841 | 6250 | 0.0011 | - | - | - | - | - |
| 1.9873 | 6260 | 0.0027 | - | - | - | - | - |
| 1.9905 | 6270 | 0.0016 | - | - | - | - | - |
| 1.9937 | 6280 | 0.0364 | - | - | - | - | - |
| 1.9968 | 6290 | 0.0016 | - | - | - | - | - |
| 2.0 | 6300 | 0.0001 | 0.7724 | 0.7705 | 0.7673 | 0.7579 | 0.7396 |
| 2.0032 | 6310 | 0.0 | - | - | - | - | - |
| 2.0063 | 6320 | 0.0391 | - | - | - | - | - |
| 2.0095 | 6330 | 0.0009 | - | - | - | - | - |
| 2.0127 | 6340 | 0.0045 | - | - | - | - | - |
| 2.0159 | 6350 | 0.0002 | - | - | - | - | - |
| 2.0190 | 6360 | 0.0224 | - | - | - | - | - |
| 2.0222 | 6370 | 0.007 | - | - | - | - | - |
| 2.0254 | 6380 | 0.0011 | - | - | - | - | - |
| 2.0286 | 6390 | 0.0 | - | - | - | - | - |
| 2.0317 | 6400 | 0.001 | - | - | - | - | - |
| 2.0349 | 6410 | 0.0004 | - | - | - | - | - |
| 2.0381 | 6420 | 0.0 | - | - | - | - | - |
| 2.0413 | 6430 | 0.1194 | - | - | - | - | - |
| 2.0444 | 6440 | 0.0023 | - | - | - | - | - |
| 2.0476 | 6450 | 0.0004 | - | - | - | - | - |
| 2.0508 | 6460 | 0.0 | - | - | - | - | - |
| 2.0540 | 6470 | 0.0007 | - | - | - | - | - |
| 2.0571 | 6480 | 0.0001 | - | - | - | - | - |
| 2.0603 | 6490 | 0.0 | - | - | - | - | - |
| 2.0635 | 6500 | 0.0063 | - | - | - | - | - |
| 2.0667 | 6510 | 0.0 | - | - | - | - | - |
| 2.0698 | 6520 | 0.0047 | - | - | - | - | - |
| 2.0730 | 6530 | 0.0001 | - | - | - | - | - |
| 2.0762 | 6540 | 0.0 | - | - | - | - | - |
| 2.0794 | 6550 | 0.0001 | - | - | - | - | - |
| 2.0825 | 6560 | 0.0 | - | - | - | - | - |
| 2.0857 | 6570 | 0.0 | - | - | - | - | - |
| 2.0889 | 6580 | 0.0078 | - | - | - | - | - |
| 2.0921 | 6590 | 0.0016 | - | - | - | - | - |
| 2.0952 | 6600 | 0.0014 | - | - | - | - | - |
| 2.0984 | 6610 | 0.0001 | - | - | - | - | - |
| 2.1016 | 6620 | 0.0001 | - | - | - | - | - |
| 2.1048 | 6630 | 0.0001 | - | - | - | - | - |
| 2.1079 | 6640 | 0.0047 | - | - | - | - | - |
| 2.1111 | 6650 | 0.0009 | - | - | - | - | - |
| 2.1143 | 6660 | 0.0001 | - | - | - | - | - |
| 2.1175 | 6670 | 0.0003 | - | - | - | - | - |
| 2.1206 | 6680 | 0.0 | - | - | - | - | - |
| 2.1238 | 6690 | 0.0001 | - | - | - | - | - |
| 2.1270 | 6700 | 0.0 | - | - | - | - | - |
| 2.1302 | 6710 | 0.2378 | - | - | - | - | - |
| 2.1333 | 6720 | 0.0001 | - | - | - | - | - |
| 2.1365 | 6730 | 0.0 | - | - | - | - | - |
| 2.1397 | 6740 | 0.0011 | - | - | - | - | - |
| 2.1429 | 6750 | 0.0012 | - | - | - | - | - |
| 2.1460 | 6760 | 0.0001 | - | - | - | - | - |
| 2.1492 | 6770 | 0.0005 | - | - | - | - | - |
| 2.1524 | 6780 | 0.0 | - | - | - | - | - |
| 2.1556 | 6790 | 0.0318 | - | - | - | - | - |
| 2.1587 | 6800 | 0.0002 | - | - | - | - | - |
| 2.1619 | 6810 | 0.0004 | - | - | - | - | - |
| 2.1651 | 6820 | 0.0004 | - | - | - | - | - |
| 2.1683 | 6830 | 0.005 | - | - | - | - | - |
| 2.1714 | 6840 | 0.0003 | - | - | - | - | - |
| 2.1746 | 6850 | 0.0002 | - | - | - | - | - |
| 2.1778 | 6860 | 0.0008 | - | - | - | - | - |
| 2.1810 | 6870 | 0.0002 | - | - | - | - | - |
| 2.1841 | 6880 | 0.0003 | - | - | - | - | - |
| 2.1873 | 6890 | 0.0 | - | - | - | - | - |
| 2.1905 | 6900 | 0.0001 | - | - | - | - | - |
| 2.1937 | 6910 | 0.0 | - | - | - | - | - |
| 2.1968 | 6920 | 0.001 | - | - | - | - | - |
| 2.2 | 6930 | 0.1066 | - | - | - | - | - |
| 2.2032 | 6940 | 0.002 | - | - | - | - | - |
| 2.2063 | 6950 | 0.0006 | - | - | - | - | - |
| 2.2095 | 6960 | 0.0006 | - | - | - | - | - |
| 2.2127 | 6970 | 0.0 | - | - | - | - | - |
| 2.2159 | 6980 | 0.0005 | - | - | - | - | - |
| 2.2190 | 6990 | 0.0006 | - | - | - | - | - |
| 2.2222 | 7000 | 0.0002 | - | - | - | - | - |
| 2.2254 | 7010 | 0.0001 | - | - | - | - | - |
| 2.2286 | 7020 | 0.0357 | - | - | - | - | - |
| 2.2317 | 7030 | 0.0014 | - | - | - | - | - |
| 2.2349 | 7040 | 0.0007 | - | - | - | - | - |
| 2.2381 | 7050 | 0.0004 | - | - | - | - | - |
| 2.2413 | 7060 | 0.0003 | - | - | - | - | - |
| 2.2444 | 7070 | 0.0018 | - | - | - | - | - |
| 2.2476 | 7080 | 0.07 | - | - | - | - | - |
| 2.2508 | 7090 | 0.0001 | - | - | - | - | - |
| 2.2540 | 7100 | 0.0001 | - | - | - | - | - |
| 2.2571 | 7110 | 0.0002 | - | - | - | - | - |
| 2.2603 | 7120 | 0.024 | - | - | - | - | - |
| 2.2635 | 7130 | 0.0034 | - | - | - | - | - |
| 2.2667 | 7140 | 0.0025 | - | - | - | - | - |
| 2.2698 | 7150 | 0.0001 | - | - | - | - | - |
| 2.2730 | 7160 | 0.0006 | - | - | - | - | - |
| 2.2762 | 7170 | 0.0 | - | - | - | - | - |
| 2.2794 | 7180 | 0.0015 | - | - | - | - | - |
| 2.2825 | 7190 | 0.0024 | - | - | - | - | - |
| 2.2857 | 7200 | 0.2618 | - | - | - | - | - |
| 2.2889 | 7210 | 0.0006 | - | - | - | - | - |
| 2.2921 | 7220 | 0.0001 | - | - | - | - | - |
| 2.2952 | 7230 | 0.0008 | - | - | - | - | - |
| 2.2984 | 7240 | 0.0001 | - | - | - | - | - |
| 2.3016 | 7250 | 0.0 | - | - | - | - | - |
| 2.3048 | 7260 | 0.0016 | - | - | - | - | - |
| 2.3079 | 7270 | 0.0 | - | - | - | - | - |
| 2.3111 | 7280 | 0.0482 | - | - | - | - | - |
| 2.3143 | 7290 | 0.0001 | - | - | - | - | - |
| 2.3175 | 7300 | 0.0 | - | - | - | - | - |
| 2.3206 | 7310 | 0.0 | - | - | - | - | - |
| 2.3238 | 7320 | 0.0259 | - | - | - | - | - |
| 2.3270 | 7330 | 0.0005 | - | - | - | - | - |
| 2.3302 | 7340 | 0.0008 | - | - | - | - | - |
| 2.3333 | 7350 | 0.0063 | - | - | - | - | - |
| 2.3365 | 7360 | 0.0003 | - | - | - | - | - |
| 2.3397 | 7370 | 0.0025 | - | - | - | - | - |
| 2.3429 | 7380 | 0.0215 | - | - | - | - | - |
| 2.3460 | 7390 | 0.1826 | - | - | - | - | - |
| 2.3492 | 7400 | 0.001 | - | - | - | - | - |
| 2.3524 | 7410 | 0.0006 | - | - | - | - | - |
| 2.3556 | 7420 | 0.0 | - | - | - | - | - |
| 2.3587 | 7430 | 0.0 | - | - | - | - | - |
| 2.3619 | 7440 | 0.005 | - | - | - | - | - |
| 2.3651 | 7450 | 0.004 | - | - | - | - | - |
| 2.3683 | 7460 | 0.0 | - | - | - | - | - |
| 2.3714 | 7470 | 0.0003 | - | - | - | - | - |
| 2.3746 | 7480 | 0.0002 | - | - | - | - | - |
| 2.3778 | 7490 | 0.0001 | - | - | - | - | - |
| 2.3810 | 7500 | 0.0024 | - | - | - | - | - |
| 2.3841 | 7510 | 0.0 | - | - | - | - | - |
| 2.3873 | 7520 | 0.0001 | - | - | - | - | - |
| 2.3905 | 7530 | 0.0036 | - | - | - | - | - |
| 2.3937 | 7540 | 0.0007 | - | - | - | - | - |
| 2.3968 | 7550 | 0.0 | - | - | - | - | - |
| 2.4 | 7560 | 0.0001 | - | - | - | - | - |
| 2.4032 | 7570 | 0.0196 | - | - | - | - | - |
| 2.4063 | 7580 | 0.0003 | - | - | - | - | - |
| 2.4095 | 7590 | 0.0042 | - | - | - | - | - |
| 2.4127 | 7600 | 0.0185 | - | - | - | - | - |
| 2.4159 | 7610 | 0.2535 | - | - | - | - | - |
| 2.4190 | 7620 | 0.0 | - | - | - | - | - |
| 2.4222 | 7630 | 0.1162 | - | - | - | - | - |
| 2.4254 | 7640 | 0.0 | - | - | - | - | - |
| 2.4286 | 7650 | 0.0006 | - | - | - | - | - |
| 2.4317 | 7660 | 0.0003 | - | - | - | - | - |
| 2.4349 | 7670 | 0.0004 | - | - | - | - | - |
| 2.4381 | 7680 | 0.0 | - | - | - | - | - |
| 2.4413 | 7690 | 0.0 | - | - | - | - | - |
| 2.4444 | 7700 | 0.0003 | - | - | - | - | - |
| 2.4476 | 7710 | 0.0001 | - | - | - | - | - |
| 2.4508 | 7720 | 0.0016 | - | - | - | - | - |
| 2.4540 | 7730 | 0.0 | - | - | - | - | - |
| 2.4571 | 7740 | 0.001 | - | - | - | - | - |
| 2.4603 | 7750 | 0.0042 | - | - | - | - | - |
| 2.4635 | 7760 | 0.0011 | - | - | - | - | - |
| 2.4667 | 7770 | 0.0 | - | - | - | - | - |
| 2.4698 | 7780 | 0.0002 | - | - | - | - | - |
| 2.4730 | 7790 | 0.0 | - | - | - | - | - |
| 2.4762 | 7800 | 0.0 | - | - | - | - | - |
| 2.4794 | 7810 | 0.0002 | - | - | - | - | - |
| 2.4825 | 7820 | 0.0003 | - | - | - | - | - |
| 2.4857 | 7830 | 0.0072 | - | - | - | - | - |
| 2.4889 | 7840 | 0.0003 | - | - | - | - | - |
| 2.4921 | 7850 | 0.0006 | - | - | - | - | - |
| 2.4952 | 7860 | 0.005 | - | - | - | - | - |
| 2.4984 | 7870 | 0.0243 | - | - | - | - | - |
| 2.5016 | 7880 | 0.0 | - | - | - | - | - |
| 2.5048 | 7890 | 0.0 | - | - | - | - | - |
| 2.5079 | 7900 | 0.0001 | - | - | - | - | - |
| 2.5111 | 7910 | 0.0006 | - | - | - | - | - |
| 2.5143 | 7920 | 0.0002 | - | - | - | - | - |
| 2.5175 | 7930 | 0.0019 | - | - | - | - | - |
| 2.5206 | 7940 | 0.0014 | - | - | - | - | - |
| 2.5238 | 7950 | 0.0001 | - | - | - | - | - |
| 2.5270 | 7960 | 0.0043 | - | - | - | - | - |
| 2.5302 | 7970 | 0.0002 | - | - | - | - | - |
| 2.5333 | 7980 | 0.0 | - | - | - | - | - |
| 2.5365 | 7990 | 0.0044 | - | - | - | - | - |
| 2.5397 | 8000 | 0.001 | - | - | - | - | - |
| 2.5429 | 8010 | 0.0155 | - | - | - | - | - |
| 2.5460 | 8020 | 0.0011 | - | - | - | - | - |
| 2.5492 | 8030 | 0.002 | - | - | - | - | - |
| 2.5524 | 8040 | 0.0 | - | - | - | - | - |
| 2.5556 | 8050 | 0.0048 | - | - | - | - | - |
| 2.5587 | 8060 | 0.0043 | - | - | - | - | - |
| 2.5619 | 8070 | 0.0 | - | - | - | - | - |
| 2.5651 | 8080 | 0.0001 | - | - | - | - | - |
| 2.5683 | 8090 | 0.001 | - | - | - | - | - |
| 2.5714 | 8100 | 0.0004 | - | - | - | - | - |
| 2.5746 | 8110 | 0.0002 | - | - | - | - | - |
| 2.5778 | 8120 | 0.0002 | - | - | - | - | - |
| 2.5810 | 8130 | 0.1305 | - | - | - | - | - |
| 2.5841 | 8140 | 0.0001 | - | - | - | - | - |
| 2.5873 | 8150 | 0.0 | - | - | - | - | - |
| 2.5905 | 8160 | 0.0018 | - | - | - | - | - |
| 2.5937 | 8170 | 0.002 | - | - | - | - | - |
| 2.5968 | 8180 | 0.0001 | - | - | - | - | - |
| 2.6 | 8190 | 0.0007 | - | - | - | - | - |
| 2.6032 | 8200 | 0.0002 | - | - | - | - | - |
| 2.6063 | 8210 | 0.0004 | - | - | - | - | - |
| 2.6095 | 8220 | 0.0005 | - | - | - | - | - |
| 2.6127 | 8230 | 0.0 | - | - | - | - | - |
| 2.6159 | 8240 | 0.0001 | - | - | - | - | - |
| 2.6190 | 8250 | 0.0257 | - | - | - | - | - |
| 2.6222 | 8260 | 0.0001 | - | - | - | - | - |
| 2.6254 | 8270 | 0.0 | - | - | - | - | - |
| 2.6286 | 8280 | 0.0001 | - | - | - | - | - |
| 2.6317 | 8290 | 0.0001 | - | - | - | - | - |
| 2.6349 | 8300 | 0.0009 | - | - | - | - | - |
| 2.6381 | 8310 | 0.0013 | - | - | - | - | - |
| 2.6413 | 8320 | 0.0001 | - | - | - | - | - |
| 2.6444 | 8330 | 0.0 | - | - | - | - | - |
| 2.6476 | 8340 | 0.0 | - | - | - | - | - |
| 2.6508 | 8350 | 0.0 | - | - | - | - | - |
| 2.6540 | 8360 | 0.0003 | - | - | - | - | - |
| 2.6571 | 8370 | 0.0001 | - | - | - | - | - |
| 2.6603 | 8380 | 0.0013 | - | - | - | - | - |
| 2.6635 | 8390 | 0.0001 | - | - | - | - | - |
| 2.6667 | 8400 | 0.0 | - | - | - | - | - |
| 2.6698 | 8410 | 0.0073 | - | - | - | - | - |
| 2.6730 | 8420 | 0.0001 | - | - | - | - | - |
| 2.6762 | 8430 | 0.0003 | - | - | - | - | - |
| 2.6794 | 8440 | 0.0006 | - | - | - | - | - |
| 2.6825 | 8450 | 0.0002 | - | - | - | - | - |
| 2.6857 | 8460 | 0.0004 | - | - | - | - | - |
| 2.6889 | 8470 | 0.0369 | - | - | - | - | - |
| 2.6921 | 8480 | 0.001 | - | - | - | - | - |
| 2.6952 | 8490 | 0.0002 | - | - | - | - | - |
| 2.6984 | 8500 | 0.0 | - | - | - | - | - |
| 2.7016 | 8510 | 0.002 | - | - | - | - | - |
| 2.7048 | 8520 | 0.002 | - | - | - | - | - |
| 2.7079 | 8530 | 0.0025 | - | - | - | - | - |
| 2.7111 | 8540 | 0.0 | - | - | - | - | - |
| 2.7143 | 8550 | 0.0014 | - | - | - | - | - |
| 2.7175 | 8560 | 0.0 | - | - | - | - | - |
| 2.7206 | 8570 | 0.0001 | - | - | - | - | - |
| 2.7238 | 8580 | 0.0007 | - | - | - | - | - |
| 2.7270 | 8590 | 0.0001 | - | - | - | - | - |
| 2.7302 | 8600 | 0.0003 | - | - | - | - | - |
| 2.7333 | 8610 | 0.0007 | - | - | - | - | - |
| 2.7365 | 8620 | 0.0 | - | - | - | - | - |
| 2.7397 | 8630 | 0.0011 | - | - | - | - | - |
| 2.7429 | 8640 | 0.0 | - | - | - | - | - |
| 2.7460 | 8650 | 0.0002 | - | - | - | - | - |
| 2.7492 | 8660 | 0.0115 | - | - | - | - | - |
| 2.7524 | 8670 | 0.0003 | - | - | - | - | - |
| 2.7556 | 8680 | 0.0 | - | - | - | - | - |
| 2.7587 | 8690 | 0.0097 | - | - | - | - | - |
| 2.7619 | 8700 | 0.0199 | - | - | - | - | - |
| 2.7651 | 8710 | 0.0832 | - | - | - | - | - |
| 2.7683 | 8720 | 0.0007 | - | - | - | - | - |
| 2.7714 | 8730 | 0.0011 | - | - | - | - | - |
| 2.7746 | 8740 | 0.0001 | - | - | - | - | - |
| 2.7778 | 8750 | 0.0002 | - | - | - | - | - |
| 2.7810 | 8760 | 0.1405 | - | - | - | - | - |
| 2.7841 | 8770 | 0.0002 | - | - | - | - | - |
| 2.7873 | 8780 | 0.0001 | - | - | - | - | - |
| 2.7905 | 8790 | 0.0013 | - | - | - | - | - |
| 2.7937 | 8800 | 0.0001 | - | - | - | - | - |
| 2.7968 | 8810 | 0.0631 | - | - | - | - | - |
| 2.8 | 8820 | 0.0004 | - | - | - | - | - |
| 2.8032 | 8830 | 0.0 | - | - | - | - | - |
| 2.8063 | 8840 | 0.0 | - | - | - | - | - |
| 2.8095 | 8850 | 0.0 | - | - | - | - | - |
| 2.8127 | 8860 | 0.0 | - | - | - | - | - |
| 2.8159 | 8870 | 0.0012 | - | - | - | - | - |
| 2.8190 | 8880 | 0.0 | - | - | - | - | - |
| 2.8222 | 8890 | 0.0002 | - | - | - | - | - |
| 2.8254 | 8900 | 0.0069 | - | - | - | - | - |
| 2.8286 | 8910 | 0.0132 | - | - | - | - | - |
| 2.8317 | 8920 | 0.0001 | - | - | - | - | - |
| 2.8349 | 8930 | 0.0005 | - | - | - | - | - |
| 2.8381 | 8940 | 0.0019 | - | - | - | - | - |
| 2.8413 | 8950 | 0.0001 | - | - | - | - | - |
| 2.8444 | 8960 | 0.001 | - | - | - | - | - |
| 2.8476 | 8970 | 0.0 | - | - | - | - | - |
| 2.8508 | 8980 | 0.0 | - | - | - | - | - |
| 2.8540 | 8990 | 0.0009 | - | - | - | - | - |
| 2.8571 | 9000 | 0.0049 | - | - | - | - | - |
| 2.8603 | 9010 | 0.0018 | - | - | - | - | - |
| 2.8635 | 9020 | 0.0 | - | - | - | - | - |
| 2.8667 | 9030 | 0.0002 | - | - | - | - | - |
| 2.8698 | 9040 | 0.0006 | - | - | - | - | - |
| 2.8730 | 9050 | 0.0012 | - | - | - | - | - |
| 2.8762 | 9060 | 0.1402 | - | - | - | - | - |
| 2.8794 | 9070 | 0.0005 | - | - | - | - | - |
| 2.8825 | 9080 | 0.0001 | - | - | - | - | - |
| 2.8857 | 9090 | 0.0 | - | - | - | - | - |
| 2.8889 | 9100 | 0.0001 | - | - | - | - | - |
| 2.8921 | 9110 | 0.0035 | - | - | - | - | - |
| 2.8952 | 9120 | 0.0001 | - | - | - | - | - |
| 2.8984 | 9130 | 0.0141 | - | - | - | - | - |
| 2.9016 | 9140 | 0.0456 | - | - | - | - | - |
| 2.9048 | 9150 | 0.0001 | - | - | - | - | - |
| 2.9079 | 9160 | 0.0 | - | - | - | - | - |
| 2.9111 | 9170 | 0.0001 | - | - | - | - | - |
| 2.9143 | 9180 | 0.0001 | - | - | - | - | - |
| 2.9175 | 9190 | 0.0 | - | - | - | - | - |
| 2.9206 | 9200 | 0.0 | - | - | - | - | - |
| 2.9238 | 9210 | 0.0007 | - | - | - | - | - |
| 2.9270 | 9220 | 0.0002 | - | - | - | - | - |
| 2.9302 | 9230 | 0.0 | - | - | - | - | - |
| 2.9333 | 9240 | 0.0001 | - | - | - | - | - |
| 2.9365 | 9250 | 0.0006 | - | - | - | - | - |
| 2.9397 | 9260 | 0.0005 | - | - | - | - | - |
| 2.9429 | 9270 | 0.0 | - | - | - | - | - |
| 2.9460 | 9280 | 0.0001 | - | - | - | - | - |
| 2.9492 | 9290 | 0.0 | - | - | - | - | - |
| 2.9524 | 9300 | 0.0002 | - | - | - | - | - |
| 2.9556 | 9310 | 0.0 | - | - | - | - | - |
| 2.9587 | 9320 | 0.0004 | - | - | - | - | - |
| 2.9619 | 9330 | 0.0002 | - | - | - | - | - |
| 2.9651 | 9340 | 0.0006 | - | - | - | - | - |
| 2.9683 | 9350 | 0.0 | - | - | - | - | - |
| 2.9714 | 9360 | 0.0001 | - | - | - | - | - |
| 2.9746 | 9370 | 0.0012 | - | - | - | - | - |
| 2.9778 | 9380 | 0.009 | - | - | - | - | - |
| 2.9810 | 9390 | 0.0 | - | - | - | - | - |
| 2.9841 | 9400 | 0.02 | - | - | - | - | - |
| 2.9873 | 9410 | 0.0001 | - | - | - | - | - |
| 2.9905 | 9420 | 0.0003 | - | - | - | - | - |
| 2.9937 | 9430 | 0.0 | - | - | - | - | - |
| 2.9968 | 9440 | 0.0006 | - | - | - | - | - |
| **3.0** | **9450** | **0.0001** | **0.7783** | **0.7725** | **0.7705** | **0.7601** | **0.7515** |
| 3.0032 | 9460 | 0.0 | - | - | - | - | - |
| 3.0063 | 9470 | 0.0 | - | - | - | - | - |
| 3.0095 | 9480 | 0.0 | - | - | - | - | - |
| 3.0127 | 9490 | 0.0 | - | - | - | - | - |
| 3.0159 | 9500 | 0.0 | - | - | - | - | - |
| 3.0190 | 9510 | 0.0017 | - | - | - | - | - |
| 3.0222 | 9520 | 0.0018 | - | - | - | - | - |
| 3.0254 | 9530 | 0.0001 | - | - | - | - | - |
| 3.0286 | 9540 | 0.0001 | - | - | - | - | - |
| 3.0317 | 9550 | 0.0088 | - | - | - | - | - |
| 3.0349 | 9560 | 0.0 | - | - | - | - | - |
| 3.0381 | 9570 | 0.0 | - | - | - | - | - |
| 3.0413 | 9580 | 0.0002 | - | - | - | - | - |
| 3.0444 | 9590 | 0.0001 | - | - | - | - | - |
| 3.0476 | 9600 | 0.0001 | - | - | - | - | - |
| 3.0508 | 9610 | 0.0001 | - | - | - | - | - |
| 3.0540 | 9620 | 0.509 | - | - | - | - | - |
| 3.0571 | 9630 | 0.0 | - | - | - | - | - |
| 3.0603 | 9640 | 0.0 | - | - | - | - | - |
| 3.0635 | 9650 | 0.0003 | - | - | - | - | - |
| 3.0667 | 9660 | 0.0 | - | - | - | - | - |
| 3.0698 | 9670 | 0.0 | - | - | - | - | - |
| 3.0730 | 9680 | 0.0 | - | - | - | - | - |
| 3.0762 | 9690 | 0.0028 | - | - | - | - | - |
| 3.0794 | 9700 | 0.0015 | - | - | - | - | - |
| 3.0825 | 9710 | 0.2634 | - | - | - | - | - |
| 3.0857 | 9720 | 0.007 | - | - | - | - | - |
| 3.0889 | 9730 | 0.0002 | - | - | - | - | - |
| 3.0921 | 9740 | 0.0001 | - | - | - | - | - |
| 3.0952 | 9750 | 0.0001 | - | - | - | - | - |
| 3.0984 | 9760 | 0.0 | - | - | - | - | - |
| 3.1016 | 9770 | 0.0001 | - | - | - | - | - |
| 3.1048 | 9780 | 0.0065 | - | - | - | - | - |
| 3.1079 | 9790 | 0.0001 | - | - | - | - | - |
| 3.1111 | 9800 | 0.0 | - | - | - | - | - |
| 3.1143 | 9810 | 0.0001 | - | - | - | - | - |
| 3.1175 | 9820 | 0.0001 | - | - | - | - | - |
| 3.1206 | 9830 | 0.0002 | - | - | - | - | - |
| 3.1238 | 9840 | 0.0 | - | - | - | - | - |
| 3.1270 | 9850 | 0.0001 | - | - | - | - | - |
| 3.1302 | 9860 | 0.0 | - | - | - | - | - |
| 3.1333 | 9870 | 0.0008 | - | - | - | - | - |
| 3.1365 | 9880 | 0.0002 | - | - | - | - | - |
| 3.1397 | 9890 | 0.0 | - | - | - | - | - |
| 3.1429 | 9900 | 0.0001 | - | - | - | - | - |
| 3.1460 | 9910 | 0.0001 | - | - | - | - | - |
| 3.1492 | 9920 | 0.0002 | - | - | - | - | - |
| 3.1524 | 9930 | 0.0 | - | - | - | - | - |
| 3.1556 | 9940 | 0.0005 | - | - | - | - | - |
| 3.1587 | 9950 | 0.0 | - | - | - | - | - |
| 3.1619 | 9960 | 0.0001 | - | - | - | - | - |
| 3.1651 | 9970 | 0.0 | - | - | - | - | - |
| 3.1683 | 9980 | 0.0 | - | - | - | - | - |
| 3.1714 | 9990 | 0.0005 | - | - | - | - | - |
| 3.1746 | 10000 | 0.0009 | - | - | - | - | - |
| 3.1778 | 10010 | 0.0001 | - | - | - | - | - |
| 3.1810 | 10020 | 0.0013 | - | - | - | - | - |
| 3.1841 | 10030 | 0.0002 | - | - | - | - | - |
| 3.1873 | 10040 | 0.0001 | - | - | - | - | - |
| 3.1905 | 10050 | 0.0002 | - | - | - | - | - |
| 3.1937 | 10060 | 0.0016 | - | - | - | - | - |
| 3.1968 | 10070 | 0.0 | - | - | - | - | - |
| 3.2 | 10080 | 0.0001 | - | - | - | - | - |
| 3.2032 | 10090 | 0.0 | - | - | - | - | - |
| 3.2063 | 10100 | 0.0021 | - | - | - | - | - |
| 3.2095 | 10110 | 0.0005 | - | - | - | - | - |
| 3.2127 | 10120 | 0.0323 | - | - | - | - | - |
| 3.2159 | 10130 | 0.0011 | - | - | - | - | - |
| 3.2190 | 10140 | 0.0005 | - | - | - | - | - |
| 3.2222 | 10150 | 0.0001 | - | - | - | - | - |
| 3.2254 | 10160 | 0.0001 | - | - | - | - | - |
| 3.2286 | 10170 | 0.0002 | - | - | - | - | - |
| 3.2317 | 10180 | 0.0013 | - | - | - | - | - |
| 3.2349 | 10190 | 0.0002 | - | - | - | - | - |
| 3.2381 | 10200 | 0.0003 | - | - | - | - | - |
| 3.2413 | 10210 | 0.0 | - | - | - | - | - |
| 3.2444 | 10220 | 0.0004 | - | - | - | - | - |
| 3.2476 | 10230 | 0.0001 | - | - | - | - | - |
| 3.2508 | 10240 | 0.1051 | - | - | - | - | - |
| 3.2540 | 10250 | 0.0003 | - | - | - | - | - |
| 3.2571 | 10260 | 0.0 | - | - | - | - | - |
| 3.2603 | 10270 | 0.0005 | - | - | - | - | - |
| 3.2635 | 10280 | 0.0065 | - | - | - | - | - |
| 3.2667 | 10290 | 0.0001 | - | - | - | - | - |
| 3.2698 | 10300 | 0.0004 | - | - | - | - | - |
| 3.2730 | 10310 | 0.0001 | - | - | - | - | - |
| 3.2762 | 10320 | 0.0009 | - | - | - | - | - |
| 3.2794 | 10330 | 0.0 | - | - | - | - | - |
| 3.2825 | 10340 | 0.0 | - | - | - | - | - |
| 3.2857 | 10350 | 0.0 | - | - | - | - | - |
| 3.2889 | 10360 | 0.0 | - | - | - | - | - |
| 3.2921 | 10370 | 0.0 | - | - | - | - | - |
| 3.2952 | 10380 | 0.003 | - | - | - | - | - |
| 3.2984 | 10390 | 0.0668 | - | - | - | - | - |
| 3.3016 | 10400 | 0.0 | - | - | - | - | - |
| 3.3048 | 10410 | 0.0002 | - | - | - | - | - |
| 3.3079 | 10420 | 0.0 | - | - | - | - | - |
| 3.3111 | 10430 | 0.0 | - | - | - | - | - |
| 3.3143 | 10440 | 0.0014 | - | - | - | - | - |
| 3.3175 | 10450 | 0.0 | - | - | - | - | - |
| 3.3206 | 10460 | 0.0 | - | - | - | - | - |
| 3.3238 | 10470 | 0.0 | - | - | - | - | - |
| 3.3270 | 10480 | 0.0003 | - | - | - | - | - |
| 3.3302 | 10490 | 0.0001 | - | - | - | - | - |
| 3.3333 | 10500 | 0.0 | - | - | - | - | - |
| 3.3365 | 10510 | 0.0001 | - | - | - | - | - |
| 3.3397 | 10520 | 0.0011 | - | - | - | - | - |
| 3.3429 | 10530 | 0.0039 | - | - | - | - | - |
| 3.3460 | 10540 | 0.0003 | - | - | - | - | - |
| 3.3492 | 10550 | 0.0 | - | - | - | - | - |
| 3.3524 | 10560 | 0.2692 | - | - | - | - | - |
| 3.3556 | 10570 | 0.0007 | - | - | - | - | - |
| 3.3587 | 10580 | 0.0001 | - | - | - | - | - |
| 3.3619 | 10590 | 0.0008 | - | - | - | - | - |
| 3.3651 | 10600 | 0.0002 | - | - | - | - | - |
| 3.3683 | 10610 | 0.0 | - | - | - | - | - |
| 3.3714 | 10620 | 0.0004 | - | - | - | - | - |
| 3.3746 | 10630 | 0.0 | - | - | - | - | - |
| 3.3778 | 10640 | 0.0001 | - | - | - | - | - |
| 3.3810 | 10650 | 0.0001 | - | - | - | - | - |
| 3.3841 | 10660 | 0.0163 | - | - | - | - | - |
| 3.3873 | 10670 | 0.0097 | - | - | - | - | - |
| 3.3905 | 10680 | 0.0003 | - | - | - | - | - |
| 3.3937 | 10690 | 0.0 | - | - | - | - | - |
| 3.3968 | 10700 | 0.0003 | - | - | - | - | - |
| 3.4 | 10710 | 0.0311 | - | - | - | - | - |
| 3.4032 | 10720 | 0.3813 | - | - | - | - | - |
| 3.4063 | 10730 | 0.0001 | - | - | - | - | - |
| 3.4095 | 10740 | 0.0001 | - | - | - | - | - |
| 3.4127 | 10750 | 0.0001 | - | - | - | - | - |
| 3.4159 | 10760 | 0.0 | - | - | - | - | - |
| 3.4190 | 10770 | 0.0129 | - | - | - | - | - |
| 3.4222 | 10780 | 0.0 | - | - | - | - | - |
| 3.4254 | 10790 | 0.0 | - | - | - | - | - |
| 3.4286 | 10800 | 0.0008 | - | - | - | - | - |
| 3.4317 | 10810 | 0.0001 | - | - | - | - | - |
| 3.4349 | 10820 | 0.0005 | - | - | - | - | - |
| 3.4381 | 10830 | 0.0001 | - | - | - | - | - |
| 3.4413 | 10840 | 0.0029 | - | - | - | - | - |
| 3.4444 | 10850 | 0.0 | - | - | - | - | - |
| 3.4476 | 10860 | 0.002 | - | - | - | - | - |
| 3.4508 | 10870 | 0.0016 | - | - | - | - | - |
| 3.4540 | 10880 | 0.0015 | - | - | - | - | - |
| 3.4571 | 10890 | 0.0 | - | - | - | - | - |
| 3.4603 | 10900 | 0.0001 | - | - | - | - | - |
| 3.4635 | 10910 | 0.0004 | - | - | - | - | - |
| 3.4667 | 10920 | 0.0 | - | - | - | - | - |
| 3.4698 | 10930 | 0.0081 | - | - | - | - | - |
| 3.4730 | 10940 | 0.0 | - | - | - | - | - |
| 3.4762 | 10950 | 0.0001 | - | - | - | - | - |
| 3.4794 | 10960 | 0.0 | - | - | - | - | - |
| 3.4825 | 10970 | 0.0001 | - | - | - | - | - |
| 3.4857 | 10980 | 0.0 | - | - | - | - | - |
| 3.4889 | 10990 | 0.0002 | - | - | - | - | - |
| 3.4921 | 11000 | 0.0001 | - | - | - | - | - |
| 3.4952 | 11010 | 0.0 | - | - | - | - | - |
| 3.4984 | 11020 | 0.0003 | - | - | - | - | - |
| 3.5016 | 11030 | 0.0015 | - | - | - | - | - |
| 3.5048 | 11040 | 0.0766 | - | - | - | - | - |
| 3.5079 | 11050 | 0.0001 | - | - | - | - | - |
| 3.5111 | 11060 | 0.0001 | - | - | - | - | - |
| 3.5143 | 11070 | 0.0001 | - | - | - | - | - |
| 3.5175 | 11080 | 0.0 | - | - | - | - | - |
| 3.5206 | 11090 | 0.0 | - | - | - | - | - |
| 3.5238 | 11100 | 0.0 | - | - | - | - | - |
| 3.5270 | 11110 | 0.0001 | - | - | - | - | - |
| 3.5302 | 11120 | 0.0621 | - | - | - | - | - |
| 3.5333 | 11130 | 0.0065 | - | - | - | - | - |
| 3.5365 | 11140 | 0.0001 | - | - | - | - | - |
| 3.5397 | 11150 | 0.0002 | - | - | - | - | - |
| 3.5429 | 11160 | 0.0016 | - | - | - | - | - |
| 3.5460 | 11170 | 0.0009 | - | - | - | - | - |
| 3.5492 | 11180 | 0.0008 | - | - | - | - | - |
| 3.5524 | 11190 | 0.0063 | - | - | - | - | - |
| 3.5556 | 11200 | 0.0001 | - | - | - | - | - |
| 3.5587 | 11210 | 0.0 | - | - | - | - | - |
| 3.5619 | 11220 | 0.0002 | - | - | - | - | - |
| 3.5651 | 11230 | 0.0001 | - | - | - | - | - |
| 3.5683 | 11240 | 0.0001 | - | - | - | - | - |
| 3.5714 | 11250 | 0.0001 | - | - | - | - | - |
| 3.5746 | 11260 | 0.0003 | - | - | - | - | - |
| 3.5778 | 11270 | 0.0002 | - | - | - | - | - |
| 3.5810 | 11280 | 0.0001 | - | - | - | - | - |
| 3.5841 | 11290 | 0.0 | - | - | - | - | - |
| 3.5873 | 11300 | 0.0044 | - | - | - | - | - |
| 3.5905 | 11310 | 0.0003 | - | - | - | - | - |
| 3.5937 | 11320 | 0.0001 | - | - | - | - | - |
| 3.5968 | 11330 | 0.0012 | - | - | - | - | - |
| 3.6 | 11340 | 0.0097 | - | - | - | - | - |
| 3.6032 | 11350 | 0.0 | - | - | - | - | - |
| 3.6063 | 11360 | 0.0 | - | - | - | - | - |
| 3.6095 | 11370 | 0.0154 | - | - | - | - | - |
| 3.6127 | 11380 | 0.0002 | - | - | - | - | - |
| 3.6159 | 11390 | 0.0001 | - | - | - | - | - |
| 3.6190 | 11400 | 0.0006 | - | - | - | - | - |
| 3.6222 | 11410 | 0.0001 | - | - | - | - | - |
| 3.6254 | 11420 | 0.0005 | - | - | - | - | - |
| 3.6286 | 11430 | 0.0 | - | - | - | - | - |
| 3.6317 | 11440 | 0.0003 | - | - | - | - | - |
| 3.6349 | 11450 | 0.0003 | - | - | - | - | - |
| 3.6381 | 11460 | 0.0017 | - | - | - | - | - |
| 3.6413 | 11470 | 0.0 | - | - | - | - | - |
| 3.6444 | 11480 | 0.0001 | - | - | - | - | - |
| 3.6476 | 11490 | 0.0 | - | - | - | - | - |
| 3.6508 | 11500 | 0.0029 | - | - | - | - | - |
| 3.6540 | 11510 | 0.0031 | - | - | - | - | - |
| 3.6571 | 11520 | 0.0023 | - | - | - | - | - |
| 3.6603 | 11530 | 0.0001 | - | - | - | - | - |
| 3.6635 | 11540 | 0.0024 | - | - | - | - | - |
| 3.6667 | 11550 | 0.0 | - | - | - | - | - |
| 3.6698 | 11560 | 0.0403 | - | - | - | - | - |
| 3.6730 | 11570 | 0.0 | - | - | - | - | - |
| 3.6762 | 11580 | 0.0 | - | - | - | - | - |
| 3.6794 | 11590 | 0.0005 | - | - | - | - | - |
| 3.6825 | 11600 | 0.0002 | - | - | - | - | - |
| 3.6857 | 11610 | 0.0024 | - | - | - | - | - |
| 3.6889 | 11620 | 0.0 | - | - | - | - | - |
| 3.6921 | 11630 | 0.0011 | - | - | - | - | - |
| 3.6952 | 11640 | 0.0 | - | - | - | - | - |
| 3.6984 | 11650 | 0.0002 | - | - | - | - | - |
| 3.7016 | 11660 | 0.0423 | - | - | - | - | - |
| 3.7048 | 11670 | 0.0 | - | - | - | - | - |
| 3.7079 | 11680 | 0.0 | - | - | - | - | - |
| 3.7111 | 11690 | 0.0003 | - | - | - | - | - |
| 3.7143 | 11700 | 0.0 | - | - | - | - | - |
| 3.7175 | 11710 | 0.0001 | - | - | - | - | - |
| 3.7206 | 11720 | 0.0002 | - | - | - | - | - |
| 3.7238 | 11730 | 0.0015 | - | - | - | - | - |
| 3.7270 | 11740 | 0.0 | - | - | - | - | - |
| 3.7302 | 11750 | 0.0001 | - | - | - | - | - |
| 3.7333 | 11760 | 0.0006 | - | - | - | - | - |
| 3.7365 | 11770 | 0.0004 | - | - | - | - | - |
| 3.7397 | 11780 | 0.0 | - | - | - | - | - |
| 3.7429 | 11790 | 0.0002 | - | - | - | - | - |
| 3.7460 | 11800 | 0.0004 | - | - | - | - | - |
| 3.7492 | 11810 | 0.0029 | - | - | - | - | - |
| 3.7524 | 11820 | 0.0001 | - | - | - | - | - |
| 3.7556 | 11830 | 0.0001 | - | - | - | - | - |
| 3.7587 | 11840 | 0.0 | - | - | - | - | - |
| 3.7619 | 11850 | 0.0005 | - | - | - | - | - |
| 3.7651 | 11860 | 0.0078 | - | - | - | - | - |
| 3.7683 | 11870 | 0.0 | - | - | - | - | - |
| 3.7714 | 11880 | 0.0001 | - | - | - | - | - |
| 3.7746 | 11890 | 0.0003 | - | - | - | - | - |
| 3.7778 | 11900 | 0.0 | - | - | - | - | - |
| 3.7810 | 11910 | 0.0001 | - | - | - | - | - |
| 3.7841 | 11920 | 0.0037 | - | - | - | - | - |
| 3.7873 | 11930 | 0.0 | - | - | - | - | - |
| 3.7905 | 11940 | 0.0 | - | - | - | - | - |
| 3.7937 | 11950 | 0.298 | - | - | - | - | - |
| 3.7968 | 11960 | 0.0 | - | - | - | - | - |
| 3.8 | 11970 | 0.0006 | - | - | - | - | - |
| 3.8032 | 11980 | 0.0003 | - | - | - | - | - |
| 3.8063 | 11990 | 0.0002 | - | - | - | - | - |
| 3.8095 | 12000 | 0.0001 | - | - | - | - | - |
| 3.8127 | 12010 | 0.0835 | - | - | - | - | - |
| 3.8159 | 12020 | 0.0054 | - | - | - | - | - |
| 3.8190 | 12030 | 0.0026 | - | - | - | - | - |
| 3.8222 | 12040 | 0.0289 | - | - | - | - | - |
| 3.8254 | 12050 | 0.0004 | - | - | - | - | - |
| 3.8286 | 12060 | 0.0003 | - | - | - | - | - |
| 3.8317 | 12070 | 0.0 | - | - | - | - | - |
| 3.8349 | 12080 | 0.0002 | - | - | - | - | - |
| 3.8381 | 12090 | 0.0002 | - | - | - | - | - |
| 3.8413 | 12100 | 0.0 | - | - | - | - | - |
| 3.8444 | 12110 | 0.0156 | - | - | - | - | - |
| 3.8476 | 12120 | 0.0633 | - | - | - | - | - |
| 3.8508 | 12130 | 0.0 | - | - | - | - | - |
| 3.8540 | 12140 | 0.0 | - | - | - | - | - |
| 3.8571 | 12150 | 0.0 | - | - | - | - | - |
| 3.8603 | 12160 | 0.0006 | - | - | - | - | - |
| 3.8635 | 12170 | 0.0001 | - | - | - | - | - |
| 3.8667 | 12180 | 0.0004 | - | - | - | - | - |
| 3.8698 | 12190 | 0.0003 | - | - | - | - | - |
| 3.8730 | 12200 | 0.0001 | - | - | - | - | - |
| 3.8762 | 12210 | 0.0 | - | - | - | - | - |
| 3.8794 | 12220 | 0.0001 | - | - | - | - | - |
| 3.8825 | 12230 | 0.0001 | - | - | - | - | - |
| 3.8857 | 12240 | 0.0003 | - | - | - | - | - |
| 3.8889 | 12250 | 0.0 | - | - | - | - | - |
| 3.8921 | 12260 | 0.0001 | - | - | - | - | - |
| 3.8952 | 12270 | 0.1166 | - | - | - | - | - |
| 3.8984 | 12280 | 0.3643 | - | - | - | - | - |
| 3.9016 | 12290 | 0.0004 | - | - | - | - | - |
| 3.9048 | 12300 | 0.0001 | - | - | - | - | - |
| 3.9079 | 12310 | 0.0095 | - | - | - | - | - |
| 3.9111 | 12320 | 0.0003 | - | - | - | - | - |
| 3.9143 | 12330 | 0.0003 | - | - | - | - | - |
| 3.9175 | 12340 | 0.0174 | - | - | - | - | - |
| 3.9206 | 12350 | 0.0 | - | - | - | - | - |
| 3.9238 | 12360 | 0.0 | - | - | - | - | - |
| 3.9270 | 12370 | 0.0003 | - | - | - | - | - |
| 3.9302 | 12380 | 0.0 | - | - | - | - | - |
| 3.9333 | 12390 | 0.0001 | - | - | - | - | - |
| 3.9365 | 12400 | 0.0 | - | - | - | - | - |
| 3.9397 | 12410 | 0.0 | - | - | - | - | - |
| 3.9429 | 12420 | 0.0 | - | - | - | - | - |
| 3.9460 | 12430 | 0.0001 | - | - | - | - | - |
| 3.9492 | 12440 | 0.0001 | - | - | - | - | - |
| 3.9524 | 12450 | 0.0 | - | - | - | - | - |
| 3.9556 | 12460 | 0.0 | - | - | - | - | - |
| 3.9587 | 12470 | 0.0005 | - | - | - | - | - |
| 3.9619 | 12480 | 0.0001 | - | - | - | - | - |
| 3.9651 | 12490 | 0.0061 | - | - | - | - | - |
| 3.9683 | 12500 | 0.0006 | - | - | - | - | - |
| 3.9714 | 12510 | 0.0 | - | - | - | - | - |
| 3.9746 | 12520 | 0.0005 | - | - | - | - | - |
| 3.9778 | 12530 | 0.0001 | - | - | - | - | - |
| 3.9810 | 12540 | 0.001 | - | - | - | - | - |
| 3.9841 | 12550 | 0.0051 | - | - | - | - | - |
| 3.9873 | 12560 | 0.0002 | - | - | - | - | - |
| 3.9905 | 12570 | 0.0005 | - | - | - | - | - |
| 3.9937 | 12580 | 0.0 | - | - | - | - | - |
| 3.9968 | 12590 | 0.001 | - | - | - | - | - |
| 4.0 | 12600 | 0.0002 | 0.7771 | 0.7739 | 0.7749 | 0.7568 | 0.7484 |
* The bold row denotes the saved checkpoint.
</details>
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF | mradermacher | "2025-01-13T13:53:20Z" | 424 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:ThomasComics/MS-ManciousWriter-22B-v0.3",
"base_model:quantized:ThomasComics/MS-ManciousWriter-22B-v0.3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-13T11:24:08Z" | ---
base_model: ThomasComics/MS-ManciousWriter-22B-v0.3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ThomasComics/MS-ManciousWriter-22B-v0.3
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
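As a quick sanity check, here is a minimal Python sketch for running one of these quants locally with `llama-cpp-python` (the file name is taken from the table below; the context size and GPU-offload values are illustrative assumptions, not recommendations from this repo):

```python
# Minimal sketch: load an imatrix GGUF quant from this repo with llama-cpp-python.
# Assumes the file was downloaded first, e.g.:
#   huggingface-cli download mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF \
#     MS-ManciousWriter-22B-v0.3.i1-Q4_K_M.gguf --local-dir .
from llama_cpp import Llama

llm = Llama(
    model_path="MS-ManciousWriter-22B-v0.3.i1-Q4_K_M.gguf",
    n_ctx=4096,       # context window; adjust to your memory budget
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU only
)

out = llm("Write the opening line of a gothic short story.", max_tokens=128)
print(out["choices"][0]["text"])
```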
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-IQ1_S.gguf) | i1-IQ1_S | 4.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-IQ1_M.gguf) | i1-IQ1_M | 5.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-IQ2_XS.gguf) | i1-IQ2_XS | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-IQ2_S.gguf) | i1-IQ2_S | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-IQ2_M.gguf) | i1-IQ2_M | 7.7 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q2_K_S.gguf) | i1-Q2_K_S | 7.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q2_K.gguf) | i1-Q2_K | 8.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 8.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-IQ3_XS.gguf) | i1-IQ3_XS | 9.3 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q3_K_S.gguf) | i1-Q3_K_S | 9.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-IQ3_S.gguf) | i1-IQ3_S | 9.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-IQ3_M.gguf) | i1-IQ3_M | 10.2 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q3_K_M.gguf) | i1-Q3_K_M | 10.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q3_K_L.gguf) | i1-Q3_K_L | 11.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-IQ4_XS.gguf) | i1-IQ4_XS | 12.0 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q4_0.gguf) | i1-Q4_0 | 12.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q4_K_S.gguf) | i1-Q4_K_S | 12.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q4_K_M.gguf) | i1-Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q4_1.gguf) | i1-Q4_1 | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q5_K_S.gguf) | i1-Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q5_K_M.gguf) | i1-Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/MS-ManciousWriter-22B-v0.3-i1-GGUF/resolve/main/MS-ManciousWriter-22B-v0.3.i1-Q6_K.gguf) | i1-Q6_K | 18.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
golesheed/whisper-native-elderly-4-dutch | golesheed | "2024-02-06T09:27:32Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"nl",
"base_model:openai/whisper-large-v2",
"base_model:finetune:openai/whisper-large-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-02-06T08:15:34Z" | ---
language:
- nl
license: apache-2.0
base_model: openai/whisper-large-v2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Large V2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3247
- Wer: 13.4709
## Model description
More information needed
## Intended uses & limitations
More information needed
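Pending author-provided usage notes, a minimal transcription sketch with the Hugging Face ASR pipeline is shown below (the audio path is a placeholder; long recordings are handled by chunking):

```python
# Minimal sketch: transcribe Dutch speech with this fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="golesheed/whisper-native-elderly-4-dutch",
    chunk_length_s=30,  # split long recordings into 30 s windows
)

result = asr("audio.wav", generate_kwargs={"language": "dutch", "task": "transcribe"})
print(result["text"])
```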
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5388 | 0.49 | 30 | 0.3297 | 12.2434 |
| 0.2858 | 0.98 | 60 | 0.2893 | 23.3419 |
| 0.143 | 1.48 | 90 | 0.2922 | 13.5327 |
| 0.1337 | 1.97 | 120 | 0.2838 | 10.7065 |
| 0.0606 | 2.46 | 150 | 0.2905 | 10.3765 |
| 0.0557 | 2.95 | 180 | 0.2915 | 10.0258 |
| 0.0265 | 3.44 | 210 | 0.3139 | 10.8613 |
| 0.0207 | 3.93 | 240 | 0.3094 | 10.0670 |
| 0.0098 | 4.43 | 270 | 0.3188 | 12.0578 |
| 0.0098 | 4.92 | 300 | 0.3247 | 13.4709 |
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.15.0
|
HusseinEid/rl_course_vizdoom_health_gathering_supreme | HusseinEid | "2024-04-28T22:02:54Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-28T22:02:46Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.56 +/- 4.15
name: mean_reward
verified: false
---
A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r HusseinEid/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
luaqi/phi_02281 | luaqi | "2025-02-28T02:34:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-28T02:16:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrferr3t/578aa82e-1658-4d5e-9e83-8fdb9069a8da | mrferr3t | "2025-02-02T08:03:45Z" | 12 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-360M",
"base_model:adapter:unsloth/SmolLM-360M",
"license:apache-2.0",
"region:us"
] | null | "2025-02-02T07:29:04Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-360M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 578aa82e-1658-4d5e-9e83-8fdb9069a8da
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-360M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- afdd4a168ab9d01b_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/afdd4a168ab9d01b_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 50
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/578aa82e-1658-4d5e-9e83-8fdb9069a8da
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 99
micro_batch_size: 2
mlflow_experiment_name: /tmp/afdd4a168ab9d01b_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5457da28-ed59-44f3-819c-4944542f77b0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5457da28-ed59-44f3-819c-4944542f77b0
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 578aa82e-1658-4d5e-9e83-8fdb9069a8da
This model is a fine-tuned version of [unsloth/SmolLM-360M](https://huggingface.co/unsloth/SmolLM-360M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3984
## Model description
More information needed
## Intended uses & limitations
More information needed
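A minimal loading sketch is shown below, assuming this repository holds a standard PEFT LoRA adapter for the base model named above (prompt and generation settings are illustrative):

```python
# Minimal sketch: attach this LoRA adapter to the unsloth/SmolLM-360M base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM-360M")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-360M")
model = PeftModel.from_pretrained(base, "mrferr3t/578aa82e-1658-4d5e-9e83-8fdb9069a8da")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```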
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 99
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1246 | 0.0000 | 1 | 1.5221 |
| 1.6258 | 0.0022 | 50 | 1.3984 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
KnutJaegersberg/argument-quality-judge-WizardLM-Uncensored-40b-lora | KnutJaegersberg | "2023-07-31T08:26:33Z" | 0 | 0 | null | [
"license:cc-by-4.0",
"region:us"
] | null | "2023-07-31T06:17:46Z" | ---
license: cc-by-4.0
---
Experimental QLoRA adapters trained on records from
https://zenodo.org/record/3780049
Using as base model:
https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-40b
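A minimal loading sketch, assuming the adapter follows the standard PEFT layout (4-bit quantization and `trust_remote_code` are options chosen here to fit and run the 40B Falcon base, not requirements stated by the author):

```python
# Minimal sketch: attach the argument-quality QLoRA adapter to the Falcon-40B base.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "ehartford/WizardLM-Uncensored-Falcon-40b"
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
    trust_remote_code=True,  # the Falcon base ships custom modeling code
)
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = PeftModel.from_pretrained(
    base, "KnutJaegersberg/argument-quality-judge-WizardLM-Uncensored-40b-lora"
)
# Format inputs with the prompt style shown below, then call model.generate(...).
```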
Prompt style:
You are a witty debater. Your task is to judge the quality of an argument. Is the argument below a good or a bad argument? Take the rhetorical quality, logical quality and dialectical quality of the argument into account. Only respond with your overall judgment as either 'Good' or 'Bad'. \n
Argument:\n
Why when it comes cheaply out the tap would you pay 1,000 times more? Volcanicity perhaps... Take, for instance, Pepsi's Aquafina or Coca-Cola's Dasani bottled water. Both are sold in 20 ounce sizes and can be purchased from vending machines alongside soft drinks - and at the same price. Assuming you can find a $1 machine, that works out to 5 cents an ounce. These two brands are essentially filtered tap water, bottled close to their distribution point. Most municipal water costs less than one cent per gallon. Now consider another widely-sold liquid: gasoline. It has to be pumped out of the ground in the form of crude oil, shipped to a refinery (often halfway across the world), and shipped again to your local filling station. In the U.S., the average price per gallon is hovering around $3. There are 128 ounces in a gallon, which puts the current price of gasoline at a fraction over 2 cents an ounce. And that's why there's no shortage of companies which want to get into the business. In terms of price versus production cost, bottled water puts Big Oil to shame.\n
### Response:\n
Argument Quality:\n
Good |
JayKim83/kisa-fine-tuned4 | JayKim83 | "2024-05-21T06:07:25Z" | 76 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-21T06:01:57Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kixr/kiksy-chatbot-base | kixr | "2024-12-24T09:48:31Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-24T08:50:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Manojajj/bert_resume_parser_fine_tuned | Manojajj | "2024-11-17T18:09:09Z" | 7 | 0 | null | [
"safetensors",
"bert",
"license:apache-2.0",
"region:us"
] | null | "2024-11-17T18:06:57Z" | ---
license: apache-2.0
---
|
kangqi-ni/zephyr-7b-beta_bio-tutor_sft | kangqi-ni | "2024-10-09T18:58:41Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dpo",
"biology",
"education",
"zephyr",
"conversational",
"en",
"arxiv:2402.05000",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-22T18:47:06Z" | ---
license: mit
language:
- en
tags:
- dpo
- biology
- education
- zephyr
---
This model is fine-tuned on zephyr-7b-beta with SFT. The purpose is to develop a more capable educational chatbot that helps students study biology.
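A minimal chat sketch is below (it assumes the tokenizer keeps the Zephyr chat template from the base model; the question and generation settings are illustrative):

```python
# Minimal sketch: ask the SFT-tuned biology tutor a student-style question.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kangqi-ni/zephyr-7b-beta_bio-tutor_sft"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a patient biology tutor."},
    {"role": "user", "content": "Does osmosis move water toward the lower solute concentration?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```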
If you use this work, please cite:
```
@misc{sonkar2024pedagogical,
title={Pedagogical Alignment of Large Language Models},
author={Shashank Sonkar and Kangqi Ni and Sapana Chaudhary and Richard G. Baraniuk},
year={2024},
eprint={2402.05000},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2402.05000}
}
``` |
cwchang/ner_model | cwchang | "2023-12-05T06:45:00Z" | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:wnut_17",
"base_model:distilbert/distilbert-base-multilingual-cased",
"base_model:finetune:distilbert/distilbert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-12-04T10:29:01Z" | ---
license: apache-2.0
base_model: distilbert-base-multilingual-cased
tags:
- generated_from_trainer
datasets:
- wnut_17
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: ner_model
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wnut_17
type: wnut_17
config: wnut_17
split: validation
args: wnut_17
metrics:
- name: Precision
type: precision
value: 0.6122448979591837
- name: Recall
type: recall
value: 0.430622009569378
- name: F1
type: f1
value: 0.5056179775280899
- name: Accuracy
type: accuracy
value: 0.9499141930973114
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ner_model
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the wnut_17 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2729
- Precision: 0.6122
- Recall: 0.4306
- F1: 0.5056
- Accuracy: 0.9499
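A minimal usage sketch with the token-classification pipeline (the label set comes from wnut_17; the example sentence is illustrative):

```python
from transformers import pipeline

# Group sub-word predictions into whole entities with aggregation_strategy="simple"
ner = pipeline("token-classification", model="cwchang/ner_model", aggregation_strategy="simple")
print(ner("Hugging Face was founded in New York City by Clément Delangue."))
```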
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
|
nosenko-mi/Llama-3.2-1B-uk-ext-8e | nosenko-mi | "2024-12-09T10:55:30Z" | 131 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-09T10:53:42Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sbtom/karakuri-midrose-CV | sbtom | "2024-04-17T00:35:06Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"ja",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-16T17:05:06Z" | ---
language:
- ja
pipeline_tag: text-generation
tags:
- merge
---
# karakuri-midrose-CV
Model details are available [here](https://huggingface.co/sbtom/karakuri-midrose-CV.gguf).
|
mradermacher/Mistral-NeuralDPO-v0.6-GGUF | mradermacher | "2024-11-04T21:46:07Z" | 6 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:Novocoders/Mistral-NeuralDPO-v0.6",
"base_model:quantized:Novocoders/Mistral-NeuralDPO-v0.6",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-04T21:30:19Z" | ---
base_model: Novocoders/Mistral-NeuralDPO-v0.6
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Novocoders/Mistral-NeuralDPO-v0.6
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
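For a quick local test, something along these lines should work with the `llama-cpp-python` bindings; the filename must point to one of the quant files listed below (Q4_K_M is used here only as an example):

```python
from llama_cpp import Llama

# Load a downloaded quant from this repo and run a short completion
llm = Llama(model_path="Mistral-NeuralDPO-v0.6.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short haiku about autumn.", max_tokens=128)
print(out["choices"][0]["text"])
```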
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-NeuralDPO-v0.6-GGUF/resolve/main/Mistral-NeuralDPO-v0.6.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
simondh/LunarLander-v2 | simondh | "2023-08-21T14:49:56Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-21T14:49:34Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.86 +/- 20.91
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename below is an assumption; check the repository's files for the exact name.
checkpoint = load_from_hub(repo_id="simondh/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
RichardErkhov/macadeliccc_-_MBX-7B-v3-DPO-4bits | RichardErkhov | "2024-08-29T10:56:11Z" | 5 | 0 | null | [
"safetensors",
"mistral",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-08-29T10:53:41Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MBX-7B-v3-DPO - bnb 4bits
- Model creator: https://huggingface.co/macadeliccc/
- Original model: https://huggingface.co/macadeliccc/MBX-7B-v3-DPO/
Original model description:
---
license: cc
library_name: transformers
datasets:
- jondurbin/truthy-dpo-v0.1
model-index:
- name: MBX-7B-v3-DPO
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 89.11
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.91
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 74.0
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 85.56
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.67
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/MBX-7B-v3-DPO
name: Open LLM Leaderboard
---
# MBX-7B-v3-DPO
This model is a finetune of [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3) using jondurbin/truthy-dpo-v0.1

## Code Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/MBX-7B-v3-DPO")
model = AutoModelForCausalLM.from_pretrained("macadeliccc/MBX-7B-v3-DPO")
messages = [
{"role": "system", "content": "Respond to the users request like a pirate"},
{"role": "user", "content": "Can you write me a quicksort algorithm?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
```
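The snippet above stops at building `gen_input`; to actually produce a reply, a generate/decode step along these lines would typically follow (sampling settings are illustrative):

```python
import torch

# Generate a continuation from the chat-formatted input and decode only the new tokens
with torch.no_grad():
    output_ids = model.generate(gen_input, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output_ids[0][gen_input.shape[-1]:], skip_special_tokens=True))
```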
## Example Output

## GGUF
Available [here](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO-GGUF/tree/main)
## Exllamav2
Quants are available from bartowski, check them out [here](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2)
Download the size you want below, VRAM figures are estimates.
| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2/tree/8_0) | 8.0 | 8.0 | 8.4 GB | 9.8 GB | 11.8 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2/tree/6_5) | 6.5 | 8.0 | 7.2 GB | 8.6 GB | 10.6 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |
| [5_0](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2/tree/5_0) | 5.0 | 6.0 | 6.0 GB | 7.4 GB | 9.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards. |
| [4_25](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2/tree/4_25) | 4.25 | 6.0 | 5.3 GB | 6.7 GB | 8.7 GB | GPTQ equivalent bits per weight, slightly higher quality. |
| [3_5](https://huggingface.co/bartowski/MBX-7B-v3-DPO-exl2/tree/3_5) | 3.5 | 6.0 | 4.7 GB | 6.1 GB | 8.1 GB | Lower quality, only use if you have to. |
## Evaluations
## EQ-Bench Comparison
<pre>----Benchmark Complete----
2024-01-30 15:22:18
Time taken: 145.9 mins
Prompt Format: ChatML
Model: macadeliccc/MBX-7B-v3-DPO
Score (v2): 74.32
Parseable: 166.0
---------------
Batch completed
Time taken: 145.9 mins
---------------
</pre>
### Original Model
<pre>----Benchmark Complete----
2024-01-31 01:26:26
Time taken: 89.1 mins
Prompt Format: Mistral
Model: flemmingmiguel/MBX-7B-v3
Score (v2): 73.87
Parseable: 168.0
---------------
Batch completed
Time taken: 89.1 mins
---------------
</pre>
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|-----------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO)| 45.16| 77.73| 74.62| 48.83| 61.58|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |27.95|± | 2.82|
| | |acc_norm|26.77|± | 2.78|
|agieval_logiqa_en | 0|acc |41.01|± | 1.93|
| | |acc_norm|40.55|± | 1.93|
|agieval_lsat_ar | 0|acc |25.65|± | 2.89|
| | |acc_norm|23.91|± | 2.82|
|agieval_lsat_lr | 0|acc |50.78|± | 2.22|
| | |acc_norm|52.94|± | 2.21|
|agieval_lsat_rc | 0|acc |66.54|± | 2.88|
| | |acc_norm|65.80|± | 2.90|
|agieval_sat_en | 0|acc |77.67|± | 2.91|
| | |acc_norm|77.67|± | 2.91|
|agieval_sat_en_without_passage| 0|acc |43.20|± | 3.46|
| | |acc_norm|43.20|± | 3.46|
|agieval_sat_math | 0|acc |32.27|± | 3.16|
| | |acc_norm|30.45|± | 3.11|
Average: 45.16%
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |68.43|± | 1.36|
| | |acc_norm|68.34|± | 1.36|
|arc_easy | 0|acc |87.54|± | 0.68|
| | |acc_norm|82.11|± | 0.79|
|boolq | 1|acc |88.20|± | 0.56|
|hellaswag | 0|acc |69.76|± | 0.46|
| | |acc_norm|87.40|± | 0.33|
|openbookqa | 0|acc |40.20|± | 2.19|
| | |acc_norm|49.60|± | 2.24|
|piqa | 0|acc |83.68|± | 0.86|
| | |acc_norm|85.36|± | 0.82|
|winogrande | 0|acc |83.11|± | 1.05|
Average: 77.73%
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |58.87|± | 1.72|
| | |mc2 |74.62|± | 1.44|
Average: 74.62%
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|60.00|± | 3.56|
|bigbench_date_understanding | 0|multiple_choice_grade|63.14|± | 2.51|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|47.67|± | 3.12|
|bigbench_geometric_shapes | 0|multiple_choice_grade|22.56|± | 2.21|
| | |exact_str_match | 0.84|± | 0.48|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|33.20|± | 2.11|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|23.00|± | 1.59|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|59.67|± | 2.84|
|bigbench_movie_recommendation | 0|multiple_choice_grade|47.40|± | 2.24|
|bigbench_navigate | 0|multiple_choice_grade|56.10|± | 1.57|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|71.25|± | 1.01|
|bigbench_ruin_names | 0|multiple_choice_grade|56.47|± | 2.35|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|35.27|± | 1.51|
|bigbench_snarks | 0|multiple_choice_grade|73.48|± | 3.29|
|bigbench_sports_understanding | 0|multiple_choice_grade|75.46|± | 1.37|
|bigbench_temporal_sequences | 0|multiple_choice_grade|52.10|± | 1.58|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|22.64|± | 1.18|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|19.83|± | 0.95|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|59.67|± | 2.84|
Average: 48.83%
Average score: 61.58%
Elapsed time: 02:37:39
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__MBX-7B-v3-DPO)
| Metric |Value|
|---------------------------------|----:|
|Avg. |76.13|
|AI2 Reasoning Challenge (25-Shot)|73.55|
|HellaSwag (10-Shot) |89.11|
|MMLU (5-Shot) |64.91|
|TruthfulQA (0-shot) |74.00|
|Winogrande (5-shot) |85.56|
|GSM8k (5-shot) |69.67|
|
mrferr3t/5c8af84a-c463-4e97-81ab-456797866e8b | mrferr3t | "2025-02-06T19:00:56Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-350m",
"base_model:adapter:facebook/opt-350m",
"license:other",
"region:us"
] | null | "2025-02-06T18:22:30Z" | ---
library_name: peft
license: other
base_model: facebook/opt-350m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5c8af84a-c463-4e97-81ab-456797866e8b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: facebook/opt-350m
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- 9c6192c81aab012d_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c6192c81aab012d_train_data.json
type:
field_input: ''
field_instruction: instruction
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
early_stopping_threshold: 0.0001
eval_max_new_tokens: 128
eval_steps: 150
eval_strategy: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/5c8af84a-c463-4e97-81ab-456797866e8b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0004
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 150
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps:
micro_batch_size: 32
mlflow_experiment_name: /tmp/9c6192c81aab012d_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 100
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: /workspace/hub_repo/last-checkpoint
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: 0
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode:
wandb_name: d3628ff6-15f3-4303-be81-2406c09d5eb0
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d3628ff6-15f3-4303-be81-2406c09d5eb0
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 5c8af84a-c463-4e97-81ab-456797866e8b
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7595
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log | 0.0094 | 1 | 2.3533 |
| 4.3974 | 1.5094 | 160 | 1.9243 |
| 3.8798 | 3.0189 | 320 | 1.8332 |
| 3.636 | 4.5283 | 480 | 1.7927 |
| 3.4973 | 6.0377 | 640 | 1.7694 |
| 3.3521 | 7.5472 | 800 | 1.7598 |
| 3.2664 | 9.0566 | 960 | 1.7597 |
| 3.1551 | 10.5660 | 1120 | 1.7539 |
| 3.0939 | 12.0755 | 1280 | 1.7567 |
| 3.0241 | 13.5849 | 1440 | 1.7581 |
| 2.962 | 15.0943 | 1600 | 1.7595 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ibm-ai-platform/llama3-70b-accelerator | ibm-ai-platform | "2024-08-29T15:34:52Z" | 2,469 | 5 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mlp_speculator",
"license:llama3",
"endpoints_compatible",
"region:us"
] | null | "2024-07-24T20:48:55Z" | ---
license: llama3
---
## Installation from source
```bash
git clone https://github.com/foundation-model-stack/fms-extras
cd fms-extras
pip install -e .
```
## Description
This model is intended to be used as an accelerator for [llama3 70b (instruct)](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct) and takes inspiration
from the Medusa speculative decoding architecture. It is also applicable for [llama3.1 70b (instruct)](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct).
This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts
a single token in the draft based on both a state vector and sampled token
from the prior stage (the base model can be considered stage 0).
The state vector from the base model provides contextual information to the accelerator,
while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.
Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference.
Training is light-weight and can be completed in only a few days depending on base model size and speed.
## Repository Links
1. [Paged Attention KV-Cache / Speculator](https://github.com/foundation-model-stack/fms-extras)
2. [Production Server with speculative decoding](https://github.com/IBM/text-generation-inference.git)
3. [Speculator training](https://github.com/foundation-model-stack/fms-fsdp/pull/35)
## Samples
_Note: For all samples, your environment must have access to cuda_
### Use in IBM Production TGIS
*To try this out running in a production-like environment, please use the pre-built docker image:*
#### Setup
```bash
HF_HUB_CACHE=/hf_hub_cache
chmod a+w $HF_HUB_CACHE
HF_HUB_TOKEN="your huggingface hub token"
TGIS_IMAGE=quay.io/wxpe/text-gen-server:main.ddc56ee
docker pull $TGIS_IMAGE
# optionally download llama3-70b-instruct if the weights do not already exist
docker run --rm \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
$TGIS_IMAGE \
text-generation-server download-weights \
meta-llama/Meta-Llama-3-70B-Instruct \
--token $HF_HUB_TOKEN
# optionally download the speculator model if the weights do not already exist
docker run --rm \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
$TGIS_IMAGE \
text-generation-server download-weights \
ibm-fms/llama3-70b-accelerator \
--token $HF_HUB_TOKEN
# note: if the weights were downloaded separately (not with the above commands), please place them in the HF_HUB_CACHE directory and refer to them with /models/<model_name>
docker run -d --rm --gpus all \
--name my-tgis-server \
-p 8033:8033 \
-v $HF_HUB_CACHE:/models \
-e HF_HUB_CACHE=/models \
-e TRANSFORMERS_CACHE=/models \
-e MODEL_NAME=meta-llama/Meta-Llama-3-70B-Instruct \
-e SPECULATOR_NAME=ibm-fms/llama3-70b-accelerator \
-e FLASH_ATTENTION=true \
-e PAGED_ATTENTION=true \
-e DTYPE=float16 \
$TGIS_IMAGE
# check logs and wait for "gRPC server started on port 8033" and "HTTP server started on port 3000"
docker logs my-tgis-server -f
# get the client sample (Note: The first prompt will take longer as there is a warmup time)
conda create -n tgis-client-env python=3.11
conda activate tgis-client-env
git clone --branch main --single-branch https://github.com/IBM/text-generation-inference.git
cd text-generation-inference/integration_tests
make gen-client
pip install . --no-cache-dir
```
#### Run Sample
```bash
python sample_client.py
```
_Note: first prompt may be slower as there is a slight warmup time_
### Use in Huggingface TGI
#### start the server
```bash
model=ibm-fms/llama3-70b-accelerator
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:latest --model-id $model
```
_note: for tensor parallel, add --num-shard_
#### make a request
```bash
curl 127.0.0.1:8080/generate_stream \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
-H 'Content-Type: application/json'
```
### Use in vLLM
```python
from vllm import LLM, SamplingParams
# Sample prompts.
prompts = [
"The president of the United States is",
]
# Create a sampling params object.
sampling_params = SamplingParams(temperature=0.0)
# Create an LLM.
llm = LLM(
model="/path/to/Meta-Llama-3-70B-Instruct",
tensor_parallel_size=4,
speculative_model="/path/to/llama3-70b-accelerator",
speculative_draft_tensor_parallel_size=1,
use_v2_block_manager=True,
)
# Generate texts from the prompts. The output is a list of RequestOutput objects
# that contain the prompt, generated text, and other information.
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
``` |
mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF | mradermacher | "2024-11-15T10:38:52Z" | 27 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:zzzmahesh/Flowable-Docs-Qwen-2.5-Coder-7B",
"base_model:quantized:zzzmahesh/Flowable-Docs-Qwen-2.5-Coder-7B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-15T10:21:44Z" | ---
base_model: zzzmahesh/Flowable-Docs-Qwen-2.5-Coder-7B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zzzmahesh/Flowable-Docs-Qwen-2.5-Coder-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Flowable-Docs-Qwen-2.5-Coder-7B-GGUF/resolve/main/Flowable-Docs-Qwen-2.5-Coder-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
SashaSheykina/xLNet-finetuned-cXg-nl-to-code | SashaSheykina | "2024-07-29T13:23:07Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlnet",
"text-generation",
"Text Generation",
"generated_from_trainer",
"base_model:xlnet/xlnet-base-cased",
"base_model:finetune:xlnet/xlnet-base-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-29T12:45:22Z" | ---
license: mit
base_model: xlnet-base-cased
tags:
- Text Generation
- generated_from_trainer
metrics:
- rouge
- bleu
model-index:
- name: xLNet-finetuned-cXg-nl-to-code
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xLNet-finetuned-cXg-nl-to-code
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 12.2852
- Rouge1: 0.0799
- Rouge2: 0.0062
- Rougel: 0.0596
- Bleu: 0.8853
- Meteor: 0.1244
- Codebleu: 0.2037
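The card does not document the exact prompt format used during fine-tuning; as a hedged sketch, generation can be tried with the standard text-generation pipeline:

```python
from transformers import pipeline

# The prompt below is illustrative; the fine-tuning input format is not documented in this card
generator = pipeline("text-generation", model="SashaSheykina/xLNet-finetuned-cXg-nl-to-code")
print(generator("Write a Python function that reverses a string.", max_new_tokens=64)[0]["generated_text"])
```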
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.42.4
- Pytorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
shrish23/Flan-t5-finetuned-dialogsum | shrish23 | "2024-11-22T00:20:20Z" | 115 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-11-22T00:06:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
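Based on the model name (a Flan-T5 checkpoint fine-tuned on DialogSum), a plausible starting point is the text2text-generation pipeline; the prompt style below is an assumption, not documented in this card:

```python
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="shrish23/Flan-t5-finetuned-dialogsum")
dialogue = "#Person1#: Hi, I'd like to book a table for two tonight. #Person2#: Sure, what time works for you?"
# Instruction-style prefix is a guess at the expected input format
print(summarizer("Summarize the following conversation:\n" + dialogue, max_new_tokens=60)[0]["generated_text"])
```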
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
twidfeel/bert-base-uncased-issues-128 | twidfeel | "2023-06-20T05:59:49Z" | 116 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-06-20T05:25:18Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1883
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.1094 | 1.0 | 291 | 1.6856 |
| 1.6364 | 2.0 | 582 | 1.3676 |
| 1.4818 | 3.0 | 873 | 1.4158 |
| 1.397 | 4.0 | 1164 | 1.4260 |
| 1.3407 | 5.0 | 1455 | 1.2725 |
| 1.2883 | 6.0 | 1746 | 1.3102 |
| 1.2308 | 7.0 | 2037 | 1.2178 |
| 1.2122 | 8.0 | 2328 | 1.2875 |
| 1.179 | 9.0 | 2619 | 1.2713 |
| 1.1501 | 10.0 | 2910 | 1.2187 |
| 1.1253 | 11.0 | 3201 | 1.2641 |
| 1.0996 | 12.0 | 3492 | 1.1546 |
| 1.0925 | 13.0 | 3783 | 1.1543 |
| 1.077 | 14.0 | 4074 | 1.0697 |
| 1.0653 | 15.0 | 4365 | 1.2503 |
| 1.0676 | 16.0 | 4656 | 1.1883 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
csukuangfj/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2 | csukuangfj | "2024-01-26T04:33:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-01-26T04:13:32Z" | This repo is forked from
https://huggingface.co/luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2
Note: This recipe is trained with the codes from this PR https://github.com/k2-fsa/icefall/pull/355
And the SpecAugment codes from this PR https://github.com/lhotse-speech/lhotse/pull/604.
# Pre-trained Transducer-Stateless2 models for the Aidatatang_200zh dataset with icefall.
The model was trained on full [Aidatatang_200zh](https://www.openslr.org/62) with the scripts in [icefall](https://github.com/k2-fsa/icefall) based on the latest version k2.
## Training procedure
The main repositories are listed below; we will update the training and decoding scripts as new versions are released.
k2: https://github.com/k2-fsa/k2
icefall: https://github.com/k2-fsa/icefall
lhotse: https://github.com/lhotse-speech/lhotse
* Install k2 and lhotse. The k2 installation guide is at https://k2.readthedocs.io/en/latest/installation/index.html, and the lhotse guide is at https://lhotse.readthedocs.io/en/latest/getting-started.html#installation. The latest versions should be fine. Please also install the requirements listed in icefall.
* Clone icefall (https://github.com/k2-fsa/icefall) and check out the commit shown above.
```
git clone https://github.com/k2-fsa/icefall
cd icefall
```
* Prepare the data.
```
cd egs/aidatatang_200zh/ASR
bash ./prepare.sh
```
* Training
```
export CUDA_VISIBLE_DEVICES="0,1"
./pruned_transducer_stateless2/train.py \
--world-size 2 \
--num-epochs 30 \
--start-epoch 0 \
--exp-dir pruned_transducer_stateless2/exp \
--lang-dir data/lang_char \
--max-duration 250
```
## Evaluation results
The decoding results (WER%) on Aidatatang_200zh (dev and test) are listed below; we obtained these results by averaging models from epochs 11 to 29.
The WERs are
| | dev | test | comment |
|------------------------------------|------------|------------|------------------------------------------|
| greedy search | 5.53 | 6.59 | --epoch 29, --avg 19, --max-duration 100 |
| modified beam search (beam size 4) | 5.28 | 6.32 | --epoch 29, --avg 19, --max-duration 100 |
| fast beam search (set as default) | 5.29 | 6.33 | --epoch 29, --avg 19, --max-duration 1500|
|
mike-krk/q-FrozenLake-v1-4x4-noSlippery | mike-krk | "2023-12-02T19:51:06Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-12-02T19:50:55Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="mike-krk/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
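A possible follow-up, assuming the pickle holds the Deep RL course dictionary format (with a `qtable` key) and a gymnasium-style step API; both are assumptions:

```python
import numpy as np

# Greedy roll-out with the loaded Q-table; dictionary key and reset/step signatures are assumptions
state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```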
|
helenblake13/first-baseline-780-2374 | helenblake13 | "2024-01-07T00:33:36Z" | 0 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-01-07T00:29:42Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### first_baseline2 Dreambooth model trained by helenblake13 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
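Outside of the A1111 Colab, the checkpoint should also load with `diffusers`; the instance prompt below is a guess based on the concept name:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("helenblake13/first-baseline-780-2374", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of first_baseline2").images[0]  # prompt token is an assumption
image.save("sample.png")
```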
Sample pictures of this concept:
|
tsavage68/UTI_L3_1000steps_1e7rate_SFT | tsavage68 | "2024-06-06T03:53:12Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-29T13:51:40Z" | ---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: UTI_L3_1000steps_1e7rate_SFT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# UTI_L3_1000steps_1e7rate_SFT
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6055
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.4485 | 0.3333 | 25 | 2.4666 |
| 2.4645 | 0.6667 | 50 | 2.4522 |
| 2.452 | 1.0 | 75 | 2.4164 |
| 2.391 | 1.3333 | 100 | 2.3529 |
| 2.2816 | 1.6667 | 125 | 2.2866 |
| 2.175 | 2.0 | 150 | 2.2255 |
| 2.2168 | 2.3333 | 175 | 2.1683 |
| 2.1574 | 2.6667 | 200 | 2.1166 |
| 2.1107 | 3.0 | 225 | 2.0679 |
| 2.0126 | 3.3333 | 250 | 2.0229 |
| 1.9353 | 3.6667 | 275 | 1.9810 |
| 1.9552 | 4.0 | 300 | 1.9445 |
| 1.9759 | 4.3333 | 325 | 1.9100 |
| 1.8721 | 4.6667 | 350 | 1.8773 |
| 1.8928 | 5.0 | 375 | 1.8491 |
| 1.8331 | 5.3333 | 400 | 1.8236 |
| 1.8221 | 5.6667 | 425 | 1.7980 |
| 1.7615 | 6.0 | 450 | 1.7762 |
| 1.7701 | 6.3333 | 475 | 1.7562 |
| 1.7034 | 6.6667 | 500 | 1.7327 |
| 1.7471 | 7.0 | 525 | 1.7064 |
| 1.7317 | 7.3333 | 550 | 1.6831 |
| 1.6897 | 7.6667 | 575 | 1.6645 |
| 1.6452 | 8.0 | 600 | 1.6476 |
| 1.6675 | 8.3333 | 625 | 1.6327 |
| 1.569 | 8.6667 | 650 | 1.6238 |
| 1.705 | 9.0 | 675 | 1.6163 |
| 1.6025 | 9.3333 | 700 | 1.6121 |
| 1.6224 | 9.6667 | 725 | 1.6083 |
| 1.6976 | 10.0 | 750 | 1.6074 |
| 1.6031 | 10.3333 | 775 | 1.6059 |
| 1.5703 | 10.6667 | 800 | 1.6046 |
| 1.6563 | 11.0 | 825 | 1.6055 |
| 1.6464 | 11.3333 | 850 | 1.6059 |
| 1.6075 | 11.6667 | 875 | 1.6055 |
| 1.6453 | 12.0 | 900 | 1.6057 |
| 1.5754 | 12.3333 | 925 | 1.6054 |
| 1.5962 | 12.6667 | 950 | 1.6055 |
| 1.6333 | 13.0 | 975 | 1.6055 |
| 1.6086 | 13.3333 | 1000 | 1.6055 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.0.0+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
|
SuperkingbasSKB/adapter_ThaiSC_LLM_Scamper | SuperkingbasSKB | "2024-04-30T18:17:18Z" | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:openthaigpt/openthaigpt-1.0.0-7b-chat",
"base_model:adapter:openthaigpt/openthaigpt-1.0.0-7b-chat",
"license:apache-2.0",
"region:us"
] | null | "2024-04-30T18:16:35Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: openthaigpt/openthaigpt-1.0.0-7b-chat
model-index:
- name: adapter_ThaiSC_LLM_Scamper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# adapter_ThaiSC_LLM_Scamper
This model is a fine-tuned version of [openthaigpt/openthaigpt-1.0.0-7b-chat](https://huggingface.co/openthaigpt/openthaigpt-1.0.0-7b-chat) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0
- Pytorch 2.2.1+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2 |
hamzawaheed/emotion-classification-model | hamzawaheed | "2024-11-21T02:30:28Z" | 102 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"emotion-classification",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-11T01:47:39Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- emotion-classification
- text-classification
- distilbert
metrics:
- accuracy
---
# emotion-classification-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased).
It achieves the following results on the evaluation set:
- **Loss:** 0.1789
- **Accuracy:** 0.931
## Model Description
The **Emotion Classification Model** is a fine-tuned version of the `distilbert-base-uncased` transformer architecture, adapted specifically for classifying text into six distinct emotions. DistilBERT, a distilled version of BERT, offers a lightweight yet powerful foundation, enabling efficient training and inference without significant loss in performance.
This model leverages the pre-trained language understanding capabilities of DistilBERT to accurately categorize textual data into the following emotion classes:
- **Sadness**
- **Joy**
- **Love**
- **Anger**
- **Fear**
- **Surprise**
By fine-tuning on the `dair-ai/emotion` dataset, the model has been optimized to recognize and differentiate subtle emotional cues in various text inputs, making it suitable for applications that require nuanced sentiment analysis and emotional intelligence.
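A minimal usage sketch with the text-classification pipeline (the example sentence is illustrative):

```python
from transformers import pipeline

# Returns the predicted emotion label and its score for the input text
classifier = pipeline("text-classification", model="hamzawaheed/emotion-classification-model")
print(classifier("I can't believe you remembered my birthday, this made my whole week!"))
```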
## Intended Uses & Limitations
### Intended Uses
The Emotion Classification Model is designed for a variety of applications where understanding the emotional tone of text is crucial. Suitable use cases include:
- **Sentiment Analysis:** Gauging customer feedback, reviews, and social media posts to understand emotional responses.
- **Social Media Analysis:** Tracking and analyzing emotional trends and public sentiment across platforms like Twitter, Facebook, and Instagram.
- **Content Recommendation:** Enhancing recommendation systems by aligning content suggestions with users' current emotional states.
- **Chatbots and Virtual Assistants:** Enabling more empathetic and emotionally aware interactions with users.
### Limitations
While the Emotion Classification Model demonstrates strong performance across various tasks, it has certain limitations:
- **Bias in Training Data:** The model may inherit biases present in the `dair-ai/emotion` dataset, potentially affecting its performance across different demographics, cultures, or contexts.
- **Contextual Understanding:** The model analyzes text in isolation and may struggle with understanding nuanced emotions that depend on broader conversational context or preceding interactions.
- **Language Constraints:** Currently optimized for English, limiting its effectiveness with multilingual or non-English inputs without further training or adaptation.
- **Emotion Overlap:** Some emotions have overlapping linguistic cues, which may lead to misclassifications in ambiguous text scenarios.
- **Dependence on Text Quality:** The model's performance can degrade with poorly structured, slang-heavy, or highly informal text inputs.
## Training and Evaluation Data
### Dataset
The model was trained and evaluated on the [`dair-ai/emotion`](https://huggingface.co/datasets/dair-ai/emotion) dataset, a comprehensive collection of textual data annotated for emotion classification.
### Dataset Statistics
- **Total Samples:** 20,000
- **Training Set:** 16,000 samples
- **Validation Set:** 2,000 samples
- **Test Set:** 2,000 samples
### Data Preprocessing
Prior to training, the dataset underwent the following preprocessing steps:
1. **Tokenization:** Utilized the `DistilBertTokenizerFast` from the `distilbert-base-uncased` model to tokenize the input text. Each text sample was converted into token IDs, ensuring compatibility with the DistilBERT architecture.
2. **Padding & Truncation:** Applied padding and truncation to maintain a uniform sequence length of 32 tokens. This step ensures efficient batching and consistent input dimensions for the model.
3. **Batch Processing:** Employed parallel processing using all available CPU cores minus one to expedite the tokenization process across training, validation, and test sets.
4. **Format Conversion:** Converted the tokenized datasets into PyTorch tensors to facilitate seamless integration with the PyTorch-based `Trainer` API.
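Taken together, the steps above can be sketched roughly as follows (a minimal sketch assuming the `datasets` and `transformers` libraries; variable names and the exact `map` call are illustrative rather than the original training script):

```python
import os

from datasets import load_dataset
from transformers import DistilBertTokenizerFast

dataset = load_dataset("dair-ai/emotion")  # train / validation / test splits
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Pad and truncate every sample to a uniform length of 32 tokens
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=32)

# Parallel tokenization on all available CPU cores minus one
tokenized = dataset.map(tokenize, batched=True, num_proc=max(1, os.cpu_count() - 1))

# Convert to PyTorch tensors for the Trainer API
tokenized.set_format("torch", columns=["input_ids", "attention_mask", "label"])
```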
### Evaluation Metrics
The model's performance was assessed using the following metrics:
- **Accuracy:** Measures the proportion of correctly predicted samples out of the total samples.
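For reference, an accuracy metric wired into the `Trainer` typically looks like the sketch below (the original `compute_metrics` implementation is not published, so this is an assumption):

```python
import numpy as np
from sklearn.metrics import accuracy_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)  # highest-scoring emotion class per sample
    return {"accuracy": accuracy_score(labels, predictions)}
```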
## Training Procedure
### Training Hyperparameters
The following hyperparameters were used during training:
- **Learning Rate:** `6e-05`
- **Training Batch Size:** `16` per device
- **Evaluation Batch Size:** `32` per device
- **Number of Epochs:** `2`
- **Weight Decay:** `0.01`
- **Gradient Accumulation Steps:** `2` (effectively simulating a batch size of `32`)
- **Mixed Precision Training:** Enabled (Native AMP) if CUDA is available
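As a rough illustration, these values map onto `TrainingArguments` as sketched below; the output directory and the `fp16` guard are assumptions rather than the original training script:

```python
import torch
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./emotion-classification-model",   # assumed output path
    learning_rate=6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    num_train_epochs=2,
    weight_decay=0.01,
    gradient_accumulation_steps=2,                 # effective batch size of 32
    fp16=torch.cuda.is_available(),                # mixed precision only when a GPU is present
    save_strategy="epoch",
    save_total_limit=2,                            # keep only the two most recent checkpoints
    logging_dir="./logs",
    logging_steps=10,
    report_to="tensorboard",
)
```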
### Optimization Strategies
- **Mixed Precision Training:** Utilized PyTorch's Native AMP to accelerate training and reduce memory consumption when a CUDA-enabled GPU is available.
- **Gradient Accumulation:** Implemented gradient accumulation with `2` steps to effectively increase the batch size without exceeding GPU memory limits.
- **Checkpointing:** Configured to save model checkpoints at the end of each epoch, retaining only the two most recent checkpoints to manage storage efficiently.
### Training Duration
- **Total Training Time:** Approximately `2.40` minutes
### Logging and Monitoring
- **Logging Directory:** `./logs`
- **Logging Steps:** Every `10` steps
- **Reporting To:** TensorBoard
- **Tools Used:** TensorBoard for real-time visualization of training metrics, including loss and accuracy.
### Training Results
After training, the model achieved the following performance metrics:
- **Validation Accuracy:** `93.10%`
- **Test Accuracy:** `93.10%`
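The published checkpoint can be used directly with the standard pipeline API; note that predicted labels may appear as raw class ids (e.g. `LABEL_1`) if `id2label` was not configured in the checkpoint:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="hamzawaheed/emotion-classification-model",
)

print(classifier("I can't believe how wonderful today turned out!"))
# e.g. [{'label': 'joy', 'score': 0.99}] -- scores shown here are illustrative
```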
|
leeharok/llama-3-8b-chat-doctor | leeharok | "2024-10-28T08:15:06Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-23T02:41:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alwanrahmana/indobert-large-p2_abscon | alwanrahmana | "2024-06-03T07:10:38Z" | 33 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-03T07:10:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aienthuguy/test_case_assistant | aienthuguy | "2024-04-02T08:32:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-02T06:07:20Z" | ---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
jrobles98/starcoderbase-3b-personal-copilot-peft | jrobles98 | "2023-11-21T16:46:18Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:bigcode/starcoderbase-3b",
"base_model:adapter:bigcode/starcoderbase-3b",
"region:us"
] | null | "2023-11-21T16:44:19Z" | ---
library_name: peft
base_model: bigcode/starcoderbase-3b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
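The repository metadata identifies this as a PEFT (LoRA) adapter for `bigcode/starcoderbase-3b`; a minimal loading sketch under that assumption (dtype, device placement, and the prompt are illustrative, and the gated base model may require authentication) is:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained(
    "bigcode/starcoderbase-3b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderbase-3b")

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base_model, "jrobles98/starcoderbase-3b-personal-copilot-peft")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```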
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.3.dev0
|
julian5383/word_ethical | julian5383 | "2023-09-01T07:21:44Z" | 114 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"ko",
"dataset:kowiki",
"dataset:news",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-09-01T07:20:31Z" | ---
language: ko
datasets:
- kowiki
- news
---
deeqBERT-base
---
- model: bert-base
- vocab: bert-wordpiece, 35k
- version: latest
|
zaimazarnaz14/dummy-whisper | zaimazarnaz14 | "2024-05-07T17:16:51Z" | 147 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-07T17:15:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Romain-XV/f0e65db1-9e2c-40dc-961b-bb25593ee140 | Romain-XV | "2025-01-21T15:06:37Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:adapter:NousResearch/Meta-Llama-3-8B",
"license:other",
"region:us"
] | null | "2025-01-21T14:38:59Z" | ---
library_name: peft
license: other
base_model: NousResearch/Meta-Llama-3-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f0e65db1-9e2c-40dc-961b-bb25593ee140
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Meta-Llama-3-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9f860ccb7e806546_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9f860ccb7e806546_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 30
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/f0e65db1-9e2c-40dc-961b-bb25593ee140
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: true
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
lr_scheduler: cosine
micro_batch_size: 4
mlflow_experiment_name: /tmp/9f860ccb7e806546_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
sequence_len: 2048
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 92906d73-ae0c-43b3-9735-14fe2124bf2a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 92906d73-ae0c-43b3-9735-14fe2124bf2a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f0e65db1-9e2c-40dc-961b-bb25593ee140
This model is a fine-tuned version of [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: ADAMW_BNB (8-bit AdamW from bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.1138 | 0.0191 | 1 | 4.2181 |
| 1.9066 | 0.9558 | 50 | 1.8719 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
csikasote/whisper-medium-swagen-combined-15hrs-model | csikasote | "2025-01-05T11:02:21Z" | 21 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:swagen",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-01-04T09:13:58Z" | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- swagen
metrics:
- wer
model-index:
- name: whisper-medium-swagen-combined-15hrs-model
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: swagen
type: swagen
metrics:
- name: Wer
type: wer
value: 0.27171266233766234
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-swagen-combined-15hrs-model
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the swagen dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4103
- Wer: 0.2717
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 2.6268 | 0.1654 | 200 | 0.8031 | 0.4605 |
| 2.0712 | 0.3308 | 400 | 0.6148 | 0.3829 |
| 1.7302 | 0.4962 | 600 | 0.5562 | 0.3490 |
| 1.5735 | 0.6616 | 800 | 0.5103 | 0.3106 |
| 1.5623 | 0.8270 | 1000 | 0.4683 | 0.2776 |
| 1.2713 | 0.9924 | 1200 | 0.4439 | 0.2688 |
| 0.7209 | 1.1571 | 1400 | 0.4601 | 0.2732 |
| 0.6856 | 1.3225 | 1600 | 0.4391 | 0.2595 |
| 0.7661 | 1.4879 | 1800 | 0.4396 | 0.2755 |
| 0.8113 | 1.6533 | 2000 | 0.4262 | 0.2643 |
| 0.77 | 1.8187 | 2200 | 0.4175 | 0.2679 |
| 0.6942 | 1.9841 | 2400 | 0.4103 | 0.2717 |
| 0.2814 | 2.1489 | 2600 | 0.4295 | 0.2617 |
| 0.3171 | 2.3142 | 2800 | 0.4301 | 0.2432 |
| 0.3495 | 2.4796 | 3000 | 0.4299 | 0.2526 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
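For completeness, transcription with this checkpoint can be sketched with the ASR pipeline (the audio path is a placeholder for a local recording; no chunking or language options from the original evaluation are reproduced here):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="csikasote/whisper-medium-swagen-combined-15hrs-model",
)

result = asr("path/to/audio.wav")  # placeholder path to a local audio file
print(result["text"])
```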
|
enghamdiali/idfc-m1 | enghamdiali | "2024-05-25T13:47:58Z" | 64 | 0 | transformers | [
"transformers",
"safetensors",
"idefics",
"image-text-to-text",
"visual-question-answering",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | visual-question-answering | "2024-05-25T13:40:05Z" | ---
library_name: transformers
pipeline_tag: visual-question-answering
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Srini138/healthcare | Srini138 | "2023-10-16T12:55:40Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"region:us"
] | null | "2023-10-16T12:55:39Z" | ---
library_name: peft
base_model: decapoda-research/llama-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
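Expressed with the current `transformers` API, the settings above correspond roughly to the sketch below; the base model id is taken from this card's front matter (it may no longer be available on the Hub), and device placement is an assumption:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # base model named in the front matter above
    quantization_config=bnb_config,
    device_map="auto",
)
```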
### Framework versions
- PEFT 0.6.0.dev0
|
Fighoture/Llama-2-7b-chat-shortgpt-20-percent | Fighoture | "2024-04-20T00:53:41Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-18T21:28:34Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yyjun/yyjun.KoAlpaca | yyjun | "2023-09-28T09:17:14Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-28T09:17:11Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
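The same configuration in code form (a sketch; the card does not name the adapter's base model, so it is shown as a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "BASE_MODEL_ID",  # placeholder: the base model is not stated on this card
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "yyjun/yyjun.KoAlpaca")
```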
### Framework versions
- PEFT 0.6.0.dev0
|
ibadrehman/ppo-Pyramids | ibadrehman | "2023-02-18T14:33:27Z" | 4 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-02-18T14:33:21Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
library_name: ml-agents
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Write your model_id: ibadrehman/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
huggingtweets/hutaosoulmate | huggingtweets | "2023-03-29T03:49:40Z" | 118 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-03-29T03:43:25Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/hutaosoulmate/1680061774875/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1539749115092934656/WeP6cOjo_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Laurent</div>
<div style="text-align: center; font-size: 14px;">@hutaosoulmate</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Laurent.
| Data | Laurent |
| --- | --- |
| Tweets downloaded | 1181 |
| Retweets | 212 |
| Short tweets | 101 |
| Tweets kept | 868 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/y718bopk/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hutaosoulmate's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/vz2s932i) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/vz2s932i/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hutaosoulmate')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DuongTrongChi/gemma-2b-sft | DuongTrongChi | "2024-04-29T13:14:11Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-04-29T13:14:01Z" | ---
license: gemma
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: google/gemma-2b
datasets:
- generator
model-index:
- name: gemma-2b-sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma-2b-sft
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.2 |
dt-and-vanilla-ardt/ardt-vanilla-robust_train_halfcheetah_level-0109_2214-33 | dt-and-vanilla-ardt | "2023-09-01T23:31:49Z" | 31 | 0 | transformers | [
"transformers",
"pytorch",
"decision_transformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2023-09-01T21:16:41Z" | ---
tags:
- generated_from_trainer
model-index:
- name: ardt-vanilla-robust_train_halfcheetah_level-0109_2214-33
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ardt-vanilla-robust_train_halfcheetah_level-0109_2214-33
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.1.0.dev20230727+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
loubnabnl/Llama-8B-Instruct-Bespoke-H4-GBS500k-lr2e-5 | loubnabnl | "2025-01-25T19:39:54Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"dataset:HuggingFaceH4/Bespoke-Stratos-17k",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-25T17:28:08Z" | ---
base_model: meta-llama/Llama-3.1-8B-Instruct
datasets: HuggingFaceH4/Bespoke-Stratos-17k
library_name: transformers
model_name: Llama-8B-Instruct-Bespoke-H4-GBS500k-lr2e-5
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Llama-8B-Instruct-Bespoke-H4-GBS500k-lr2e-5
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the [HuggingFaceH4/Bespoke-Stratos-17k](https://huggingface.co/datasets/HuggingFaceH4/Bespoke-Stratos-17k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="loubnabnl/Llama-8B-Instruct-Bespoke-H4-GBS500k-lr2e-5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/loubnabnl/huggingface/runs/kmk9wmo5)
This model was trained with SFT.
### Framework versions
- TRL: 0.14.0.dev0
- Transformers: 4.48.1
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MaziyarPanahi/VICIOUS_MESH-12B-BETA-GGUF | MaziyarPanahi | "2024-12-26T22:29:00Z" | 42 | 0 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:bamec66557/VICIOUS_MESH-12B-BETA",
"base_model:quantized:bamec66557/VICIOUS_MESH-12B-BETA",
"region:us",
"conversational"
] | text-generation | "2024-12-26T22:01:30Z" | ---
base_model: bamec66557/VICIOUS_MESH-12B-BETA
inference: false
model_creator: bamec66557
model_name: VICIOUS_MESH-12B-BETA-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/VICIOUS_MESH-12B-BETA-GGUF](https://huggingface.co/MaziyarPanahi/VICIOUS_MESH-12B-BETA-GGUF)
- Model creator: [bamec66557](https://huggingface.co/bamec66557)
- Original model: [bamec66557/VICIOUS_MESH-12B-BETA](https://huggingface.co/bamec66557/VICIOUS_MESH-12B-BETA)
## Description
[MaziyarPanahi/VICIOUS_MESH-12B-BETA-GGUF](https://huggingface.co/MaziyarPanahi/VICIOUS_MESH-12B-BETA-GGUF) contains GGUF format model files for [bamec66557/VICIOUS_MESH-12B-BETA](https://huggingface.co/bamec66557/VICIOUS_MESH-12B-BETA).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
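If you just want to try these files from Python, a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) is shown below. The filename is a placeholder for whichever quant you download from this repo, and the sampling settings are arbitrary.

```python
from llama_cpp import Llama

# Placeholder filename: point this at whichever quant file you downloaded from this repo.
llm = Llama(model_path="VICIOUS_MESH-12B-BETA.Q4_K_M.gguf", n_ctx=4096)

# create_chat_completion applies the chat template stored in the GGUF metadata.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what the GGUF format is in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```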
## Special thanks
Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B | Cas-Warehouse | "2024-06-16T16:39:07Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2212.04089",
"base_model:Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B",
"base_model:merge:Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B",
"base_model:Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B",
"base_model:merge:Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-16T16:35:09Z" | ---
base_model:
- Casual-Autopsy/Llama-3-SOVL-MopeyMule-8B
- Casual-Autopsy/Llama-3-MopeyMule-Blackroot-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [Casual-Autopsy/Llama-3-MopeyMule-Blackroot-8B](https://huggingface.co/Casual-Autopsy/Llama-3-MopeyMule-Blackroot-8B) as a base.
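As a rough intuition for the method (a conceptual sketch only, not mergekit's actual implementation): each fine-tuned model is reduced to a weight delta against the base, the deltas are scaled by their weights and summed, and the result is added back onto the base checkpoint.

```python
import torch

def task_arithmetic_merge(base_state, tuned_states, weights):
    """Conceptual task arithmetic: merged = base + sum_i w_i * (tuned_i - base), per tensor."""
    merged = {}
    for name, base_tensor in base_state.items():
        delta = torch.zeros_like(base_tensor)
        for state, w in zip(tuned_states, weights):
            delta += w * (state[name] - base_tensor)
        merged[name] = base_tensor + delta
    return merged
```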
### Models Merged
The following models were included in the merge:
* [Casual-Autopsy/Llama-3-SOVL-MopeyMule-8B](https://huggingface.co/Casual-Autopsy/Llama-3-SOVL-MopeyMule-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Casual-Autopsy/Llama-3-MopeyMule-Blackroot-8B
layer_range: [0, 32]
parameters:
weight: 0.75
- model: Casual-Autopsy/Llama-3-SOVL-MopeyMule-8B
layer_range: [0, 32]
parameters:
weight: 0.25
merge_method: task_arithmetic
base_model: Casual-Autopsy/Llama-3-MopeyMule-Blackroot-8B
normalize: False
dtype: bfloat16
```
|
TheBloke/llama-2-13B-German-Assistant-v2-AWQ | TheBloke | "2023-11-09T18:20:22Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"de",
"dataset:flozi00/conversations",
"base_model:flozi00/Llama-2-13B-german-assistant-v2",
"base_model:quantized:flozi00/Llama-2-13B-german-assistant-v2",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] | text-generation | "2023-09-19T05:55:06Z" | ---
language:
- en
- de
license: llama2
datasets:
- flozi00/conversations
model_name: Llama 2 13B German Assistant v2
base_model: flozi00/Llama-2-13B-german-assistant-v2
inference: false
model_creator: Florian Zimmermeister
model_type: llama
prompt_template: '<|prompter|>{prompt}<|endoftext|><|assistant|>
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Llama 2 13B German Assistant v2 - AWQ
- Model creator: [Florian Zimmermeister](https://huggingface.co/flozi00)
- Original model: [Llama 2 13B German Assistant v2](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2)
<!-- description start -->
## Description
This repo contains AWQ model files for [flozi00's Llama 2 13B German Assistant v2](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2).
Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files!
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.
It is also now supported by continuous batching server [vLLM](https://github.com/vllm-project/vllm), allowing use of AWQ models for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models, however using AWQ enables using much smaller GPUs which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-GGUF)
* [Florian Zimmermeister's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/flozi00/Llama-2-13B-german-assistant-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: OpenAssistant
```
<|prompter|>{prompt}<|endoftext|><|assistant|>
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters
For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/llama-2-13B-German-Assistant-v2-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- When using vLLM as a server, pass the `--quantization awq` parameter, for example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/llama-2-13B-German-Assistant-v2-AWQ --quantization awq
```
When using vLLM from Python code, pass the `quantization=awq` parameter, for example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Hello, my name is",
"The president of the United States is",
"The capital of France is",
"The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/llama-2-13B-German-Assistant-v2-AWQ", quantization="awq")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code
### Install the necessary packages
Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later
```shell
pip3 install autoawq
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### You can then try the following example code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_name_or_path = "TheBloke/llama-2-13B-German-Assistant-v2-AWQ"
# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)
prompt = "Tell me about AI"
prompt_template=f'''<|prompter|>{prompt}<|endoftext|><|assistant|>
'''
print("\n\n*** Generate:")
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
# Generate output
generation_output = model.generate(
tokens,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
max_new_tokens=512
)
print("Output: ", tokenizer.decode(generation_output[0]))
# Inference can also be done using transformers' pipeline
from transformers import pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), and [vLLM](https://github.com/vllm-project/vllm).
[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: flozi00's Llama 2 13B German Assistant v2
## This project is sponsored by [primeline-solutions.com](https://www.primeline-solutions.com/de/server/nach-einsatzzweck/gpu-rendering-hpc/)
Please Use V3 of this model instead
# Model Card
This model is a finetuned version for German instructions and conversations in the style of Open Assistant tokens: "<|prompter|>", "<|endoftext|>", "<|assistant|>".
The dataset used is deduplicated and cleaned, with no code inside. The focus is on instruction following and conversational tasks.
The model architecture is based on Llama version 2 with 13B parameters, trained on 100% renewable-energy-powered hardware.
This work is contributed by private research of [flozi00](https://huggingface.co/flozi00)
|
sauc-abadal-lloret/bert-base-uncased-es-sentiment-analysis | sauc-abadal-lloret | "2023-10-26T06:53:22Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:muchocine",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-26T06:42:40Z" | ---
base_model: dccuchile/bert-base-spanish-wwm-uncased
tags:
- generated_from_trainer
datasets:
- muchocine
metrics:
- accuracy
model-index:
- name: bert-base-uncased-es-sentiment-analysis
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: muchocine
type: muchocine
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.792258064516129
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-es-sentiment-analysis
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the muchocine dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9713
- Accuracy: 0.7923
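Since the usage sections below are auto-generated placeholders, here is a hedged inference sketch with the standard `transformers` pipeline API; the example review is made up, and the emitted label names depend on this model's `id2label` config.

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="sauc-abadal-lloret/bert-base-uncased-es-sentiment-analysis",
)

# Made-up Spanish movie review; muchocine reviews are star-rated, so expect a rating-like label.
print(clf("Una película preciosa, con unas actuaciones memorables."))
```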
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.541 | 1.0 | 49 | 0.4618 | 0.7781 |
| 0.3157 | 2.0 | 98 | 0.4989 | 0.7742 |
| 0.1294 | 3.0 | 147 | 0.6931 | 0.8 |
| 0.0541 | 4.0 | 196 | 0.8284 | 0.7935 |
| 0.0254 | 5.0 | 245 | 0.9713 | 0.7923 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
JunxiongWang/Mamba2InLlama_0_75 | JunxiongWang | "2024-09-02T15:48:07Z" | 24 | 0 | null | [
"pytorch",
"llama",
"alignment-handbook",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"dataset:HuggingFaceH4/orca_dpo_pairs",
"dataset:JunxiongWang/llama3-ultrafeedback-armorm",
"arxiv:2408.15237",
"region:us"
] | null | "2024-08-22T18:36:42Z" | ---
base_model: JunxiongWang/llama3_0_75_mamba2_sft
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/orca_dpo_pairs
- JunxiongWang/llama3-ultrafeedback-armorm
model-index:
- name: JunxiongWang/Mamba2InLlama_0_75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
Please check [here](https://github.com/jxiw/MambaInLlama/tree/main) for details.
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/junxiong12/huggingface/runs/24l27qc0)
# JunxiongWang/Mamba2InLlama_0_75
This model is a fine-tuned version of [JunxiongWang/llama3_0_75_mamba2_sft](https://huggingface.co/JunxiongWang/llama3_0_75_mamba2_sft) on the HuggingFaceH4/ultrafeedback_binarized, the HuggingFaceH4/orca_dpo_pairs and the JunxiongWang/llama3-ultrafeedback-armorm datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4695
- Rewards/chosen: -1.5489
- Rewards/rejected: -2.8730
- Rewards/accuracies: 0.8107
- Rewards/margins: 1.3240
- Logps/rejected: -589.1575
- Logps/chosen: -449.6615
- Logits/rejected: 1.1678
- Logits/chosen: 1.2259
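For context on the metric names above (general DPO background, not anything specific to this run): the reward in DPO is implicit, computed from the log-probability ratio between the policy and the frozen reference model, and `rewards/margins` is the gap between the chosen and rejected rewards that the loss pushes apart.

```latex
% Implicit reward and loss in DPO (Rafailov et al., 2023)
r_\theta(x, y) = \beta \, \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}
\qquad
\mathcal{L}_{\mathrm{DPO}} = -\log \sigma\big( r_\theta(x, y_{\mathrm{chosen}}) - r_\theta(x, y_{\mathrm{rejected}}) \big)
```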
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 32
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.494 | 0.4798 | 2000 | 0.4938 | -1.4838 | -2.6084 | 0.7911 | 1.1246 | -562.7021 | -443.1515 | 1.1609 | 1.2167 |
| 0.4911 | 0.9597 | 4000 | 0.4695 | -1.5489 | -2.8730 | 0.8107 | 1.3240 | -589.1575 | -449.6615 | 1.1678 | 1.2259 |
### Framework versions
- Transformers 4.43.1
- Pytorch 2.1.1+cu118
- Datasets 2.20.0
- Tokenizers 0.19.1
[MambaInLlama](https://arxiv.org/abs/2408.15237)
```
@article{junxiongdaniele2024mambainllama,
title = {The Mamba in the Llama: Distilling and Accelerating Hybrid Models},
author = {Junxiong Wang and Daniele Paliotta and Avner May and Alexander M. Rush and Tri Dao},
journal = {arXiv preprint arXiv:2408.15237},
year = {2024}
}
``` |
jssky/24e07f90-3c9d-4c98-9cd6-f12f6cedb649 | jssky | "2025-02-11T19:50:46Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Phi-3-mini-4k-instruct",
"base_model:adapter:unsloth/Phi-3-mini-4k-instruct",
"license:mit",
"region:us"
] | null | "2025-02-11T19:41:37Z" | ---
library_name: peft
license: mit
base_model: unsloth/Phi-3-mini-4k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 24e07f90-3c9d-4c98-9cd6-f12f6cedb649
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.6.0`
```yaml
adapter: lora
base_model: unsloth/Phi-3-mini-4k-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1ce3a256847d0bf0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1ce3a256847d0bf0_train_data.json
type:
field_instruction: CogVLM
field_output: GT_Caption_GPT4O
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: jssky/24e07f90-3c9d-4c98-9cd6-f12f6cedb649
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1500
micro_batch_size: 2
mlflow_experiment_name: /tmp/1ce3a256847d0bf0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 58e6a8ee-3331-4c87-8b34-f00d5d00f92d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 58e6a8ee-3331-4c87-8b34-f00d5d00f92d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 24e07f90-3c9d-4c98-9cd6-f12f6cedb649
This model is a fine-tuned version of [unsloth/Phi-3-mini-4k-instruct](https://huggingface.co/unsloth/Phi-3-mini-4k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3805
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 555
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4027 | 0.2507 | 139 | 1.4092 |
| 1.3524 | 0.5014 | 278 | 1.3900 |
| 1.3942 | 0.7520 | 417 | 1.3805 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3 |
Nekochu/Llama-3.1-8B-french-DPO | Nekochu | "2024-09-25T19:05:36Z" | 12 | 1 | peft | [
"peft",
"safetensors",
"llama",
"llama-factory",
"lora",
"fr",
"en",
"dataset:Snit/french-conversation",
"dataset:Nekochu/novel17_train_alpaca_format",
"dataset:bofenghuang/vigogne",
"dataset:MaziyarPanahi/french_instruct_human_sharegpt",
"dataset:jpacifico/French-Alpaca-dataset-Instruct-110K",
"dataset:jpacifico/french-orca-dpo-pairs-revised",
"base_model:NousResearch/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:NousResearch/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2024-08-12T14:03:06Z" | ---
license: apache-2.0
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
library_name: peft
tags:
- llama-factory
- lora
datasets:
- Snit/french-conversation
- Nekochu/novel17_train_alpaca_format
- bofenghuang/vigogne
- MaziyarPanahi/french_instruct_human_sharegpt
- jpacifico/French-Alpaca-dataset-Instruct-110K
- jpacifico/french-orca-dpo-pairs-revised
language:
- fr
- en
---
- Similar to the old [Nekochu/Llama-2-13B-fp16-french](https://huggingface.co/Nekochu/Llama-2-13B-fp16-french) with additional datasets.
- I've (as always) kept the LoRA `QLoRA_french_dpo` separate so it can be applied to any *LLaMA-3.1-8B* fine-tuned model, though this may affect performance (see the loading sketch below).
- Quants: exl2 [2.4bpw-h6](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/2.4bpw-h6), [4.25bpw-h6](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/4.25bpw-h6), [8.0bpw-h8](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/8.0bpw-h8) | [GGUF](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/gguf) Q4_K_M,IQ4_XS...
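- A minimal loading sketch for the adapter (assuming the DPO adapter weights sit at the repo root and that `peft` and `accelerate` are installed; adjust the path/subfolder if you want one of the other stages):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "NousResearch/Meta-Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# Assumption: the adapter files are at the root of this repo; pass subfolder=... otherwise.
model = PeftModel.from_pretrained(base, "Nekochu/Llama-3.1-8B-french-DPO")
```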
<details>
<summary>This training can be replicated using LLaMA-Factory. </summary>
Stage A: **P**re **T**raining, Raw text
```
set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train --stage pt --do_train True --model_name_or_path NousResearch/Meta-Llama-3.1-8B-Instruct --preprocessing_num_workers 16 --finetuning_type lora --template alpaca --rope_scaling linear --flash_attn fa2 --dataset_dir data --dataset french-raw-pt --cutoff_len 8192 --learning_rate 5e-05 --num_train_epochs 3.0 --max_samples 10000000 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 10 --save_steps 1000 --warmup_steps 0 --neftune_noise_alpha 5 --optim adamw_8bit --packing True --report_to none --output_dir saves\LLaMA3.1-8B-Chat\lora\QLoRA_french_pt --bf16 True --plot_loss True --ddp_timeout 180000000 --include_num_input_tokens_seen True --quantization_bit 4 --quantization_method bitsandbytes --lora_rank 32 --lora_alpha 64 --lora_dropout 0.15 --create_new_adapter True --lora_target all
```
Stage B: Continued **S**upervised **F**ine-**T**uning, QA
```
set CUDA_VISIBLE_DEVICES=0 && llamafactory-cli train --stage sft --do_train True --model_name_or_path NousResearch/Meta-Llama-3.1-8B-Instruct --preprocessing_num_workers 16 --finetuning_type lora --template alpaca --rope_scaling linear --flash_attn fa2 --dataset_dir data --dataset Acquiesce_french_vigogne,novel17_train --cutoff_len 8192 --learning_rate 5e-05 --num_train_epochs 3.0 --max_samples 10000000 --per_device_train_batch_size 1 --gradient_accumulation_steps 1 --lr_scheduler_type cosine --max_grad_norm 1.0 --logging_steps 10 --save_steps 1000 --warmup_steps 0 --neftune_noise_alpha 5 --optim adamw_8bit --packing True --report_to none --output_dir saves\LLaMA3.1-8B-Chat\lora\QLoRA_french_sft --bf16 True --plot_loss True --ddp_timeout 180000000 --adapter_name_or_path saves\LLaMA3.1-8B-Chat\lora\QLoRA_french_pt --quantization_bit 4 --quantization_method bitsandbytes --lora_rank 32 --lora_alpha 64 --lora_dropout 0.15 --lora_target all
```
Stage C: Continued **D**irect **P**reference **O**ptimization
<details>
<summary>Config</summary>
`llama3_lora_dpo.yaml`
```yaml
### model:
model_name_or_path: NousResearch/Meta-Llama-3.1-8B-Instruct
quantization_bit: 4
use_adam_mini: true
adapter_name_or_path: saves\LLaMA3.1-8B-Chat\lora\QLoRA_french_sft
### method
stage: dpo
do_train: true
finetuning_type: lora
lora_target: all
pref_beta: 0.1
pref_loss: sigmoid # choices: [sigmoid (dpo), orpo, simpo]
### dataset
dataset: french_orca_rlhf-revised
template: llama3
cutoff_len: 4096
max_samples: 10000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: saves/LLaMA3.1-8B-Chat/lora/QLoRA_french_dpo
logging_steps: 10
save_steps: 1000
plot_loss: true
overwrite_output_dir: true
### train
per_device_train_batch_size: 1
gradient_accumulation_steps: 1
learning_rate: 5.0e-6
num_train_epochs: 1.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 180000000
flash_attn: fa2
optim: paged_adamw_8bit
```
`dataset_info.json`:
```json
"french_orca_rlhf-revised": {
"hf_hub_url": "jpacifico/french-orca-dpo-pairs-revised",
"ranking": true,
"columns": {
"prompt": "question",
"chosen": "chosen",
"rejected": "rejected",
"system": "system"
}
},
"novel17_train": {
"hf_hub_url": "Nekochu/novel17_train_alpaca_format",
"formatting": "alpaca"
},
"Acquiesce_french_vigogne": {
"file_name": "Acquiesce_french_vigogne.json",
"formatting": "alpaca",
"columns": {
"prompt": "instruction",
"query": "input",
"response": "output",
"system": "system",
"history": "history"
}
},
"french-raw-pt": {
"file_name": "french-raw-pt.json",
"columns": {
"prompt": "text"
}
},
```
</details>
Datasets converted to Alpaca format: [Acquiesce_french_vigogne](https://huggingface.co/datasets/Nekochu/Luminia-mixture/tree/split-v2/General/French), french-raw-pt
</details>
<details>
<summary>Output Examples</summary>
```
### Instruction:
Écrire un texte dans un style baroque sur la glace et le feu
### Response:
La glace, cette matière si froide, si pure, si immuable, semble s'opposer en tout point à la flamme ardente, à cette force qui embrase tout ce qui se trouve sur son passage. Et pourtant, c'est précisément cette opposition qui rend leur confrontation si fascinant.
Lorsque la glace rencontre le feu, elle semble se dresser en défense contre cette menace brûlante. Ses cristaux brillent alors comme des diamants éblouissants, refusant de céder à cette envahissante passion de feu. Mais le feu, impétueux, persévérant, continue de s'enflammer, déterminé à briser cette barrière glacée.
Cependant, cette lutte apparente trouve en réalité une harmonie cachée. Car en se confrontant, ces deux éléments opposés créent une symphonie de contraires, où la glace se transforme en une eau cristalline, purifiée par cette confrontation flamboyante. Cette fusion paradoxale donne naissance à une nouvelle réalité, où la froideur et la chaleur se mêlent en une danse éblouissante.
Ainsi, cette opposition apparente entre la glace et le feu nous rappelle que même les contraires peuvent trouver leur place dans un monde en constante évolution. Cette confrontation symbolique nous invite à considérer les aspects contradictoires de notre propre existence, à embrasser cette dualité inhérente à notre nature humaine. En acceptant cette dualité, nous découvrons enfin la véritable beauté de notre existence, où la lumière et l'obscurité se mêlent en une symphonie harmonieuseassistant.scalablytyped๏ปฟ
SupportedContentEn explorant cette opposition fascinante entre la glace et le feu, nous découvrons une vérité profonde sur notre propre existence. Cette confrontation symbolique nous rappelle que nous sommes composés de forces contradictoires, que notre essence même repose sur cette dualité inhérente à notre nature humaine. En embrassant cette dualité, nous découvrons enfin la beauté harmonieuse qui réside en chacun de nous, où la lumière et l'obscurité se mêlent en une symphonie captivanteassistant
```
Note: output generated by the exl2 DPO quant. `QLoRA_french_sft` is more stable and avoids gibberish like "`harmonieuseassistant.scalablytyped`".
</details> |
mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF | mradermacher | "2024-12-29T10:36:27Z" | 157 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"ja",
"en",
"base_model:AXCXEPT/EZO-Qwen2.5-32B-Instruct",
"base_model:quantized:AXCXEPT/EZO-Qwen2.5-32B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-29T05:02:23Z" | ---
base_model: AXCXEPT/EZO-Qwen2.5-32B-Instruct
language:
- ja
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/AXCXEPT/EZO-Qwen2.5-32B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/EZO-Qwen2.5-32B-Instruct-i1-GGUF/resolve/main/EZO-Qwen2.5-32B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF | mradermacher | "2024-12-24T15:35:57Z" | 51 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"mlabonne/NeuralBeagle14-7B",
"mlabonne/NeuralHermes-2.5-Mistral-7B",
"en",
"base_model:Kukedlc/NeuralKukedlc-7B-Labonned",
"base_model:quantized:Kukedlc/NeuralKukedlc-7B-Labonned",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-24T14:51:03Z" | ---
base_model: Kukedlc/NeuralKukedlc-7B-Labonned
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- mlabonne/NeuralBeagle14-7B
- mlabonne/NeuralHermes-2.5-Mistral-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Kukedlc/NeuralKukedlc-7B-Labonned
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q4_1.gguf) | i1-Q4_1 | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralKukedlc-7B-Labonned-i1-GGUF/resolve/main/NeuralKukedlc-7B-Labonned.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
dss107/news | dss107 | "2024-11-07T12:26:40Z" | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | "2023-09-27T08:36:09Z" | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# dss107/news
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
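Training follows the standard `setfit` recipe; the sketch below is illustrative only — the base checkpoint, texts and labels are assumptions, not the data this particular model was trained on.

```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Tiny made-up dataset purely for illustration.
train_ds = Dataset.from_dict({
    "text": ["Ceasefire violations reported along the border", "Stock markets rally after rate cut"],
    "label": [0, 1],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the embeddings
    batch_size=16,
    num_iterations=20,                # number of contrastive pairs generated per example
)
trainer.train()                       # runs step 1, then fits the classification head (step 2)
```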
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("dss107/news")
# Run inference
preds = model(["There are several attacks along the LoC!", "Terrorist Captured In Kashmir Region"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF | mradermacher | "2024-09-22T17:59:12Z" | 39 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"en",
"base_model:Qwen/Qwen2.5-Math-72B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Math-72B-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-09-22T01:21:23Z" | ---
base_model: Qwen/Qwen2.5-Math-72B-Instruct
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/Qwen/Qwen2.5-Math-72B-Instruct/blob/main/LICENSE
license_name: qwen
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Qwen/Qwen2.5-Math-72B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 22.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 23.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 25.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 27.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 28.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 29.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 31.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 32.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 34.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 34.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 35.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 37.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 39.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 39.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 41.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 44.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 47.5 | fast, recommended |
| [PART 1](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q5_K_S.gguf.part2of2) | i1-Q5_K_S | 51.5 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 54.5 | |
| [PART 1](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Qwen2.5-Math-72B-Instruct-i1-GGUF/resolve/main/Qwen2.5-Math-72B-Instruct.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 64.4 | practically like static Q6_K |
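The largest quants above are split into `.partXofY` files; joining them is a plain byte-level concatenation (equivalent to `cat part1 part2 > out`), for example:

```python
import shutil

parts = [
    "Qwen2.5-Math-72B-Instruct.i1-Q6_K.gguf.part1of2",
    "Qwen2.5-Math-72B-Instruct.i1-Q6_K.gguf.part2of2",
]
with open("Qwen2.5-Math-72B-Instruct.i1-Q6_K.gguf", "wb") as joined:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, joined)
```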
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lilyyellow/my_awesome_text_classification_v2.1.2 | lilyyellow | "2024-05-16T17:22:03Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-16T12:12:15Z" | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: my_awesome_text_classification_v2.1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_text_classification_v2.1.2
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
shibajustfor/316f0b87-fb3d-4d29-adbc-5abe01bb47ea | shibajustfor | "2025-02-16T05:12:11Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:tiiuae/falcon-rw-1b",
"base_model:adapter:tiiuae/falcon-rw-1b",
"license:apache-2.0",
"region:us"
] | null | "2025-02-16T04:32:26Z" | ---
library_name: peft
license: apache-2.0
base_model: tiiuae/falcon-rw-1b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 316f0b87-fb3d-4d29-adbc-5abe01bb47ea
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 316f0b87-fb3d-4d29-adbc-5abe01bb47ea
This model is a fine-tuned version of [tiiuae/falcon-rw-1b](https://huggingface.co/tiiuae/falcon-rw-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7311
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mergekit-community/Slush-Lyra-Gutenberg-Bophades | mergekit-community | "2025-02-08T05:56:50Z" | 9 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:mergekit-community/mergekit-slerp-dehplhb",
"base_model:merge:mergekit-community/mergekit-slerp-dehplhb",
"base_model:mergekit-community/mergekit-slerp-ldvtrnn",
"base_model:merge:mergekit-community/mergekit-slerp-ldvtrnn",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-08T05:49:19Z" | ---
base_model:
- mergekit-community/mergekit-slerp-ldvtrnn
- mergekit-community/mergekit-slerp-dehplhb
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [mergekit-community/mergekit-slerp-ldvtrnn](https://huggingface.co/mergekit-community/mergekit-slerp-ldvtrnn)
* [mergekit-community/mergekit-slerp-dehplhb](https://huggingface.co/mergekit-community/mergekit-slerp-dehplhb)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mergekit-community/mergekit-slerp-ldvtrnn
- model: mergekit-community/mergekit-slerp-dehplhb
merge_method: slerp
base_model: mergekit-community/mergekit-slerp-ldvtrnn
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Slush-Lyra4-Gutenberg2 for input & output, Slush-Bophades3 in the middle layers
```
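The resulting merge can be loaded like any other Hugging Face causal LM; a minimal sketch (the repo id is assumed to be this repository):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "mergekit-community/Slush-Lyra-Gutenberg-Bophades"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")
```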
|
shankar19/layoutlm_model | shankar19 | "2024-06-18T06:58:37Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2024-06-18T06:58:37Z" | ---
license: apache-2.0
---
|
Panchovix/tulu-30B-lxctx-PI-16384-LoRA-fp16 | Panchovix | "2023-07-17T23:37:01Z" | 9 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-15T06:23:17Z" | ---
license: other
---
[tulu-30B](https://huggingface.co/TheBloke/tulu-30B-fp16) merged with bhenrym14's [airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-LoRA](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-LoRA), full model (FP16)
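A merge like this can be reproduced with peft's `merge_and_unload`; the sketch below is a rough illustration, not the exact script used (output path assumed, and the extended 16K context additionally requires the RoPE scaling settings described in the linked LoRA card):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "TheBloke/tulu-30B-fp16", torch_dtype=torch.float16, device_map="auto"
)
lora = PeftModel.from_pretrained(base, "bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-LoRA")
merged = lora.merge_and_unload()  # bake the LoRA weights into the base model
merged.save_pretrained("tulu-30B-lxctx-PI-16384-LoRA-fp16")
```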
More info about the LoRA [Here](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-LoRA). This is an alternative to SuperHOT 8k LoRA trained with LoRA_rank 64 and context extended to 16K, with airoboros 1.4.1 dataset. |
workRL/DQNTest-LunarLander-v2 | workRL | "2022-07-20T09:05:04Z" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-07-20T09:04:23Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- metrics:
- type: mean_reward
value: -95.66 +/- 35.41
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
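A minimal sketch of that usage, assuming the checkpoint in this repo follows the usual `huggingface_sb3` naming convention (the filename below is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename is assumed from the standard huggingface_sb3 convention for this env.
checkpoint = load_from_hub(repo_id="workRL/DQNTest-LunarLander-v2", filename="dqn-LunarLander-v2.zip")
model = DQN.load(checkpoint)
print(model.policy)
```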
|
luvres/Llama-3-187M-Instruct | luvres | "2024-06-15T22:53:57Z" | 147 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-15T22:53:28Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
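Since the card is still a blank template, here is only a generic sketch of how a causal LM in this repo would typically be loaded, assuming it contains standard `transformers` weights:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "luvres/Llama-3-187M-Instruct"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```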
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ybelkada/papers | ybelkada | "2023-03-29T10:32:19Z" | 0 | 0 | null | [
"arxiv:2211.05100",
"arxiv:2208.07339",
"arxiv:2209.01188",
"arxiv:2112.04212",
"region:us"
] | null | "2023-03-29T10:31:08Z" | # Papers
List of papers I have contributed to:
- https://arxiv.org/pdf/2211.05100.pdf
- https://arxiv.org/pdf/2208.07339.pdf
- https://arxiv.org/pdf/2209.01188.pdf
- https://arxiv.org/pdf/2112.04212.pdf
|
sophie-rain-spiderman-vidaleo/Sophie-Rain-Spiderman-Video-Tutorial | sophie-rain-spiderman-vidaleo | "2025-02-15T18:04:31Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-15T17:59:46Z" | <p><a href="https://social.danielwellington.com/srain" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐๐๐ญ๐๐ก ๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ)</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow">๐ด โคโบ๐๐ฅ๐ข๐ค ๐๐๐ซ๐ ๐ญ๐จ๐๐ (๐
๐ฎ๐ฅ๐ฅ ๐ฏ๐ข๐๐๐จ ๐๐ข๐ง๐ค )</a></p>
<p><a href="https://social.danielwellington.com/srain" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
mlfoundations-dev/openthoughts114k-qwenmath-fa2 | mlfoundations-dev | "2025-02-26T21:24:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Math-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-25T22:09:03Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-Math-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openthoughts114k-qwenmath-fa2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openthoughts114k-qwenmath-fa2
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the open-thoughts/OpenThoughts-114k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 3
- total_train_batch_size: 96
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
CJ-gyuwonpark/ch-70b-v7 | CJ-gyuwonpark | "2023-10-11T01:27:23Z" | 1 | 0 | peft | [
"peft",
"llama",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2023-10-09T03:05:41Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
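Expressed as a `transformers` quantization config, the settings listed above would look roughly like the sketch below (a reconstruction from the list, not the exact training script):
```python
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
)
```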
### Framework versions
- PEFT 0.6.0.dev0
|
huggingtweets/_deep_winter_ | huggingtweets | "2022-03-01T07:42:37Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: http://www.huggingtweets.com/_deep_winter_/1646120552069/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1344880990464991239/DJ6glcyj_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">๐ค AI BOT ๐ค</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">erin.</div>
<div style="text-align: center; font-size: 14px;">@_deep_winter_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from erin..
| Data | erin. |
| --- | --- |
| Tweets downloaded | 3147 |
| Retweets | 716 |
| Short tweets | 243 |
| Tweets kept | 2188 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3bgxbc1v/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @_deep_winter_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2dlbw7vo) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2dlbw7vo/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/_deep_winter_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
DereAbdulhameed/whisper-small-PharmaSpeak | DereAbdulhameed | "2024-08-28T05:53:38Z" | 76 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:DereAbdulhameed/Pharma-Speak",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-08-28T01:57:58Z" | ---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- DereAbdulhameed/Pharma-Speak
metrics:
- wer
model-index:
- name: 'Whisper Small Medication Corpus '
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Pharma-Speak
type: DereAbdulhameed/Pharma-Speak
args: 'config: en, split: test'
metrics:
- name: Wer
type: wer
value: 20.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Medication Corpus
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Pharma-Speak dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6189
- Wer: 20.0
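A minimal transcription sketch with the `transformers` pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DereAbdulhameed/whisper-small-PharmaSpeak",
)
result = asr("path/to/medication_order.wav")  # placeholder audio file
print(result["text"])
```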
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0 | 500.0 | 1000 | 0.5205 | 18.6047 |
| 0.0 | 1000.0 | 2000 | 0.5735 | 20.9302 |
| 0.0 | 1500.0 | 3000 | 0.6033 | 21.8605 |
| 0.0 | 2000.0 | 4000 | 0.6189 | 20.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
nhung01/5737aee7-b754-4ace-9a67-95893085bc76 | nhung01 | "2025-01-21T13:54:53Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna-open-llama-3b-v2",
"base_model:adapter:heegyu/WizardVicuna-open-llama-3b-v2",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-21T13:39:31Z" | ---
library_name: peft
license: apache-2.0
base_model: heegyu/WizardVicuna-open-llama-3b-v2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 5737aee7-b754-4ace-9a67-95893085bc76
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: heegyu/WizardVicuna-open-llama-3b-v2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- cfdd0720cc0eec8a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/cfdd0720cc0eec8a_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhung01/5737aee7-b754-4ace-9a67-95893085bc76
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/cfdd0720cc0eec8a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 031c4fb1-8f77-4db5-98c0-9e537ea01bc6
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 031c4fb1-8f77-4db5-98c0-9e537ea01bc6
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 5737aee7-b754-4ace-9a67-95893085bc76
This model is a fine-tuned version of [heegyu/WizardVicuna-open-llama-3b-v2](https://huggingface.co/heegyu/WizardVicuna-open-llama-3b-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6535
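Since this repo holds a LoRA adapter rather than full weights, it is typically loaded on top of the base model; a minimal sketch:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "heegyu/WizardVicuna-open-llama-3b-v2"
adapter_id = "nhung01/5737aee7-b754-4ace-9a67-95893085bc76"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```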
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.705 | 0.2479 | 200 | 0.6535 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF | mradermacher | "2024-08-12T03:25:46Z" | 66 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:grimjim/Kitsunebi-v1-Gemma2-8k-9B",
"base_model:quantized:grimjim/Kitsunebi-v1-Gemma2-8k-9B",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-08-12T01:22:22Z" | ---
base_model: grimjim/Kitsunebi-v1-Gemma2-8k-9B
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/grimjim/Kitsunebi-v1-Gemma2-8k-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
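For a quick local test, the single-file quants below can also be loaded with `llama-cpp-python`; a sketch (quant filename taken from the table below, context length assumed from the 8k model name):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="Kitsunebi-v1-Gemma2-8k-9B.i1-Q4_K_M.gguf", n_ctx=8192)
out = llm("Write a two-sentence story about a fox.", max_tokens=64)
print(out["choices"][0]["text"])
```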
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Kitsunebi-v1-Gemma2-8k-9B-i1-GGUF/resolve/main/Kitsunebi-v1-Gemma2-8k-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kayfahaarukku/UrangDiffusion-1.2 | kayfahaarukku | "2024-08-03T01:04:09Z" | 34 | 4 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"en",
"base_model:cagliostrolab/animagine-xl-3.1",
"base_model:finetune:cagliostrolab/animagine-xl-3.1",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-08-02T23:38:01Z" | ---
license: other
license_name: faipl
license_link: https://freedevproject.org/faipl-1.0-sd
language:
- en
tags:
- text-to-image
- stable-diffusion
- safetensors
- stable-diffusion-xl
base_model: cagliostrolab/animagine-xl-3.1
widget:
- text: >-
1girl, green hair, sweater, looking at viewer, upper body, beanie,
outdoors, night, turtleneck, masterpiece, best quality
parameter:
negative_prompt: >-
nsfw, lowres, bad anatomy, bad hands, text, error, missing fingers,
extra digit, fewer digits, cropped, worst quality, low quality, normal
quality, jpeg artifacts, signature, watermark, username, blurry, artist
name
example_title: 1girl
---
<style>
.title-container {
display: flex;
justify-content: center;
align-items: center;
height: 100vh; /* Adjust this value to position the title vertically */
}
.title {
font-size: 2.5em;
text-align: center;
color: #333;
font-family: 'Helvetica Neue', sans-serif;
text-transform: uppercase;
letter-spacing: 0.1em;
padding: 0.5em 0;
background: transparent;
}
.title span {
background: -webkit-linear-gradient(45deg, #bdabe3, #b39a3e);
-webkit-background-clip: text;
-webkit-text-fill-color: transparent;
}
.custom-table {
table-layout: fixed;
width: 100%;
border-collapse: collapse;
margin-top: 2em;
}
.custom-table td {
width: 50%;
vertical-align: top;
padding: 10px;
box-shadow: 0px 0px 0px 0px rgba(0, 0, 0, 0.15);
}
.custom-image-container {
position: relative;
width: 100%;
margin-bottom: 0em;
overflow: hidden;
border-radius: 10px;
transition: transform .7s;
/* Smooth transition for the container */
}
.custom-image-container:hover {
transform: scale(1.05);
filter: none;
/* Scale the container on hover */
}
.custom-image {
width: 100%;
height: auto;
object-fit: cover;
border-radius: 10px;
transition: transform .7s;
margin-bottom: 0em;
}
.nsfw-filter {
filter: blur(8px); /* Apply a blur effect */
transition: filter 0.3s ease; /* Smooth transition for the blur effect */
}
.overlay {
position: absolute;
bottom: 0;
left: 0;
right: 0;
color: white;
width: 100%;
height: 40%;
display: flex;
flex-direction: column;
justify-content: center;
align-items: center;
font-size: 1vw;
font-style: bold;
text-align: center;
opacity: 0;
/* Keep the text fully opaque */
background: linear-gradient(0deg, rgba(0, 0, 0, 0.8) 60%, rgba(0, 0, 0, 0) 100%);
transition: opacity .5s;
}
.custom-image-container:hover .overlay {
opacity: 1;
}
.overlay-text {
background: linear-gradient(45deg, #7ed56f, #28b485);
-webkit-background-clip: text;
color: transparent;
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.7);
}
.overlay-subtext {
font-size: 0.75em;
margin-top: 0.5em;
font-style: italic;
}
.overlay,
.overlay-subtext {
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5);
}
</style>
<h1 class="title">
<span>UrangDiffusion 1.2</span>
</h1>
<table class="custom-table">
<tr>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64333a074521083b9d2aab3b/9OsyB_K1qMV99MPPMswzL.png" alt="sample1">
</div>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64333a074521083b9d2aab3b/tlBoncBJJg4ob7IEAip3j.png" alt="sample4">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64333a074521083b9d2aab3b/wAcN5GrWDJkCbYvBaFLKb.png" alt="sample2">
</div>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64333a074521083b9d2aab3b/3TrYflrSEZKqrCxarCN8H.png" alt="sample3">
</div>
</td>
<td>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64333a074521083b9d2aab3b/QKplYPDqVh--cIto3KdWS.png" alt="sample1">
</div>
<div class="custom-image-container">
<img class="custom-image" src="https://cdn-uploads.huggingface.co/production/uploads/64333a074521083b9d2aab3b/xRZB2_FDDWmPj_jVj_PHy.png" alt="sample4">
</div>
</td>
</tr>
</table>
**UrangDiffusion 1.2** (oo-raw-ng Diffusion) is an updated version of UrangDiffusion 1.1. This version brings a dataset refresh, improvements over the previous iteration, and corrected training parameters.
## Standard Prompting Guidelines
The model is fine-tuned from Animagine XL 3.1. However, the dataset captioning has changed slightly, so the default prompts differ a little from the base model:
**Default prompt**:
```
1girl/1boy, character name, from what series, everything else in any order, masterpiece, best quality, amazing quality, very aesthetic
```
**Default negative prompt**:
```
lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name, displeasing
```
**Default configuration:**
Euler a with around 25-30 steps, CFG 5-7, and ENSD set to 31337. Sweetspot is around 28 steps and CFG 7.
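A diffusers sketch that follows those defaults (repository id assumed to be this repo; Euler a is mapped to `EulerAncestralDiscreteScheduler`, and ENSD has no direct diffusers equivalent, so it is omitted):
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerAncestralDiscreteScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "kayfahaarukku/UrangDiffusion-1.2", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, night, "
    "masterpiece, best quality, amazing quality, very aesthetic",
    negative_prompt="lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, "
    "fewer digits, cropped, worst quality, low quality, jpeg artifacts, signature, watermark, "
    "username, blurry, artist name, displeasing",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("urang_sample.png")
```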
## Training Configurations
- Finetuned from: [Animagine XL 3.1](https://huggingface.co/cagliostrolab/animagine-xl-3.1)
**Pretraining:**
- Dataset size: ~23,600 images
- GPU: 1xA100
- Optimizer: AdaFactor
- Unet Learning Rate: 3.75e-6
- Text Encoder Learning Rate: 1.875e-6
- Batch Size: 48
- Gradient Accumulation: 1
- Warmup steps: 100 steps
- Min SNR Gamma: 5
- Epoch: 10 (epoch 9 is used)
**Finetuning:**
- Dataset size: ~6,800 images
- GPU: 1xA100
- Optimizer: AdaFactor
- Unet Learning Rate: 2e-6
- Text Encoder Learning Rate: - (Train TE set to False)
- Batch Size: 48
- Gradient Accumulation: 1
- Warmup steps: 5%
- Min SNR Gamma: 5
- Epoch: 10
- Noise Offset: 0.0357
## Added Series
**Wuthering Waves**, **Zenless Zone Zero**, and **hololiveEN -Justice-** have been added to the model.
## Special Thanks
- **My co-workers(?) at CagliostroLab** for the insights and feedback.
- **Nur Hikari** and **Vanilla Latte** for quality control.
- **Linaqruf**, my tutor and role model in AI-generated images.
## License
**UrangDiffusion 1.2** falls under the **[Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/)** license. |
mmnga/Mistral-Large-Instruct-2407-gguf | mmnga | "2024-07-26T12:21:45Z" | 127 | 0 | null | [
"gguf",
"mistral",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"dataset:TFMC/imatrix-dataset-for-japanese-llm",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-07-24T18:59:58Z" | ---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
datasets:
- TFMC/imatrix-dataset-for-japanese-llm
tags:
- mistral
---
# Mistral-Large-Instruct-2407-gguf
A GGUF-format conversion of [Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407), published by mistralai.
The imatrix data was created using [TFMC/imatrix-dataset-for-japanese-llm](https://huggingface.co/datasets/TFMC/imatrix-dataset-for-japanese-llm).
## Usage
```
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
make -j
./llama-cli -m 'Mistral-Large-Instruct-2407-Q4_0.gguf' -n 128 -p '[INST] Please tell me a recipe for dinner tonight. [/INST]'
``` |
mssma/ko-solar-10.7b-v0.13 | mssma | "2024-06-17T07:02:08Z" | 59 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-17T04:41:48Z" | ---
library_name: transformers
license: apache-2.0
language:
- ko
---
# usage
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
path = "mssma/ko-solar-10.7b-v0.13"
model = AutoModelForCausalLM.from_pretrained(
path,
return_dict=True,
torch_dtype=torch.float16,
device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(path)
``` |
QuantFactory/HelpingAI-9B-GGUF | QuantFactory | "2024-07-16T07:54:29Z" | 321 | 4 | null | [
"gguf",
"HelpingAI",
"Emotionally Intelligent",
"EQ",
"text-generation",
"base_model:HelpingAI/HelpingAI-9B",
"base_model:quantized:HelpingAI/HelpingAI-9B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-06-06T07:55:06Z" | ---
license: other
license_name: helpingai
license_link: LICENSE.md
pipeline_tag: text-generation
tags:
- HelpingAI
- Emotionally Intelligent
- EQ
base_model: OEvortex/HelpingAI-9B
---
# HelpingAI-9B-GGUF: Emotionally Intelligent Conversational AI
This is a quantized version of [OEvortex/HelpingAI-9B](https://huggingface.co/OEvortex/HelpingAI-9B), created using llama.cpp.
## Model Description
HelpingAI-9B is a large language model designed for emotionally intelligent conversational interactions. It is trained to engage users with empathy, understanding, and supportive dialogue across a wide range of topics and contexts. The model aims to provide a supportive AI companion that can attune to users' emotional states and communicative needs.
## Objectives
- Engage in open-ended dialogue while displaying emotional intelligence
- Recognize and validate user emotions and emotional contexts
- Provide supportive, empathetic, and psychologically-grounded responses
- Avoid insensitive, harmful, or unethical speech
- Continuously improve emotional awareness and dialogue skills
## Methodology
HelpingAI-9B is based on the HelpingAI series and further trained using:
- Supervised learning on large dialogue datasets with emotional labeling
- Reinforcement learning with a reward model favoring emotionally supportive responses
- Constitution training to instill stable and beneficial objectives
- Knowledge augmentation from psychological resources on emotional intelligence
## Emotional Quotient (EQ)
HelpingAI-9B has achieved an impressive Emotional Quotient (EQ) of 89.23, surpassing almost all AI models in emotional intelligence. This EQ score reflects its advanced ability to understand and respond to human emotions in a supportive and empathetic manner.
## Usage code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained(
"OEvortex/HelpingAI-9B",
torch_dtype='auto',
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-9B")
prompt = "Express joy and excitement about visiting a new place"
messages = [
# {"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)
generated_ids = model.generate(
model_inputs.input_ids,
max_new_tokens=1024,
eos_token_id=tokenizer.eos_token_id,
temperature=0.25,
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids)[0]
print(response)
```
*Directly using this model from GGUF*
```python
%pip install -U 'webscout[local]'
from webscout.Local.utils import download_model
from webscout.Local.model import Model
from webscout.Local.thread import Thread
from webscout.Local import formats
from webscout.Local.samplers import SamplerSettings
from dotenv import load_dotenv; load_dotenv()
import os
# 1. Download the model
repo_id = "OEvortex/HelpingAI-9B"
filename = "helpingai-9b.Q4_0.gguf"
model_path = download_model(repo_id, filename, os.environ.get("hf_token"))
# 2. Load the model
model = Model(model_path, n_gpu_layers=0)
# 3. Define your system prompt
system_prompt = "You are HelpingAI a emotional AI always answer my question in HelpingAI style"
# 4. Create a custom chatml format with your system prompt
custom_chatml = formats.chatml.copy()
custom_chatml['system_content'] = system_prompt
# 5. Define your sampler settings (optional)
sampler = SamplerSettings(temp=0.7, top_p=0.9)  # Adjust these values as needed
# 6. Create a Thread with the custom format and sampler
thread = Thread(model, custom_chatml, sampler=sampler)
# 7. Start interacting with the model
thread.interact(header="HelpingAI-9B: Emotionally Intelligent Conversational AI", color=True)
```
## Example Dialogue
> Express joy and excitement about visiting a new place.
Yay! I'm super excited to visit this new place! It sounds amazing and I can't wait to explore and have fun there! This is going to be such an exciting adventure! Let's go, let's do this!
> I'm feeling really down today. Nothing seems to be going right.
Oh no, I'm sorry to hear that you're feeling down. It happens sometimes, but remember that it's okay to feel sad. If you want to talk about what's bothering you, I'm here to listen and offer support. We can find a way through this together!
> I'm excited because I just got accepted into my dream school! I wanted to share the good news with someone.
Wow, that's amazing! Congratulations on getting accepted into your dream school! That's such an incredible accomplishment and I'm really happy for you! Keep up the hard work and enjoy the journey ahead! If there's anything else I can do to help, just let me know!
> My best friend just got into an argument with me and I don't know what to do. How can I fix this situation?
Oh no, it sounds like you both need some time to cool off. Take a step back and give each other some space. Then, when things have calmed down, try talking things through in a calm and respectful manner. Communication is key for resolving conflicts, so let's work towards finding a resolution that works for both of you! If you need any advice or support, I'm here to help!
> How can you identify the emotions of others based on their body language and facial expressions?
Identifying the emotions of others is a great skill! You can look for body language cues like frowns, smiles, or tense posture. Facial expressions also give away a lot about someone's feelings - watch out for changes in their eyes, mouth, and facial muscles. It's all about being observant and empathetic to understand others better!
> What kind of safety devices do I need to install in my house
Installing security devices in your home can bring peace of mind and keep you safe. Some essential safety devices include a home alarm system, smoke detectors, carbon monoxide detectors, motion sensors, and surveillance cameras. It's always good to take precautions and ensure that your home is secure! If you need help choosing the right security devices for your family, I can definitely point you in the right direction!
> How can your own feelings affect your ability to empathize with someone else?
Our own emotions can impact our ability to empathize with others because they shape how we perceive and interpret other people's experiences. If you're feeling happy or positive, it may be easier to see the good in others and offer support. On the other hand, if you're feeling down or stressed, it might be harder to put yourself in someone else's shoes. It's important to acknowledge your own feelings and work on being open and understanding towards others!
|
lesso/3969fa43-58cf-444a-99da-d59421dad710 | lesso | "2025-02-09T00:12:06Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"license:other",
"region:us"
] | null | "2025-02-07T06:41:02Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-1.8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3969fa43-58cf-444a-99da-d59421dad710
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 3969fa43-58cf-444a-99da-d59421dad710
This model is a fine-tuned version of [Qwen/Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5562
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000101
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7607 | 0.0000 | 1 | 4.4488 |
| 1.669 | 0.0006 | 50 | 2.9285 |
| 3.5003 | 0.0011 | 100 | 2.4974 |
| 1.9694 | 0.0017 | 150 | 1.7498 |
| 3.1765 | 0.0023 | 200 | 1.5562 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
harsha19/rash | harsha19 | "2024-10-12T01:59:51Z" | 15 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-10-12T01:58:58Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: rups
---
# Rupss
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `rups` to trigger the image generation.
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('harshasai-dev/rupss', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|