| modelId (string) | author (string) | last_modified (unknown) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (unknown) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
danielfein/bt-rm-meta-llama_Llama-3.1-8B-0605 | danielfein | "2025-05-06T21:23:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"generated_from_trainer",
"trl",
"reward-trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-05-06T10:06:18Z" | ---
base_model: meta-llama/Llama-3.2-1B
library_name: transformers
model_name: meta-llama_Llama-3.2-1B_bt_reward_20250503_182001
tags:
- generated_from_trainer
- trl
- reward-trainer
licence: license
---
# Model Card for meta-llama_Llama-3.2-1B_bt_reward_20250503_182001
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# This checkpoint is a reward model with a sequence-classification head
# (note the "text-classification" tag), so it scores text rather than generating it.
scorer = pipeline("text-classification", model="danielfein/meta-llama_Llama-3.2-1B_bt_reward_20250503_182001", device="cuda")
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
print(scorer(question)[0])  # {'label': ..., 'score': <reward>}
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/drfein/bt_reward_model_training/runs/5dn89voi)
This model was trained with TRL's [`RewardTrainer`](https://huggingface.co/docs/trl/reward_trainer).
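The `bt` in the run name refers to the Bradley–Terry preference objective that the Reward Trainer optimizes: the model assigns each response a scalar reward, and training maximizes the log-probability that the preferred response outscores the rejected one. A minimal sketch of that objective (the reward values here are made-up placeholders for the model's scalar outputs on a batch of preference pairs):
```python
import torch
import torch.nn.functional as F

# Scalar rewards for preferred and dispreferred responses
# (placeholders; in training these come from the classification head).
rewards_chosen = torch.tensor([1.2, 0.3, 2.1])
rewards_rejected = torch.tensor([0.4, 0.9, -0.5])

# Bradley-Terry pairwise loss: minimize -log sigmoid(r_chosen - r_rejected),
# i.e. push the chosen response's reward above the rejected one's.
loss = -F.logsigmoid(rewards_chosen - rewards_rejected).mean()
print(loss)  # scalar loss; shrinks as the chosen-vs-rejected margin grows
```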
### Framework versions
- TRL: 0.14.0
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
youssefkhalil320/all-MiniLM-L6-v8-pair_score | youssefkhalil320 | "2025-05-06T21:23:12Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:80000003",
"loss:CoSENTLoss",
"en",
"dataset:youssefkhalil320/pairs_three_scores_v5",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L6-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-05-02T13:39:00Z" | ---
base_model: sentence-transformers/all-MiniLM-L6-v2
datasets:
- youssefkhalil320/pairs_three_scores_v5
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:80000003
- loss:CoSENTLoss
widget:
- source_sentence: hand bottle
sentences:
- beyond
- salt shaker
- soft and oversized linen shorts
- source_sentence: lightweight perfume
sentences:
- coffee mug
- uvb protection sunscreen
- ever pure
- source_sentence: bubble gum body splash
sentences:
- facial expressions flashcards
- ice chocolate cake
- adjustable string bag
- source_sentence: plastic toy
sentences:
- tartar dressing shrimp sandwich
- interior pocket laptop case
- silk lip gloss
- source_sentence: activated charcoal shampoo
sentences:
- saw palmetto oil conditioner
- upf 50 + swimsuit
- belgian chocolate with cranberries
---
# all-MiniLM-L6-v8-pair_score
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on the [pairs_three_scores_v5](https://huggingface.co/datasets/youssefkhalil320/pairs_three_scores_v5) dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision c9745ed1d9f207416be6d2e6f8de32d1f16199bf -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [pairs_three_scores_v5](https://huggingface.co/datasets/youssefkhalil320/pairs_three_scores_v5)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
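Concretely, this stack is a BERT encoder followed by attention-mask-aware mean pooling and L2 normalization. As a rough sketch of the same computation done by hand with `transformers` (shown on the base checkpoint for illustration; the fine-tuned weights live in this repo):
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

encoded = tokenizer(["hand bottle", "salt shaker"], padding=True,
                    truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq, 384)

# (1) Pooling: mean over tokens, weighted by the attention mask
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit-length vectors so dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 384])
```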
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("youssefkhalil320/all-MiniLM-L6-v8-pair_score")
# Run inference
sentences = [
    'activated charcoal shampoo',
    'belgian chocolate with cranberries',
    'saw palmetto oil conditioner',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
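Since the outputs are unit-normalized 384-dimensional vectors, the model drops directly into semantic search over a product catalog. A small sketch (the query and corpus strings below are made up for illustration):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("youssefkhalil320/all-MiniLM-L6-v8-pair_score")

corpus = [
    "activated charcoal shampoo",
    "saw palmetto oil conditioner",
    "belgian chocolate with cranberries",
    "adjustable string bag",
]
query = "hair care product"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# similarity() returns a (1, len(corpus)) matrix of cosine scores
scores = model.similarity(query_embedding, corpus_embeddings)[0]
for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx]:.3f}  {corpus[idx]}")
```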
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### pairs_three_scores_v5
* Dataset: [pairs_three_scores_v5](https://huggingface.co/datasets/youssefkhalil320/pairs_three_scores_v5) at [3d8c457](https://huggingface.co/datasets/youssefkhalil320/pairs_three_scores_v5/tree/3d8c45703846bd2adfaaf422abafbc389b283de1)
* Size: 80,000,003 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 3 tokens</li><li>mean: 6.06 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 5.71 tokens</li><li>max: 13 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.11</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-------------------------------------|:---------------------------------------|:-----------------|
| <code>vanilla hair cream</code> | <code>free of paraben hair mask</code> | <code>0.5</code> |
| <code>nourishing shampoo</code> | <code>cumin lemon tea</code> | <code>0.0</code> |
| <code>safe materials pacifier</code> | <code>facial serum</code> | <code>0.5</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
    "scale": 20.0,
    "similarity_fct": "pairwise_cos_sim"
}
```
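As a rough sketch of how this loss is wired into training, using a tiny in-memory stand-in for the 80M-pair dataset and the Sentence Transformers 3.x trainer API:
```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Toy stand-in with the same columns as pairs_three_scores_v5
train_dataset = Dataset.from_dict({
    "sentence1": ["vanilla hair cream", "nourishing shampoo"],
    "sentence2": ["free of paraben hair mask", "cumin lemon tea"],
    "score": [0.5, 0.0],
})

loss = losses.CoSENTLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```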
### Evaluation Dataset
#### pairs_three_scores_v5
* Dataset: [pairs_three_scores_v5](https://huggingface.co/datasets/youssefkhalil320/pairs_three_scores_v5) at [3d8c457](https://huggingface.co/datasets/youssefkhalil320/pairs_three_scores_v5/tree/3d8c45703846bd2adfaaf422abafbc389b283de1)
* Size: 20,000,001 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 3 tokens</li><li>mean: 6.21 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 5.75 tokens</li><li>max: 12 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.11</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:----------------------------------------|:-----------------------------------|:-----------------|
| <code>teddy bear toy</code> | <code>long lasting cat food</code> | <code>0.0</code> |
| <code>eva hair treatment</code> | <code>fresh pineapple</code> | <code>0.0</code> |
| <code>soft wave hair conditioner</code> | <code>hybrid seat bike</code> | <code>0.0</code> |
* Loss: [<code>CoSENTLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosentloss) with these parameters:
```json
{
    "scale": 20.0,
    "similarity_fct": "pairwise_cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
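These values map directly onto `SentenceTransformerTrainingArguments`; a sketch, with `output_dir` as a placeholder not listed in this card:
```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L6-v8-pair_score",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
    fp16=True,
)
```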
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:------:|:-------------:|:------:|
| 0.8682 | 542600 | 0.9424 | - |
| 0.8683 | 542700 | 0.968 | - |
| 0.8685 | 542800 | 0.9711 | - |
| 0.8686 | 542900 | 0.8668 | - |
| 0.8688 | 543000 | 0.7776 | - |
| 0.8690 | 543100 | 0.5938 | - |
| 0.8691 | 543200 | 0.7416 | - |
| 0.8693 | 543300 | 1.2082 | - |
| 0.8694 | 543400 | 0.5481 | - |
| 0.8696 | 543500 | 0.9383 | - |
| 0.8698 | 543600 | 0.699 | - |
| 0.8699 | 543700 | 0.6582 | - |
| 0.8701 | 543800 | 0.9237 | - |
| 0.8702 | 543900 | 0.5977 | - |
| 0.8704 | 544000 | 0.632 | - |
| 0.8706 | 544100 | 0.819 | - |
| 0.8707 | 544200 | 0.5877 | - |
| 0.8709 | 544300 | 0.725 | - |
| 0.8710 | 544400 | 0.6097 | - |
| 0.8712 | 544500 | 0.4883 | - |
| 0.8714 | 544600 | 0.7353 | - |
| 0.8715 | 544700 | 0.8596 | - |
| 0.8717 | 544800 | 0.4478 | - |
| 0.8718 | 544900 | 0.49 | - |
| 0.8720 | 545000 | 0.6203 | - |
| 0.8722 | 545100 | 0.6758 | - |
| 0.8723 | 545200 | 0.959 | - |
| 0.8725 | 545300 | 1.4645 | - |
| 0.8726 | 545400 | 0.7873 | - |
| 0.8728 | 545500 | 0.6609 | - |
| 0.8730 | 545600 | 0.8173 | - |
| 0.8731 | 545700 | 1.1416 | - |
| 0.8733 | 545800 | 0.8406 | - |
| 0.8734 | 545900 | 0.8781 | - |
| 0.8736 | 546000 | 0.6507 | - |
| 0.8738 | 546100 | 1.0332 | - |
| 0.8739 | 546200 | 0.9044 | - |
| 0.8741 | 546300 | 0.8617 | - |
| 0.8742 | 546400 | 0.7377 | - |
| 0.8744 | 546500 | 0.7668 | - |
| 0.8746 | 546600 | 0.6856 | - |
| 0.8747 | 546700 | 1.0069 | - |
| 0.8749 | 546800 | 0.5628 | - |
| 0.8750 | 546900 | 0.6661 | - |
| 0.8752 | 547000 | 0.8508 | - |
| 0.8754 | 547100 | 1.0595 | - |
| 0.8755 | 547200 | 0.8156 | - |
| 0.8757 | 547300 | 0.4535 | - |
| 0.8758 | 547400 | 0.7963 | - |
| 0.8760 | 547500 | 0.8964 | - |
| 0.8762 | 547600 | 0.6343 | - |
| 0.8763 | 547700 | 0.9795 | - |
| 0.8765 | 547800 | 1.0885 | - |
| 0.8766 | 547900 | 0.6333 | - |
| 0.8768 | 548000 | 1.0065 | - |
| 0.8770 | 548100 | 0.8155 | - |
| 0.8771 | 548200 | 0.8595 | - |
| 0.8773 | 548300 | 1.2973 | - |
| 0.8774 | 548400 | 0.9477 | - |
| 0.8776 | 548500 | 1.092 | - |
| 0.8778 | 548600 | 0.9252 | - |
| 0.8779 | 548700 | 0.6214 | - |
| 0.8781 | 548800 | 0.6302 | - |
| 0.8782 | 548900 | 0.4542 | - |
| 0.8784 | 549000 | 0.6626 | - |
| 0.8786 | 549100 | 0.6616 | - |
| 0.8787 | 549200 | 0.5897 | - |
| 0.8789 | 549300 | 0.5641 | - |
| 0.8790 | 549400 | 0.8449 | - |
| 0.8792 | 549500 | 0.7743 | - |
| 0.8794 | 549600 | 0.8134 | - |
| 0.8795 | 549700 | 1.0455 | - |
| 0.8797 | 549800 | 0.6805 | - |
| 0.8798 | 549900 | 0.6526 | - |
| 0.8800 | 550000 | 0.5727 | - |
| 0.8802 | 550100 | 0.5923 | - |
| 0.8803 | 550200 | 0.565 | - |
| 0.8805 | 550300 | 0.7745 | - |
| 0.8806 | 550400 | 0.757 | - |
| 0.8808 | 550500 | 0.624 | - |
| 0.8810 | 550600 | 0.9387 | - |
| 0.8811 | 550700 | 0.9718 | - |
| 0.8813 | 550800 | 0.9184 | - |
| 0.8814 | 550900 | 1.166 | - |
| 0.8816 | 551000 | 0.6725 | - |
| 0.8818 | 551100 | 0.9767 | - |
| 0.8819 | 551200 | 0.517 | - |
| 0.8821 | 551300 | 0.9126 | - |
| 0.8822 | 551400 | 1.3649 | - |
| 0.8824 | 551500 | 0.7633 | - |
| 0.8826 | 551600 | 0.6029 | - |
| 0.8827 | 551700 | 0.4923 | - |
| 0.8829 | 551800 | 0.7787 | - |
| 0.8830 | 551900 | 0.7437 | - |
| 0.8832 | 552000 | 0.7627 | - |
| 0.8834 | 552100 | 0.5648 | - |
| 0.8835 | 552200 | 0.5564 | - |
| 0.8837 | 552300 | 0.7376 | - |
| 0.8838 | 552400 | 0.8405 | - |
| 0.8840 | 552500 | 1.062 | - |
| 0.8842 | 552600 | 0.7844 | - |
| 0.8843 | 552700 | 0.6866 | - |
| 0.8845 | 552800 | 0.8532 | - |
| 0.8846 | 552900 | 0.4204 | - |
| 0.8848 | 553000 | 0.7487 | - |
| 0.8850 | 553100 | 0.5511 | - |
| 0.8851 | 553200 | 0.8468 | - |
| 0.8853 | 553300 | 0.9251 | - |
| 0.8854 | 553400 | 1.0542 | - |
| 0.8856 | 553500 | 0.9342 | - |
| 0.8858 | 553600 | 1.0533 | - |
| 0.8859 | 553700 | 0.7185 | - |
| 0.8861 | 553800 | 0.5811 | - |
| 0.8862 | 553900 | 0.8484 | - |
| 0.8864 | 554000 | 0.5215 | - |
| 0.8866 | 554100 | 0.4963 | - |
| 0.8867 | 554200 | 0.6594 | - |
| 0.8869 | 554300 | 0.5277 | - |
| 0.8870 | 554400 | 0.5139 | - |
| 0.8872 | 554500 | 0.9343 | - |
| 0.8874 | 554600 | 0.9857 | - |
| 0.8875 | 554700 | 0.7193 | - |
| 0.8877 | 554800 | 0.9394 | - |
| 0.8878 | 554900 | 0.91 | - |
| 0.8880 | 555000 | 0.601 | - |
| 0.8882 | 555100 | 0.4681 | - |
| 0.8883 | 555200 | 0.7671 | - |
| 0.8885 | 555300 | 0.4485 | - |
| 0.8886 | 555400 | 0.8482 | - |
| 0.8888 | 555500 | 0.8826 | - |
| 0.8890 | 555600 | 0.3257 | - |
| 0.8891 | 555700 | 0.6727 | - |
| 0.8893 | 555800 | 0.7098 | - |
| 0.8894 | 555900 | 0.7871 | - |
| 0.8896 | 556000 | 0.843 | - |
| 0.8898 | 556100 | 1.0637 | - |
| 0.8899 | 556200 | 0.4367 | - |
| 0.8901 | 556300 | 0.8833 | - |
| 0.8902 | 556400 | 0.6529 | - |
| 0.8904 | 556500 | 1.0443 | - |
| 0.8906 | 556600 | 1.1076 | - |
| 0.8907 | 556700 | 0.7762 | - |
| 0.8909 | 556800 | 0.5598 | - |
| 0.8910 | 556900 | 1.127 | - |
| 0.8912 | 557000 | 0.5471 | - |
| 0.8914 | 557100 | 0.869 | - |
| 0.8915 | 557200 | 0.8392 | - |
| 0.8917 | 557300 | 0.7547 | - |
| 0.8918 | 557400 | 0.5796 | - |
| 0.8920 | 557500 | 0.7446 | - |
| 0.8922 | 557600 | 0.9913 | - |
| 0.8923 | 557700 | 0.6937 | - |
| 0.8925 | 557800 | 0.5643 | - |
| 0.8926 | 557900 | 0.8484 | - |
| 0.8928 | 558000 | 0.8971 | - |
| 0.8930 | 558100 | 1.1094 | - |
| 0.8931 | 558200 | 0.6835 | - |
| 0.8933 | 558300 | 0.8818 | - |
| 0.8934 | 558400 | 1.1366 | - |
| 0.8936 | 558500 | 0.8887 | - |
| 0.8938 | 558600 | 0.5754 | - |
| 0.8939 | 558700 | 0.7777 | - |
| 0.8941 | 558800 | 0.9033 | - |
| 0.8942 | 558900 | 0.5612 | - |
| 0.8944 | 559000 | 0.7949 | - |
| 0.8946 | 559100 | 1.1081 | - |
| 0.8947 | 559200 | 0.6708 | - |
| 0.8949 | 559300 | 0.7044 | - |
| 0.8950 | 559400 | 0.6862 | - |
| 0.8952 | 559500 | 0.8251 | - |
| 0.8954 | 559600 | 0.7551 | - |
| 0.8955 | 559700 | 1.0697 | - |
| 0.8957 | 559800 | 0.9638 | - |
| 0.8958 | 559900 | 0.9433 | - |
| 0.8960 | 560000 | 0.7142 | - |
| 0.8962 | 560100 | 0.6868 | - |
| 0.8963 | 560200 | 1.0483 | - |
| 0.8965 | 560300 | 0.9234 | - |
| 0.8966 | 560400 | 1.0903 | - |
| 0.8968 | 560500 | 0.8217 | - |
| 0.8970 | 560600 | 0.6451 | - |
| 0.8971 | 560700 | 0.7644 | - |
| 0.8973 | 560800 | 0.518 | - |
| 0.8974 | 560900 | 0.6178 | - |
| 0.8976 | 561000 | 0.6031 | - |
| 0.8978 | 561100 | 0.7723 | - |
| 0.8979 | 561200 | 0.3514 | - |
| 0.8981 | 561300 | 0.8751 | - |
| 0.8982 | 561400 | 0.7787 | - |
| 0.8984 | 561500 | 0.7343 | - |
| 0.8986 | 561600 | 0.674 | - |
| 0.8987 | 561700 | 0.8175 | - |
| 0.8989 | 561800 | 0.7348 | - |
| 0.8990 | 561900 | 0.8423 | - |
| 0.8992 | 562000 | 1.004 | - |
| 0.8994 | 562100 | 0.9213 | - |
| 0.8995 | 562200 | 1.0421 | - |
| 0.8997 | 562300 | 0.658 | - |
| 0.8998 | 562400 | 0.8207 | - |
| 0.9000 | 562500 | 0.5178 | - |
| 0.9002 | 562600 | 0.7567 | - |
| 0.9003 | 562700 | 0.8287 | - |
| 0.9005 | 562800 | 0.5924 | - |
| 0.9006 | 562900 | 0.9151 | - |
| 0.9008 | 563000 | 0.9069 | - |
| 0.9010 | 563100 | 0.7985 | - |
| 0.9011 | 563200 | 0.7565 | - |
| 0.9013 | 563300 | 0.823 | - |
| 0.9014 | 563400 | 0.5373 | - |
| 0.9016 | 563500 | 0.5473 | - |
| 0.9018 | 563600 | 0.7323 | - |
| 0.9019 | 563700 | 0.691 | - |
| 0.9021 | 563800 | 0.8818 | - |
| 0.9022 | 563900 | 0.7351 | - |
| 0.9024 | 564000 | 1.0715 | - |
| 0.9026 | 564100 | 0.722 | - |
| 0.9027 | 564200 | 0.6988 | - |
| 0.9029 | 564300 | 1.1843 | - |
| 0.9030 | 564400 | 1.0299 | - |
| 0.9032 | 564500 | 1.119 | - |
| 0.9034 | 564600 | 0.6954 | - |
| 0.9035 | 564700 | 0.8773 | - |
| 0.9037 | 564800 | 0.7873 | - |
| 0.9038 | 564900 | 0.7503 | - |
| 0.9040 | 565000 | 1.1296 | - |
| 0.9042 | 565100 | 0.777 | - |
| 0.9043 | 565200 | 0.8982 | - |
| 0.9045 | 565300 | 0.9224 | - |
| 0.9046 | 565400 | 0.8596 | - |
| 0.9048 | 565500 | 0.5435 | - |
| 0.9050 | 565600 | 0.8324 | - |
| 0.9051 | 565700 | 0.4121 | - |
| 0.9053 | 565800 | 0.9148 | - |
| 0.9054 | 565900 | 0.4525 | - |
| 0.9056 | 566000 | 0.7476 | - |
| 0.9058 | 566100 | 0.8766 | - |
| 0.9059 | 566200 | 1.1123 | - |
| 0.9061 | 566300 | 0.8109 | - |
| 0.9062 | 566400 | 0.8251 | - |
| 0.9064 | 566500 | 0.9638 | - |
| 0.9066 | 566600 | 0.7842 | - |
| 0.9067 | 566700 | 0.8727 | - |
| 0.9069 | 566800 | 1.1777 | - |
| 0.9070 | 566900 | 1.435 | - |
| 0.9072 | 567000 | 0.7354 | - |
| 0.9074 | 567100 | 0.796 | - |
| 0.9075 | 567200 | 0.8451 | - |
| 0.9077 | 567300 | 0.479 | - |
| 0.9078 | 567400 | 0.5299 | - |
| 0.9080 | 567500 | 0.7735 | - |
| 0.9082 | 567600 | 1.1211 | - |
| 0.9083 | 567700 | 0.9364 | - |
| 0.9085 | 567800 | 0.5533 | - |
| 0.9086 | 567900 | 0.9091 | - |
| 0.9088 | 568000 | 0.7493 | - |
| 0.9090 | 568100 | 1.0247 | - |
| 0.9091 | 568200 | 0.4836 | - |
| 0.9093 | 568300 | 0.9966 | - |
| 0.9094 | 568400 | 0.8997 | - |
| 0.9096 | 568500 | 0.7764 | - |
| 0.9098 | 568600 | 0.7193 | - |
| 0.9099 | 568700 | 0.6184 | - |
| 0.9101 | 568800 | 0.9031 | - |
| 0.9102 | 568900 | 0.7061 | - |
| 0.9104 | 569000 | 1.0852 | - |
| 0.9106 | 569100 | 1.0778 | - |
| 0.9107 | 569200 | 0.6463 | - |
| 0.9109 | 569300 | 0.5569 | - |
| 0.9110 | 569400 | 0.5566 | - |
| 0.9112 | 569500 | 0.6489 | - |
| 0.9114 | 569600 | 0.8065 | - |
| 0.9115 | 569700 | 0.8123 | - |
| 0.9117 | 569800 | 0.5946 | - |
| 0.9118 | 569900 | 0.8424 | - |
| 0.9120 | 570000 | 0.8987 | - |
| 0.9122 | 570100 | 0.7266 | - |
| 0.9123 | 570200 | 0.7386 | - |
| 0.9125 | 570300 | 0.7266 | - |
| 0.9126 | 570400 | 0.8668 | - |
| 0.9128 | 570500 | 0.7228 | - |
| 0.9130 | 570600 | 0.6914 | - |
| 0.9131 | 570700 | 0.8578 | - |
| 0.9133 | 570800 | 0.7019 | - |
| 0.9134 | 570900 | 0.5146 | - |
| 0.9136 | 571000 | 0.8236 | - |
| 0.9138 | 571100 | 0.8977 | - |
| 0.9139 | 571200 | 0.6571 | - |
| 0.9141 | 571300 | 0.6818 | - |
| 0.9142 | 571400 | 0.6226 | - |
| 0.9144 | 571500 | 1.034 | - |
| 0.9146 | 571600 | 0.7306 | - |
| 0.9147 | 571700 | 1.4244 | - |
| 0.9149 | 571800 | 0.8388 | - |
| 0.9150 | 571900 | 0.5627 | - |
| 0.9152 | 572000 | 0.8637 | - |
| 0.9154 | 572100 | 1.0419 | - |
| 0.9155 | 572200 | 1.3043 | - |
| 0.9157 | 572300 | 0.4563 | - |
| 0.9158 | 572400 | 0.6826 | - |
| 0.9160 | 572500 | 0.801 | - |
| 0.9162 | 572600 | 0.82 | - |
| 0.9163 | 572700 | 0.4677 | - |
| 0.9165 | 572800 | 0.6236 | - |
| 0.9166 | 572900 | 0.7783 | - |
| 0.9168 | 573000 | 0.8284 | - |
| 0.9170 | 573100 | 0.8253 | - |
| 0.9171 | 573200 | 1.039 | - |
| 0.9173 | 573300 | 0.9098 | - |
| 0.9174 | 573400 | 1.068 | - |
| 0.9176 | 573500 | 0.6172 | - |
| 0.9178 | 573600 | 0.6679 | - |
| 0.9179 | 573700 | 0.8135 | - |
| 0.9181 | 573800 | 1.1141 | - |
| 0.9182 | 573900 | 0.8077 | - |
| 0.9184 | 574000 | 0.7952 | - |
| 0.9186 | 574100 | 0.8684 | - |
| 0.9187 | 574200 | 0.518 | - |
| 0.9189 | 574300 | 0.6675 | - |
| 0.9190 | 574400 | 0.9315 | - |
| 0.9192 | 574500 | 0.6984 | - |
| 0.9194 | 574600 | 0.7571 | - |
| 0.9195 | 574700 | 0.9037 | - |
| 0.9197 | 574800 | 0.5965 | - |
| 0.9198 | 574900 | 0.8542 | - |
| 0.9200 | 575000 | 0.8226 | - |
| 0.9202 | 575100 | 0.6057 | - |
| 0.9203 | 575200 | 0.7658 | - |
| 0.9205 | 575300 | 0.9765 | - |
| 0.9206 | 575400 | 0.9145 | - |
| 0.9208 | 575500 | 0.3843 | - |
| 0.9210 | 575600 | 0.8603 | - |
| 0.9211 | 575700 | 0.9048 | - |
| 0.9213 | 575800 | 0.7786 | - |
| 0.9214 | 575900 | 0.8639 | - |
| 0.9216 | 576000 | 0.8909 | - |
| 0.9218 | 576100 | 0.6091 | - |
| 0.9219 | 576200 | 0.4416 | - |
| 0.9221 | 576300 | 0.4569 | - |
| 0.9222 | 576400 | 0.6638 | - |
| 0.9224 | 576500 | 0.9033 | - |
| 0.9226 | 576600 | 0.5351 | - |
| 0.9227 | 576700 | 0.8799 | - |
| 0.9229 | 576800 | 1.212 | - |
| 0.9230 | 576900 | 0.7717 | - |
| 0.9232 | 577000 | 0.9058 | - |
| 0.9234 | 577100 | 0.9647 | - |
| 0.9235 | 577200 | 0.7648 | - |
| 0.9237 | 577300 | 0.8776 | - |
| 0.9238 | 577400 | 0.4155 | - |
| 0.9240 | 577500 | 0.5997 | - |
| 0.9242 | 577600 | 0.9836 | - |
| 0.9243 | 577700 | 0.7584 | - |
| 0.9245 | 577800 | 0.7656 | - |
| 0.9246 | 577900 | 0.7135 | - |
| 0.9248 | 578000 | 0.8408 | - |
| 0.9250 | 578100 | 0.9118 | - |
| 0.9251 | 578200 | 0.587 | - |
| 0.9253 | 578300 | 0.9372 | - |
| 0.9254 | 578400 | 0.674 | - |
| 0.9256 | 578500 | 0.7524 | - |
| 0.9258 | 578600 | 0.7039 | - |
| 0.9259 | 578700 | 0.7397 | - |
| 0.9261 | 578800 | 0.739 | - |
| 0.9262 | 578900 | 0.6249 | - |
| 0.9264 | 579000 | 0.7223 | - |
| 0.9266 | 579100 | 0.8787 | - |
| 0.9267 | 579200 | 0.6817 | - |
| 0.9269 | 579300 | 0.4517 | - |
| 0.9270 | 579400 | 0.9203 | - |
| 0.9272 | 579500 | 1.0586 | - |
| 0.9274 | 579600 | 0.4509 | - |
| 0.9275 | 579700 | 0.6122 | - |
| 0.9277 | 579800 | 0.8044 | - |
| 0.9278 | 579900 | 0.4963 | - |
| 0.9280 | 580000 | 0.5926 | - |
| 0.9282 | 580100 | 0.8616 | - |
| 0.9283 | 580200 | 0.79 | - |
| 0.9285 | 580300 | 1.1544 | - |
| 0.9286 | 580400 | 0.6989 | - |
| 0.9288 | 580500 | 1.3349 | - |
| 0.9290 | 580600 | 1.2488 | - |
| 0.9291 | 580700 | 1.171 | - |
| 0.9293 | 580800 | 0.5529 | - |
| 0.9294 | 580900 | 0.7977 | - |
| 0.9296 | 581000 | 0.6397 | - |
| 0.9298 | 581100 | 1.2556 | - |
| 0.9299 | 581200 | 0.8389 | - |
| 0.9301 | 581300 | 0.967 | - |
| 0.9302 | 581400 | 0.9108 | - |
| 0.9304 | 581500 | 0.927 | - |
| 0.9306 | 581600 | 0.8314 | - |
| 0.9307 | 581700 | 0.8189 | - |
| 0.9309 | 581800 | 0.5584 | - |
| 0.9310 | 581900 | 0.8506 | - |
| 0.9312 | 582000 | 0.9845 | - |
| 0.9314 | 582100 | 0.8159 | - |
| 0.9315 | 582200 | 0.6512 | - |
| 0.9317 | 582300 | 0.7216 | - |
| 0.9318 | 582400 | 0.7841 | - |
| 0.9320 | 582500 | 0.852 | - |
| 0.9322 | 582600 | 0.7754 | - |
| 0.9323 | 582700 | 0.6775 | - |
| 0.9325 | 582800 | 0.4598 | - |
| 0.9326 | 582900 | 0.625 | - |
| 0.9328 | 583000 | 1.1821 | - |
| 0.9330 | 583100 | 0.6845 | - |
| 0.9331 | 583200 | 0.8293 | - |
| 0.9333 | 583300 | 0.7485 | - |
| 0.9334 | 583400 | 1.0008 | - |
| 0.9336 | 583500 | 0.7762 | - |
| 0.9338 | 583600 | 0.5416 | - |
| 0.9339 | 583700 | 1.2784 | - |
| 0.9341 | 583800 | 0.9202 | - |
| 0.9342 | 583900 | 0.7189 | - |
| 0.9344 | 584000 | 1.0549 | - |
| 0.9346 | 584100 | 0.9661 | - |
| 0.9347 | 584200 | 0.5341 | - |
| 0.9349 | 584300 | 1.4547 | - |
| 0.9350 | 584400 | 1.0324 | - |
| 0.9352 | 584500 | 0.8276 | - |
| 0.9354 | 584600 | 0.3868 | - |
| 0.9355 | 584700 | 1.0488 | - |
| 0.9357 | 584800 | 0.9561 | - |
| 0.9358 | 584900 | 0.9193 | - |
| 0.9360 | 585000 | 0.9144 | - |
| 0.9362 | 585100 | 0.7702 | - |
| 0.9363 | 585200 | 0.798 | - |
| 0.9365 | 585300 | 0.5793 | - |
| 0.9366 | 585400 | 0.7867 | - |
| 0.9368 | 585500 | 0.8352 | - |
| 0.9370 | 585600 | 0.6128 | - |
| 0.9371 | 585700 | 0.734 | - |
| 0.9373 | 585800 | 0.5431 | - |
| 0.9374 | 585900 | 0.8416 | - |
| 0.9376 | 586000 | 0.8711 | - |
| 0.9378 | 586100 | 0.9059 | - |
| 0.9379 | 586200 | 0.5545 | - |
| 0.9381 | 586300 | 0.9609 | - |
| 0.9382 | 586400 | 0.579 | - |
| 0.9384 | 586500 | 1.1916 | - |
| 0.9386 | 586600 | 0.6305 | - |
| 0.9387 | 586700 | 0.9855 | - |
| 0.9389 | 586800 | 0.774 | - |
| 0.9390 | 586900 | 0.6012 | - |
| 0.9392 | 587000 | 0.7495 | - |
| 0.9394 | 587100 | 0.6666 | - |
| 0.9395 | 587200 | 0.8473 | - |
| 0.9397 | 587300 | 1.0324 | - |
| 0.9398 | 587400 | 0.6129 | - |
| 0.9400 | 587500 | 0.8905 | - |
| 0.9402 | 587600 | 0.6067 | - |
| 0.9403 | 587700 | 1.0607 | - |
| 0.9405 | 587800 | 0.6369 | - |
| 0.9406 | 587900 | 0.6892 | - |
| 0.9408 | 588000 | 0.6671 | - |
| 0.9410 | 588100 | 0.7971 | - |
| 0.9411 | 588200 | 0.7133 | - |
| 0.9413 | 588300 | 0.46 | - |
| 0.9414 | 588400 | 0.9073 | - |
| 0.9416 | 588500 | 0.9276 | - |
| 0.9418 | 588600 | 1.0273 | - |
| 0.9419 | 588700 | 0.6709 | - |
| 0.9421 | 588800 | 0.4284 | - |
| 0.9422 | 588900 | 0.8745 | - |
| 0.9424 | 589000 | 0.8677 | - |
| 0.9426 | 589100 | 0.867 | - |
| 0.9427 | 589200 | 0.6087 | - |
| 0.9429 | 589300 | 0.6777 | - |
| 0.9430 | 589400 | 0.6672 | - |
| 0.9432 | 589500 | 0.9492 | - |
| 0.9434 | 589600 | 0.6848 | - |
| 0.9435 | 589700 | 0.8975 | - |
| 0.9437 | 589800 | 0.3949 | - |
| 0.9438 | 589900 | 0.7469 | - |
| 0.9440 | 590000 | 0.7412 | - |
| 0.9442 | 590100 | 0.526 | - |
| 0.9443 | 590200 | 0.4228 | - |
| 0.9445 | 590300 | 0.9338 | - |
| 0.9446 | 590400 | 0.6516 | - |
| 0.9448 | 590500 | 0.9419 | - |
| 0.9450 | 590600 | 0.755 | - |
| 0.9451 | 590700 | 0.7699 | - |
| 0.9453 | 590800 | 0.8904 | - |
| 0.9454 | 590900 | 0.5596 | - |
| 0.9456 | 591000 | 0.9401 | - |
| 0.9458 | 591100 | 0.9583 | - |
| 0.9459 | 591200 | 0.6807 | - |
| 0.9461 | 591300 | 0.6972 | - |
| 0.9462 | 591400 | 0.7217 | - |
| 0.9464 | 591500 | 0.7406 | - |
| 0.9466 | 591600 | 0.5819 | - |
| 0.9467 | 591700 | 0.8508 | - |
| 0.9469 | 591800 | 0.5315 | - |
| 0.9470 | 591900 | 0.606 | - |
| 0.9472 | 592000 | 0.7971 | - |
| 0.9474 | 592100 | 1.0728 | - |
| 0.9475 | 592200 | 0.7283 | - |
| 0.9477 | 592300 | 0.5131 | - |
| 0.9478 | 592400 | 0.4695 | - |
| 0.9480 | 592500 | 0.2959 | - |
| 0.9482 | 592600 | 0.858 | - |
| 0.9483 | 592700 | 0.5761 | - |
| 0.9485 | 592800 | 0.9089 | - |
| 0.9486 | 592900 | 0.6238 | - |
| 0.9488 | 593000 | 0.5633 | - |
| 0.9490 | 593100 | 1.0323 | - |
| 0.9491 | 593200 | 0.6684 | - |
| 0.9493 | 593300 | 0.8563 | - |
| 0.9494 | 593400 | 0.7163 | - |
| 0.9496 | 593500 | 0.7814 | - |
| 0.9498 | 593600 | 0.4761 | - |
| 0.9499 | 593700 | 0.5203 | - |
| 0.9501 | 593800 | 0.9119 | - |
| 0.9502 | 593900 | 0.8535 | - |
| 0.9504 | 594000 | 1.0054 | - |
| 0.9506 | 594100 | 0.8794 | - |
| 0.9507 | 594200 | 0.6925 | - |
| 0.9509 | 594300 | 1.0048 | - |
| 0.9510 | 594400 | 0.7008 | - |
| 0.9512 | 594500 | 0.7092 | - |
| 0.9514 | 594600 | 0.803 | - |
| 0.9515 | 594700 | 0.7868 | - |
| 0.9517 | 594800 | 0.6047 | - |
| 0.9518 | 594900 | 0.6654 | - |
| 0.9520 | 595000 | 0.7418 | - |
| 0.9522 | 595100 | 1.0645 | - |
| 0.9523 | 595200 | 0.6193 | - |
| 0.9525 | 595300 | 0.7615 | - |
| 0.9526 | 595400 | 0.8291 | - |
| 0.9528 | 595500 | 0.8298 | - |
| 0.9530 | 595600 | 0.9187 | - |
| 0.9531 | 595700 | 0.6942 | - |
| 0.9533 | 595800 | 0.912 | - |
| 0.9534 | 595900 | 1.0213 | - |
| 0.9536 | 596000 | 0.9347 | - |
| 0.9538 | 596100 | 1.1183 | - |
| 0.9539 | 596200 | 0.78 | - |
| 0.9541 | 596300 | 0.8976 | - |
| 0.9542 | 596400 | 1.0957 | - |
| 0.9544 | 596500 | 0.8133 | - |
| 0.9546 | 596600 | 0.6568 | - |
| 0.9547 | 596700 | 0.8911 | - |
| 0.9549 | 596800 | 0.5183 | - |
| 0.9550 | 596900 | 0.7212 | - |
| 0.9552 | 597000 | 0.888 | - |
| 0.9554 | 597100 | 0.7661 | - |
| 0.9555 | 597200 | 0.6028 | - |
| 0.9557 | 597300 | 1.0602 | - |
| 0.9558 | 597400 | 0.7299 | - |
| 0.9560 | 597500 | 0.9885 | - |
| 0.9562 | 597600 | 0.8964 | - |
| 0.9563 | 597700 | 0.6961 | - |
| 0.9565 | 597800 | 0.6989 | - |
| 0.9566 | 597900 | 1.1453 | - |
| 0.9568 | 598000 | 0.4009 | - |
| 0.9570 | 598100 | 0.7645 | - |
| 0.9571 | 598200 | 0.9124 | - |
| 0.9573 | 598300 | 0.7354 | - |
| 0.9574 | 598400 | 0.803 | - |
| 0.9576 | 598500 | 1.2859 | - |
| 0.9578 | 598600 | 0.9726 | - |
| 0.9579 | 598700 | 0.5849 | - |
| 0.9581 | 598800 | 1.1357 | - |
| 0.9582 | 598900 | 0.904 | - |
| 0.9584 | 599000 | 0.6113 | - |
| 0.9586 | 599100 | 1.0399 | - |
| 0.9587 | 599200 | 0.8404 | - |
| 0.9589 | 599300 | 0.945 | - |
| 0.9590 | 599400 | 0.6225 | - |
| 0.9592 | 599500 | 0.8617 | - |
| 0.9594 | 599600 | 0.8782 | - |
| 0.9595 | 599700 | 0.9332 | - |
| 0.9597 | 599800 | 0.9949 | - |
| 0.9598 | 599900 | 0.7016 | - |
| 0.9600 | 600000 | 0.5833 | 0.7694 |
| 0.9602 | 600100 | 0.5462 | - |
| 0.9603 | 600200 | 0.8458 | - |
| 0.9605 | 600300 | 0.8256 | - |
| 0.9606 | 600400 | 0.8134 | - |
| 0.9608 | 600500 | 0.7465 | - |
| 0.9610 | 600600 | 1.0022 | - |
| 0.9611 | 600700 | 0.7794 | - |
| 0.9613 | 600800 | 0.8742 | - |
| 0.9614 | 600900 | 0.6161 | - |
| 0.9616 | 601000 | 1.1433 | - |
| 0.9618 | 601100 | 0.6988 | - |
| 0.9619 | 601200 | 0.8715 | - |
| 0.9621 | 601300 | 0.6198 | - |
| 0.9622 | 601400 | 0.896 | - |
| 0.9624 | 601500 | 0.5527 | - |
| 0.9626 | 601600 | 1.1485 | - |
| 0.9627 | 601700 | 0.8266 | - |
| 0.9629 | 601800 | 0.6972 | - |
| 0.9630 | 601900 | 0.5653 | - |
| 0.9632 | 602000 | 0.6448 | - |
| 0.9634 | 602100 | 0.9891 | - |
| 0.9635 | 602200 | 0.8991 | - |
| 0.9637 | 602300 | 0.8615 | - |
| 0.9638 | 602400 | 0.8568 | - |
| 0.9640 | 602500 | 0.7636 | - |
| 0.9642 | 602600 | 0.714 | - |
| 0.9643 | 602700 | 0.5237 | - |
| 0.9645 | 602800 | 1.1789 | - |
| 0.9646 | 602900 | 0.5586 | - |
| 0.9648 | 603000 | 0.5008 | - |
| 0.9650 | 603100 | 0.8864 | - |
| 0.9651 | 603200 | 0.8781 | - |
| 0.9653 | 603300 | 1.0112 | - |
| 0.9654 | 603400 | 0.9674 | - |
| 0.9656 | 603500 | 0.5763 | - |
| 0.9658 | 603600 | 0.4001 | - |
| 0.9659 | 603700 | 0.69 | - |
| 0.9661 | 603800 | 0.8321 | - |
| 0.9662 | 603900 | 0.8196 | - |
| 0.9664 | 604000 | 0.7085 | - |
| 0.9666 | 604100 | 0.8921 | - |
| 0.9667 | 604200 | 0.8983 | - |
| 0.9669 | 604300 | 1.0145 | - |
| 0.9670 | 604400 | 1.1885 | - |
| 0.9672 | 604500 | 0.7833 | - |
| 0.9674 | 604600 | 1.033 | - |
| 0.9675 | 604700 | 1.0585 | - |
| 0.9677 | 604800 | 0.856 | - |
| 0.9678 | 604900 | 0.4847 | - |
| 0.9680 | 605000 | 0.7013 | - |
| 0.9682 | 605100 | 0.7934 | - |
| 0.9683 | 605200 | 1.1386 | - |
| 0.9685 | 605300 | 0.6487 | - |
| 0.9686 | 605400 | 1.0657 | - |
| 0.9688 | 605500 | 0.432 | - |
| 0.9690 | 605600 | 0.822 | - |
| 0.9691 | 605700 | 1.0284 | - |
| 0.9693 | 605800 | 0.4082 | - |
| 0.9694 | 605900 | 0.9734 | - |
| 0.9696 | 606000 | 0.733 | - |
| 0.9698 | 606100 | 0.608 | - |
| 0.9699 | 606200 | 0.9526 | - |
| 0.9701 | 606300 | 0.837 | - |
| 0.9702 | 606400 | 0.8188 | - |
| 0.9704 | 606500 | 0.9309 | - |
| 0.9706 | 606600 | 0.7929 | - |
| 0.9707 | 606700 | 0.5051 | - |
| 0.9709 | 606800 | 0.9299 | - |
| 0.9710 | 606900 | 0.8015 | - |
| 0.9712 | 607000 | 0.6867 | - |
| 0.9714 | 607100 | 1.1677 | - |
| 0.9715 | 607200 | 0.7181 | - |
| 0.9717 | 607300 | 0.9442 | - |
| 0.9718 | 607400 | 0.663 | - |
| 0.9720 | 607500 | 0.7396 | - |
| 0.9722 | 607600 | 0.8251 | - |
| 0.9723 | 607700 | 0.6575 | - |
| 0.9725 | 607800 | 0.6674 | - |
| 0.9726 | 607900 | 0.7778 | - |
| 0.9728 | 608000 | 0.6021 | - |
| 0.9730 | 608100 | 0.9309 | - |
| 0.9731 | 608200 | 0.8329 | - |
| 0.9733 | 608300 | 0.9359 | - |
| 0.9734 | 608400 | 0.7212 | - |
| 0.9736 | 608500 | 1.0956 | - |
| 0.9738 | 608600 | 0.6235 | - |
| 0.9739 | 608700 | 0.6951 | - |
| 0.9741 | 608800 | 0.7357 | - |
| 0.9742 | 608900 | 0.427 | - |
| 0.9744 | 609000 | 1.3058 | - |
| 0.9746 | 609100 | 0.6824 | - |
| 0.9747 | 609200 | 0.7743 | - |
| 0.9749 | 609300 | 0.6551 | - |
| 0.9750 | 609400 | 0.5327 | - |
| 0.9752 | 609500 | 0.7648 | - |
| 0.9754 | 609600 | 0.6966 | - |
| 0.9755 | 609700 | 0.9422 | - |
| 0.9757 | 609800 | 1.1221 | - |
| 0.9758 | 609900 | 0.8919 | - |
| 0.9760 | 610000 | 0.5507 | - |
| 0.9762 | 610100 | 0.7228 | - |
| 0.9763 | 610200 | 0.7117 | - |
| 0.9765 | 610300 | 0.5439 | - |
| 0.9766 | 610400 | 1.0969 | - |
| 0.9768 | 610500 | 0.8394 | - |
| 0.9770 | 610600 | 1.4258 | - |
| 0.9771 | 610700 | 0.7213 | - |
| 0.9773 | 610800 | 0.8785 | - |
| 0.9774 | 610900 | 0.7981 | - |
| 0.9776 | 611000 | 0.526 | - |
| 0.9778 | 611100 | 0.6145 | - |
| 0.9779 | 611200 | 0.626 | - |
| 0.9781 | 611300 | 0.6958 | - |
| 0.9782 | 611400 | 0.7504 | - |
| 0.9784 | 611500 | 0.7285 | - |
| 0.9786 | 611600 | 1.0159 | - |
| 0.9787 | 611700 | 0.5826 | - |
| 0.9789 | 611800 | 0.7113 | - |
| 0.9790 | 611900 | 1.166 | - |
| 0.9792 | 612000 | 1.1578 | - |
| 0.9794 | 612100 | 0.7783 | - |
| 0.9795 | 612200 | 0.5356 | - |
| 0.9797 | 612300 | 0.9754 | - |
| 0.9798 | 612400 | 0.6884 | - |
| 0.9800 | 612500 | 0.6951 | - |
| 0.9802 | 612600 | 0.6126 | - |
| 0.9803 | 612700 | 0.5493 | - |
| 0.9805 | 612800 | 0.6776 | - |
| 0.9806 | 612900 | 0.5393 | - |
| 0.9808 | 613000 | 0.5629 | - |
| 0.9810 | 613100 | 0.7929 | - |
| 0.9811 | 613200 | 0.8572 | - |
| 0.9813 | 613300 | 1.056 | - |
| 0.9814 | 613400 | 0.6643 | - |
| 0.9816 | 613500 | 0.6809 | - |
| 0.9818 | 613600 | 0.8654 | - |
| 0.9819 | 613700 | 0.9761 | - |
| 0.9821 | 613800 | 1.0267 | - |
| 0.9822 | 613900 | 0.6882 | - |
| 0.9824 | 614000 | 0.6095 | - |
| 0.9826 | 614100 | 0.6508 | - |
| 0.9827 | 614200 | 0.8784 | - |
| 0.9829 | 614300 | 0.6203 | - |
| 0.9830 | 614400 | 1.0917 | - |
| 0.9832 | 614500 | 0.6585 | - |
| 0.9834 | 614600 | 0.5119 | - |
| 0.9835 | 614700 | 0.9765 | - |
| 0.9837 | 614800 | 0.84 | - |
| 0.9838 | 614900 | 0.6817 | - |
| 0.9840 | 615000 | 0.8435 | - |
| 0.9842 | 615100 | 0.6928 | - |
| 0.9843 | 615200 | 0.6534 | - |
| 0.9845 | 615300 | 0.5802 | - |
| 0.9846 | 615400 | 0.8526 | - |
| 0.9848 | 615500 | 0.841 | - |
| 0.9850 | 615600 | 0.8053 | - |
| 0.9851 | 615700 | 0.631 | - |
| 0.9853 | 615800 | 0.6311 | - |
| 0.9854 | 615900 | 0.9212 | - |
| 0.9856 | 616000 | 0.6748 | - |
| 0.9858 | 616100 | 0.6688 | - |
| 0.9859 | 616200 | 0.5771 | - |
| 0.9861 | 616300 | 0.753 | - |
| 0.9862 | 616400 | 0.7481 | - |
| 0.9864 | 616500 | 0.842 | - |
| 0.9866 | 616600 | 0.7109 | - |
| 0.9867 | 616700 | 0.9474 | - |
| 0.9869 | 616800 | 0.6522 | - |
| 0.9870 | 616900 | 0.5251 | - |
| 0.9872 | 617000 | 0.6909 | - |
| 0.9874 | 617100 | 0.8574 | - |
| 0.9875 | 617200 | 0.5703 | - |
| 0.9877 | 617300 | 0.9685 | - |
| 0.9878 | 617400 | 0.8947 | - |
| 0.9880 | 617500 | 0.5895 | - |
| 0.9882 | 617600 | 1.0236 | - |
| 0.9883 | 617700 | 0.5926 | - |
| 0.9885 | 617800 | 0.7436 | - |
| 0.9886 | 617900 | 0.6056 | - |
| 0.9888 | 618000 | 0.7208 | - |
| 0.9890 | 618100 | 0.9684 | - |
| 0.9891 | 618200 | 0.6403 | - |
| 0.9893 | 618300 | 0.8872 | - |
| 0.9894 | 618400 | 0.7158 | - |
| 0.9896 | 618500 | 0.6708 | - |
| 0.9898 | 618600 | 0.8817 | - |
| 0.9899 | 618700 | 0.8722 | - |
| 0.9901 | 618800 | 0.6972 | - |
| 0.9902 | 618900 | 0.752 | - |
| 0.9904 | 619000 | 1.6841 | - |
| 0.9906 | 619100 | 1.0315 | - |
| 0.9907 | 619200 | 0.5925 | - |
| 0.9909 | 619300 | 1.2046 | - |
| 0.9910 | 619400 | 0.9529 | - |
| 0.9912 | 619500 | 0.512 | - |
| 0.9914 | 619600 | 0.9372 | - |
| 0.9915 | 619700 | 0.8461 | - |
| 0.9917 | 619800 | 1.0018 | - |
| 0.9918 | 619900 | 0.8104 | - |
| 0.9920 | 620000 | 0.9701 | - |
| 0.9922 | 620100 | 0.9382 | - |
| 0.9923 | 620200 | 0.7666 | - |
| 0.9925 | 620300 | 0.5209 | - |
| 0.9926 | 620400 | 0.5529 | - |
| 0.9928 | 620500 | 0.8119 | - |
| 0.9930 | 620600 | 0.7313 | - |
| 0.9931 | 620700 | 0.7657 | - |
| 0.9933 | 620800 | 0.7837 | - |
| 0.9934 | 620900 | 0.7026 | - |
| 0.9936 | 621000 | 0.7149 | - |
| 0.9938 | 621100 | 0.6568 | - |
| 0.9939 | 621200 | 0.7321 | - |
| 0.9941 | 621300 | 0.7595 | - |
| 0.9942 | 621400 | 0.6011 | - |
| 0.9944 | 621500 | 1.2311 | - |
| 0.9946 | 621600 | 0.4925 | - |
| 0.9947 | 621700 | 0.8688 | - |
| 0.9949 | 621800 | 0.4481 | - |
| 0.9950 | 621900 | 1.0283 | - |
| 0.9952 | 622000 | 1.2286 | - |
| 0.9954 | 622100 | 0.873 | - |
| 0.9955 | 622200 | 0.7679 | - |
| 0.9957 | 622300 | 0.8617 | - |
| 0.9958 | 622400 | 0.6354 | - |
| 0.9960 | 622500 | 0.5432 | - |
| 0.9962 | 622600 | 1.113 | - |
| 0.9963 | 622700 | 0.8108 | - |
| 0.9965 | 622800 | 0.9604 | - |
| 0.9966 | 622900 | 0.6366 | - |
| 0.9968 | 623000 | 0.7617 | - |
| 0.9970 | 623100 | 0.7081 | - |
| 0.9971 | 623200 | 0.7325 | - |
| 0.9973 | 623300 | 0.6241 | - |
| 0.9974 | 623400 | 0.4382 | - |
| 0.9976 | 623500 | 0.3651 | - |
| 0.9978 | 623600 | 0.6324 | - |
| 0.9979 | 623700 | 0.5758 | - |
| 0.9981 | 623800 | 0.7779 | - |
| 0.9982 | 623900 | 0.7489 | - |
| 0.9984 | 624000 | 0.6391 | - |
| 0.9986 | 624100 | 0.673 | - |
| 0.9987 | 624200 | 0.7025 | - |
| 0.9989 | 624300 | 0.871 | - |
| 0.9990 | 624400 | 1.0238 | - |
| 0.9992 | 624500 | 0.5088 | - |
| 0.9994 | 624600 | 0.7578 | - |
| 0.9995 | 624700 | 0.8879 | - |
| 0.9997 | 624800 | 1.1698 | - |
| 0.9998 | 624900 | 1.0531 | - |
| 1.0000 | 625000 | 0.838 | - |
</details>
### Framework Versions
- Python: 3.8.10
- Sentence Transformers: 3.1.1
- Transformers: 4.45.2
- PyTorch: 2.4.1+cu118
- Accelerate: 1.0.1
- Datasets: 3.0.1
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```
#### CoSENTLoss
```bibtex
@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
PictorAgencia/samsonite_maleta_4_blanca | PictorAgencia | "2025-05-06T21:21:39Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-05-06T21:02:37Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Samsonite_Maleta_4_Blanca
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/PictorAgencia/samsonite_maleta_4_blanca/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('PictorAgencia/samsonite_maleta_4_blanca', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
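For example, the adapter's strength can be scaled or its weights fused into the base model for faster inference. A sketch continuing from the pipeline above (the adapter name `default_0` is an assumption about diffusers' default naming when none is passed to `load_lora_weights`; requires the peft integration):
```py
# Scale the LoRA to 80% strength before generating
pipeline.set_adapters(["default_0"], adapter_weights=[0.8])
image = pipeline('TOK').images[0]

# Or bake the LoRA into the base weights and drop the adapter overhead
pipeline.fuse_lora()
```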
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/PictorAgencia/samsonite_maleta_4_blanca/discussions) to add images that show off what you’ve made with this LoRA.
|
ahmadsky0533/sky0533 | ahmadsky0533 | "2025-05-06T21:21:03Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-06T21:21:02Z" | ---
license: apache-2.0
---
|
buelfhood/irplag_plbart_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_pfeiffer | buelfhood | "2025-05-06T21:17:22Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"plbart",
"region:us"
] | null | "2025-05-06T21:17:20Z" | ---
tags:
- adapter-transformers
- plbart
---
# Adapter `buelfhood/irplag_plbart_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_pfeiffer` for uclanlp/plbart-base
An [adapter](https://adapterhub.ml) for the `uclanlp/plbart-base` model that includes a prediction head for classification (the training dataset is not specified in this card).
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```bash
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("uclanlp/plbart-base")
adapter_name = model.load_adapter("buelfhood/irplag_plbart_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_pfeiffer", set_active=True)
```
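Once the adapter is active, the classification head can be queried like any sequence-classification model. A minimal sketch (the code snippet being classified is arbitrary, and the label semantics are not documented in this card):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # `model` from the snippet above

print(logits.argmax(dim=-1).item())  # predicted class index
```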
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
mradermacher/Open-RS3-GGUF | mradermacher | "2025-05-06T21:14:08Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:knoveleng/open-rs",
"dataset:knoveleng/open-s1",
"dataset:knoveleng/open-deepscaler",
"base_model:knoveleng/Open-RS3",
"base_model:quantized:knoveleng/Open-RS3",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-06T18:07:15Z" | ---
base_model: knoveleng/Open-RS3
datasets:
- knoveleng/open-rs
- knoveleng/open-s1
- knoveleng/open-deepscaler
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/knoveleng/Open-RS3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Open-RS3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
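A single quant can also be fetched programmatically and then loaded with any GGUF-capable runtime such as llama.cpp (a sketch using `huggingface_hub`; the Q4_K_M filename is taken from the table below):
```python
from huggingface_hub import hf_hub_download

# Downloads one quant file into the local HF cache and returns its path
path = hf_hub_download(
    repo_id="mradermacher/Open-RS3-GGUF",
    filename="Open-RS3.Q4_K_M.gguf",
)
print(path)  # pass this path to llama.cpp, llama-cpp-python, etc.
```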
## Provided Quants
(Sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants.)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.Q5_K_S.gguf) | Q5_K_S | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.Q6_K.gguf) | Q6_K | 1.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.Q8_0.gguf) | Q8_0 | 2.0 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Open-RS3-GGUF/resolve/main/Open-RS3.f16.gguf) | f16 | 3.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Machlovi/Gemma3_12_MegaHateCatplus | Machlovi | "2025-05-06T21:11:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-20T04:34:18Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/coreOCR-7B-050325-preview-i1-GGUF | mradermacher | "2025-05-06T21:08:58Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"OCR",
"Pdf",
"Doc",
"Image",
"en",
"dataset:allenai/olmOCR-mix-0225",
"dataset:prithivMLmods/Opendoc1-Analysis-Recognition",
"dataset:prithivMLmods/Opendoc2-Analysis-Recognition",
"dataset:prithivMLmods/Openpdf-Analysis-Recognition",
"base_model:prithivMLmods/coreOCR-7B-050325-preview",
"base_model:quantized:prithivMLmods/coreOCR-7B-050325-preview",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-06T20:36:40Z" | ---
base_model: prithivMLmods/coreOCR-7B-050325-preview
datasets:
- allenai/olmOCR-mix-0225
- prithivMLmods/Opendoc1-Analysis-Recognition
- prithivMLmods/Opendoc2-Analysis-Recognition
- prithivMLmods/Openpdf-Analysis-Recognition
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- OCR
- Pdf
- Doc
- Image
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/coreOCR-7B-050325-preview
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q4_1.gguf) | i1-Q4_1 | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/coreOCR-7B-050325-preview-i1-GGUF/resolve/main/coreOCR-7B-050325-preview.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kernels-community/flash-attn | kernels-community | "2025-05-06T21:08:38Z" | 0 | 0 | null | [
"kernel",
"license:bsd-3-clause",
"region:us"
] | null | "2025-03-25T00:01:55Z" | ---
license: bsd-3-clause
tags:
- kernel
---

# Flash Attention
Flash Attention is a fast and memory-efficient implementation of the attention mechanism, designed to work with large models and long sequences. This is a Hugging Face compliant kernel build of Flash Attention.
Original code here [https://github.com/Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention).
```python
# /// script
# dependencies = ["numpy", "torch", "kernels"]
# ///
import torch
from kernels import get_kernel
# Setup
torch.manual_seed(42)
flash_attn = get_kernel("kernels-community/flash-attn")
device = torch.device("cuda")
# Show available functions
print("Flash Attention functions:", [i for i in dir(flash_attn) if i.startswith("mha")])
# 1. Standard attention
print("\n1. Standard attention:")
B, S, H, D = 2, 5, 4, 8 # batch, seq_len, heads, head_dim
q = k = v = torch.randn(B, S, H, D, device=device, dtype=torch.float16)
out = flash_attn.mha_fwd(q=q, k=k, v=v, is_causal=False)[0]
print(f"Output: {out.shape}")
# 2. Variable length sequences
print("\n2. Variable length sequences:")
q_var = torch.randn(10, H, D, device=device, dtype=torch.float16) # total_q=10
k_var = v_var = torch.randn(12, H, D, device=device, dtype=torch.float16) # total_k=12
# For 3 sequences with lengths [3,4,3] for q and [4,5,3] for k
cu_q = torch.tensor([0, 3, 7, 10], device=device, dtype=torch.int32)
cu_k = torch.tensor([0, 4, 9, 12], device=device, dtype=torch.int32)
out_var = flash_attn.mha_varlen_fwd(
q=q_var,
k=k_var,
v=v_var,
cu_seqlens_q=cu_q,
cu_seqlens_k=cu_k,
max_seqlen_q=4,
max_seqlen_k=5,
)[0]
print(f"Output: {out_var.shape}")
# 3. KV-cache for autoregressive generation
print("\n3. KV-cache:")
cache_len, new_len = 10, 2
kcache = vcache = torch.randn(B, cache_len, H, D, device=device, dtype=torch.float16)
q_new = k_new = v_new = torch.randn(
B, new_len, H, D, device=device, dtype=torch.float16
)
seqlens = torch.full((B,), cache_len + new_len, device=device, dtype=torch.int32)
out_kv = flash_attn.mha_fwd_kvcache(
q=q_new,
kcache=kcache,
vcache=vcache,
k=k_new,
v=v_new,
seqlens_k=seqlens,
is_causal=True,
)[0]
print(f"Output: {out_kv.shape}")
```
Expected output:
```txt
Fetching 3 files: 100%|█████████████████████████████████████████████████████| 3/3 [00:00<00:00, 16384.00it/s]
Flash Attention functions: ['mha_bwd', 'mha_fwd', 'mha_fwd_kvcache', 'mha_varlen_bwd', 'mha_varlen_fwd']
1. Standard attention:
Output: torch.Size([2, 5, 4, 8])
2. Variable length sequences:
Output: torch.Size([10, 4, 8])
3. KV-cache:
Output: torch.Size([2, 2, 4, 8])
```
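As an optional sanity check (a hedged sketch, not part of the original example), `mha_fwd` can be compared against PyTorch's built-in `scaled_dot_product_attention`. Note the layout difference: this kernel takes `(batch, seq, heads, dim)` while SDPA expects `(batch, heads, seq, dim)`.
```python
# Hedged sanity-check sketch: compare mha_fwd against torch SDPA.
import torch
import torch.nn.functional as F
from kernels import get_kernel

flash_attn = get_kernel("kernels-community/flash-attn")
device = torch.device("cuda")

B, S, H, D = 2, 5, 4, 8
q = torch.randn(B, S, H, D, device=device, dtype=torch.float16)
k = torch.randn(B, S, H, D, device=device, dtype=torch.float16)
v = torch.randn(B, S, H, D, device=device, dtype=torch.float16)

out_flash = flash_attn.mha_fwd(q=q, k=k, v=v, is_causal=False)[0]
# SDPA uses (B, H, S, D), so transpose in and back out.
out_ref = F.scaled_dot_product_attention(
    q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
).transpose(1, 2)

print(torch.allclose(out_flash, out_ref, atol=1e-2, rtol=1e-2))
```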
|
mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF | mradermacher | "2025-05-06T21:08:07Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"en",
"base_model:abdoubtmne/SmolVLM-256M-Base-ocr2md.v.1.2",
"base_model:quantized:abdoubtmne/SmolVLM-256M-Base-ocr2md.v.1.2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-06T21:05:39Z" | ---
base_model: abdoubtmne/SmolVLM-256M-Base-ocr2md.v.1.2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/abdoubtmne/SmolVLM-256M-Base-ocr2md.v.1.2
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
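As a small, hedged sketch (not from the original card), a quant from the table below can be fetched programmatically; the resulting path can then be handed to any GGUF-compatible runtime such as a llama.cpp build:
```python
# Hedged sketch: download the Q4_K_M quant listed below via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF",
    filename="SmolVLM-256M-Base-ocr2md.v.1.2.Q4_K_M.gguf",
)
print(path)  # pass this file to your GGUF runtime of choice
```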
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.Q6_K.gguf) | Q6_K | 0.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.Q8_0.gguf) | Q8_0 | 0.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SmolVLM-256M-Base-ocr2md.v.1.2-GGUF/resolve/main/SmolVLM-256M-Base-ocr2md.v.1.2.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
nekomajin/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel | nekomajin | "2025-05-06T21:07:46Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am mighty hoarse camel",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-16T11:36:21Z" | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mighty hoarse camel
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nekomajin/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mradermacher/gpt2-demo-i1-GGUF | mradermacher | "2025-05-06T21:06:38Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:demo-leaderboard/gpt2-demo",
"base_model:quantized:demo-leaderboard/gpt2-demo",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-05-06T21:01:42Z" | ---
base_model: demo-leaderboard/gpt2-demo
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/demo-leaderboard/gpt2-demo
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gpt2-demo-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
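For multi-part files specifically, concatenation just means joining the parts in order. A hedged Python sketch follows; the part filenames are hypothetical placeholders (see TheBloke's READMEs linked above for the actual naming convention):
```python
# Hedged sketch: concatenate a split GGUF into one file. The part names
# below are hypothetical placeholders, not files from this repository.
import shutil

parts = ["model.gguf.part1of2", "model.gguf.part2of2"]
with open("model.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```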
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ1_M.gguf) | i1-IQ1_M | 0.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ2_S.gguf) | i1-IQ2_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ2_M.gguf) | i1-IQ2_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q2_K.gguf) | i1-Q2_K | 0.2 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ3_S.gguf) | i1-IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ3_M.gguf) | i1-IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q4_0.gguf) | i1-Q4_0 | 0.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q4_1.gguf) | i1-Q4_1 | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/gpt2-demo-i1-GGUF/resolve/main/gpt2-demo.i1-Q6_K.gguf) | i1-Q6_K | 0.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
buelfhood/irplag_codeberta_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_pfeiffer | buelfhood | "2025-05-06T21:04:20Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"region:us"
] | null | "2025-05-06T21:04:17Z" | ---
tags:
- adapter-transformers
- roberta
---
# Adapter `buelfhood/irplag_codeberta_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_pfeiffer` for huggingface/CodeBERTa-small-v1
An [adapter](https://adapterhub.ml) for the `huggingface/CodeBERTa-small-v1` model that was trained on the None dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("huggingface/CodeBERTa-small-v1")
adapter_name = model.load_adapter("buelfhood/irplag_codeberta_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_pfeiffer", set_active=True)
```
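From there, a hedged inference sketch (continuing from the snippet above; the tokenizer choice and input string are illustrative, and the adapter's label set is not documented on this card):
```python
# Hedged sketch: tokenize a code snippet and read classification logits from
# the adapter's prediction head. `model` comes from the snippet above.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```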
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
praisethefool/human_tech-fields-multilabelclassifier | praisethefool | "2025-05-06T21:04:05Z" | 0 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | "2025-05-03T02:36:39Z" | ---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: Untangle Solving problems with fuzzy constraints Szymon Kaliski Marcel Goethals Mike Kluev January 2023
Have you ever needed to find a time for all your friends to meet for dinner or
to create a seating plan for wedding guests These are examples of problems that
require navigating a set of overlapping constraints you are only available on
every other Tuesday or Thursday Chris will not show up if Ash is not there but
Ash is in town only the last week of November and you really wanted to catch up
with Chris We often work out problems like this with a pencil and paper experimenting
until we find a solution but it feels like computers should be the perfect tool
to help us In fact there are programming tools called theorem provers which are
designed to solve exactly this class of problems They excel at helping experts
work through fullyspecified problems with a clear solution Unfortunately we rarely
have a formal understanding of our problems We come to understand them interactively
by trying to find a solution In fact we often just need a solution that is good
enough instead of one proven to be optimal We set out to experiment with interactive
computerassistance for this type of thinking Untangle is a research prototype
that helps you think through everyday constraint problems using your tablet stylus
With Untangle you leave handdrawn marks on a page sketch out the representation
of your problem introduce constraints graphically and browse through sets of possible
solutions This work was presented as a part of our Programmable Ink talk at Strange
Loop 2022 We welcome your feedback inkandswitch or helloinkandswitchcom Contents
Thinking fuzziness and constraints Design principles Inspirations Logic programming
Pattern matching Graphical production systems Realworld applications of logic
programming Untangle Assignment problem Frequency assignment bids Coffee shop
schedule Generative art Recursive rewrites Findings A smooth ramp from concrete
to abstract helps form intuitions about the system Informal solving is most useful
for certain kinds of problems Many problem representations look like tables Handdrawn
input is well aligned with exploratory problemsolving Fuzziness and live feedback
contribute to a conversational feel Shortcomings The system lacks clear semantics
Turning ink into symbols is an unnecessary technical crutch Some parts of the
system lack visibility Responses from the computer are underdesigned Small canvas
artificially constrains the problem representation There is no way to explore
the solution space Conclusions Thinking fuzziness and constraints Computers can
be great at solving logic problems like the ones mentioned above as long as we
can describe them in a formally correct and detailed way Special programming languages
and techniques—theorem provers—exist and can calculate solutions for huge datasets
These languages are most often used by mathematicians to help with proving formal
theorems or by domain experts to aid in modeling large scale industrial production
systems A Constraint satisfaction problem can be solved using logic programming
techniques such as Satisfiability Modulo Theories SMT or Answer Set Programming
ASP These programming languages are useful after we have encoded a problem in
machinereadable form but first we must do the harder part fully understand the
problem For this we often reach for pen and paper which allows us to think fuzzily
and omit various levels of detail when problemsolving We can quickly sketch out
the representation of the problem without worrying about absolute correctness
For a sampling of realworld pen paper constraint problem representations have
a look at How People Visually Represent Discrete Constraint Problems by Xu Zhu
et al This project explores what it might look like if computers could support
this style of earlystage thinking Untangle is specifically not a tool for solving
artificial logic puzzles nor is it a tool for creating formal specifications for
industrial systems Instead we are interested in a tool that can help us think
through illdefined problems understand compromises and learn about what kind of
questions to ask Untangle is a continuation of the threads highlighted in the
thinking modeling and computers section of the Crosscut essay Mainly we want a
tool in which we can sketch a dynamic representation of the problem at hand and
have a conversation with it Design principles Keep focus on the problem not the
implementationWe want a tool in which your focus remains on the problem at hand
as much as possible rather than thinking about the correct way to encode it in
a machinereadable way No errors undefined values or unknown parameters to fill
inThe tool should never block freeze or become unresponsive even if the user creates
invalid states such as errors or incomplete input A wrong answer is better than
no answer Everything is visibleBoth the domain model and the constraints should
always be visible and interactive Conversation with the material is encouragedWe
want an iterative approach to problem solving—one where observing leads to thinking
which leads to acting which leads back to observing You should be able to intuit
connections between various rules and constraints by wiggling them and seeing
other things wobble We also adopted most of the design principles from Crosscut
A tablet stylus can become dynamic pen paper The content of what you are working
on is the most important thing You should not have to use an onscreen keyboard
for programming This is a personal thinking space Untangle shares a lot of context
with Crosscut a research project in which we explored an approach to building
dynamic models by direct manipulation There is one important design difference
from Crosscut using handdrawn strokes instead of vector graphics We believe there
is something special about leaving distinctively human marks on the page with
a stylus so we want to go back to handdrawn marks like with Inkbase but with an
entirely different computational model Inspirations Logic programming Logic programming
is an important paradigm in computer science It is based on formal logic and allows
programmers to describe problems using declarative statements “every human is
mortal” and ask questions based on these statements “X is human is X mortal” Some
notable logic programming languages include Prolog one of the first logic programming
languages widely used for theorem proving term rewriting automated planning Z3
Theorem Prover a satisfiability modulo theories SMT solver that is targeted at
software verification and program analysis Untangle uses Z3 as a library for solving
Alloy Analyzer Alloy is a language for describing and exploring software models
It has been used in a wide range of applications from finding holes in security
mechanisms to designing telephone switching networks Of particular interest is
Alloy’s IDE that visualizes a possible structure based on the constraints provided
by the user Alloy Analyzer’s interactive solver visualization Pattern matching
Untangle relies heavily on spatial queries—finding symbols on the page by their
spatial relation to other known symbols—which were inspired by Regex a domainspecific
language for describing search patterns in text Qualitative Spatial Reasoning
a calculus which allows a machine to represent and reason about spatial entities
without resorting to traditional quantitative techniques QSR is often used in
GIS Geographic Information Systems for querying geographical data For more information
on QSR check out A survey of qualitative spatial representations by Chen et al
Graphical production systems Work on Untangle was also inspired by graphical production
systems which use shape matching rules and graphical rewrites to describe computations
Publications that guided our work Shape Grammars and the Generative Specification
of Painting and Sculpture a seminal paper by Stiny and Gips introducing shape
grammars New graphical reasoning models for understanding graphical interfaces
a paper from Furnas introducing BICPICT a pixelrewriting graphical reasoning system
Wave Function Collapse an approach for generating tile maps from a single example
which influenced our thinking on using superposition as a mental model for working
with multiple possible values Wave Function Collapse algorithm visualization Figure
by Maxim Gumin Realworld applications of logic programming Finally we were guided
by various examples of using theorem provers for working through everyday problems
How people visually represent discrete constraint problems Using linear programming
GLPK for scheduling problems Tax planning with Z3 Theorem Prover A shared pinboard
becomes a collaborative modeling tool to plan a dinner party Untangle We are now
going to introduce Untangle a tool for working out realworld logic problems Assignment
problem To explore the basic concepts of the tool we are going to look at an assignment
problem imagine you are teaching a class it is the end of the semester and each
student needs to submit a short paper about some topic You want students to grade
each other so you have less work to do Let us see how Untangle can help us solve
this problem This reallife problem comes from Joshua Horowitz thank you Josh In
its most basic form Untangle allows you to automatically assign symbols to other
symbols For example you can put Bob into a box by drawing an arrow between the
text Bob and the box—Bob will now appear in pink inside the box A symbol is anything
that appears on the canvas Symbols can be a single stroke or collections of strokes
Assigning Bob to a box We can also assign multiple symbols to a box Untangle will
now show that there are two possible assignments by showing two dots at the bottom
of the screen We can scroll through the different solutions Bob and Eve Assigning
multiple students to the same box There are six students in our class so we will
write all of their names Instead of drawing six arrows we can use spatial queries
to find elements on the canvas Using a spatial query to grab all of the student
names Whenever we make a selection of symbols on the canvas a popup appears This
popup suggests different spatial queries we can use to match our selection In
this case it suggests that we could look for a vertical column of symbols starting
with Bob Representation of a spatial query capturing a column of symbols starting
with Bob We can place this query onto the canvas and use it as a shortcut to refer
to all the symbols that match it The matching symbols are highlighted in the same
color for additional visual feedback Untangle only ever places one symbol into
each box Whenever there are multiple ways to assign symbols Untangle will generate
multiple solutions but display only one The dots at the bottom of the screen indicate
that there are alternative possible assignments We can get a sense of the different
solutions by scrolling through them Solution switcher We want to assign a student
to every other student so let us create an empty box next to every name Just like
with the spatial query for a list of names we can create a query that simply looks
for all boxes This generates all possible ways to assign one student to another
one Using the all boxes query to assign every student to another student Currently
students are sometimes assigned to grade themselves While students will surely
be happy to do so as a teacher I would rather avoid this situation Let us add
a rule that prevents students from grading themselves To do this we will add a
third spatial query This query matches any box that has something to the left
of it The question mark acts as a wildcard matching any symbol Using wildcard
to capture something next to a box and make that something not equal to the contents
of the box Finally we can draw an inequality constraint arrow This expresses that
whatever ends up in the box cannot be the same as whatever symbol is on its left
Bob can no longer grade Bob Inequality constraint Let us say we omitted a student
when writing the original list of names Because we are using spatial queries to
describe the column of student names we can easily extend the list Extending the
student list with a new name The results update reactively It turns out that Bob
and Claire have written papers about similar topics So it would be great if they
grade each other We can ensure this by simply putting their names into the corresponding
boxes and the system will adapt accordingly Forcing the assignment of Bob to Claire
Again the system reacts with new results Frequency assignment bids To show a few
more interesting properties of Untangle let us look at a different example—running
frequency spectrum assignment bids The basic idea is that a specific radio frequency
spectrum say 5G is divided into smaller parts and sold to various operators This
reallife problem comes from William Taysom thank you William Let us start by modeling
this problem There are three telephone companies telcos that are bidding on the
frequency spectrum from 34 GHz to 42 GHz The spectrum is split into eight bands
and the telcos can bid on individual bands Initial model of the spectrum assignment
problem Just as in the student grading example we could use spatial queries to
assign telcos randomly to bands in the spectrum Assigning operators to random
bands of the available spectrum But this is not really how an auction works Instead
telcos bid on a specific number of bands that they want to obtain For example
Verizon might place a bid for four bands We can model this using a count modifier
which limits the number of times a certain rule applies Using a count modifier
to limit the assignment to four times Let us rearrange the canvas a bit and add
some imaginary bids for each of the companies Modeling bids from all of the operators
Some bands can be more valuable for example the middle bands often have less interference
so a company may bid not just on a specific number of bands but also specific
placement inside the spectrum To model this we can either draw an arrow directly
to the specified band or simply drag a symbol in place We can guide the solver
into a direction that we care about and it will respond immediately Constraining
the solutionspace further by modeling bids on specific bands in the available
spectrum The number of bids might not equal the number of available slots—an overconstrained
system Using most solvers the result would be an error message that the constraints
are unsatisfiable Instead of showing an error Untangle will attempt to generate
a partially correct solution by ignoring some of the rules Arrows will turn red
indicating that for the currently shown solution this rule is ignored Relaxing
the overconstrained problem representation by ignoring some of the rules When
having a conversation with the material hearing “no but what if…” is more encouraging
than hearing just “no” The machine does not break or scold you for making a
mistake but instead shows compromises and possible directions which in turn helps
generate new ideas Finally companies sometimes bid specifically on a set of consecutive
bands—rather than just bidding on four bands they want four bands in a row We
can model this using a combination of spatial queries and counts Creating an assignment
rule of four consecutive bands using a combination of spatial queries and count
modifiers Coffee shop schedule In the following example we will show how to create
a work schedule for a coffee shop An easy solution is to assign employees to random
days but to make a good schedule everyone is happy with we need to consider employee
availability To solve this problem we need two dimensions First we list availability
of the employees in the topleft corner For each day that an employee is available
we draw an X For example Jim is available on Monday Wednesday and Friday In the
bottom right we draw a simple empty schedule for the week Modeling the baristas
availability Untangle has a special lookup spatial query that allows us to look
up information in tablelike layouts In this case it matches columnrow pairs together
whenever it finds an X Using the lookup spatial query on the availability table
You can think of this as finding all the available people for each day Untangle
will only generate solutions where the day and name are matched in the availability
table Visualization of running the lookup spatial query We then create a spatial
query to find all of the empty slots in the schedule using a something on left
box on right query Spatial query using a wildcard symbol and a box to the right
of it We only want to assign employees to days they are actually available To
do this we bind the wildcard matches on days of the week We then use the names
to fill in the boxes in the schedules Untangle will show us all possible schedules
based on employee availability Resulting coffee shop work schedules based on employee
availability The proposed schedules often have people working two days in a row—something
we might like to avoid We can set up a simple constraint to make sure no two boxes
in a row hold the same value Constraining consecutive not to hold the same value
It is worth noting that in this example technically all the information needed
to solve the problem is already on the canvas However coming up with a valid solution
still requires serious “System 2” level thinking shuffling around symbols in your
head It feels great to have the system do this part for you System 1 System 2
is a dichotomy introduced by Daniel Kahneman in Thinking Fast and Slow System
1 describes fast intuitive loweffort thinking while System 2 is effortful and
slow Generative art So far all the examples we looked at were about constraining
the solution space—progressively going towards a small set of satisfying solutions
This is only a one part of the problemsolving process which has two distinctive
phases that feedback into each other—expanding and collapsing Expand and discover
different possibilities then narrow scope and focus We can use Untangle’s primitives
to force us to expand the possible solution space instead Let us illustrate this
by recreating one of the most famous computer art pieces 10 PRINT 10 PRINT is
a oneline Commodore 64 BASIC program that generates a visual pattern on the screen
We can make a couple of boxes two symbols and fill all boxes Using spatial queries
to fill all of the boxes with permutations of diagonal lines This already starts
to look interesting but we can get to more compelling results by adding additional
constraints Forcing more interesting results by applying inequality constraints
And of course we can introduce additional symbols and keep exploring the solution
space 10 PRINT variation using three symbols and two inequality constraints Recursive
rewrites We can apply the rewrites recursively by flattening the pink results
back onto the canvas and turning them into black ink This will in turn update
the spatial queries and generate a new set of results New graphical reasoning
models for understanding graphical interfaces is a seminal paper on using graphical
rewrite rules for computation Progressively flattening solver results back onto
the canvas to create basic 1D cellular automata Recursive rewrites can be used
in a lot of interesting ways As an example below is a recreation of a logic gates
demo from Inkbase We start by creating the symbols And the rules they follow Which
shows us immediate feedback on the canvas We can then set up some logic gate networks
and propagate the values through them by writing them onto the canvas to get to
the final result A process of propagating values through two logic gate networks
Findings We have used Untangle to solve the problems outlined above as we were
building the prototype to test our assumptions and intuitions We also conducted
several informal interviews with potential users with background in mathematics
and logic programming Here are some reflections from that process A smooth ramp
from concrete to abstract helps form intuitions about the system Symbols queries
and arrows build up on each other Arrows are a reified way of moving a symbol
into a box manually Spatial queries are a reified way of selecting things manually
Combining the two is a reified way of drawing multiple arrows between multiple
symbols “Reification” means giving a concrete representation to an abstract process
In this sense each primitive is simply a way of expressing in more general terms
what you could already do in an earlier step Gradually climbing this ladder of
abstraction helps build intuition about how the system behaves Informal solving
is most useful for certain kinds of problems There seem to be two dimensions of
complexity for a given problem a problem can have relatively trivial constraints
but many elements that need to be solved or can have a small number of elements
but constraints that are difficult to satisfy There is a sweet spot where Untangle
seems most useful The dataset must be small enough to be manually drawn but with
constraints too complex to easily solve in your head If the dataset is large it
takes too much effort to write it all down and the tablet screensize limits what
can fit on the canvas If the set of constraints is simple enough to keep in your
working memory you can often just solve the problem as you create its representation
without additional help from the computer We found that even a small number
of overlapping constraints forces us to switch to effortful “System 2” thinking
and this is where Untangle shines helping us think through a problem in an informal
way Many problem representations look like tables The way we naturally represent
assignment problems tends to drift toward using tables Examples of tablelike structures
invented adhoc to represent specific problems Even though a structured approach—like
a builtin table tool—might seem more appropriate to modeling these kinds of problems
freeform input feels important The exact shape of the problem representation might
not be clear initially and sketchiness facilitates finding it It feels good to
build up to a table rather than being prematurely forced into one Another possibility
is that the provided spatial queries column with symbol at top row with symbol
at left table lookup etc encourage drawing problems in gridlike structures A rich
area for future work would be adding more ways to query the canvas which could
lead to more diverse representations Handdrawn input is well aligned with exploratory
problemsolving One of our motivations for handdrawn input was to enable drawing
symbols and elements that mapped closely to a particular problem domain For example
you could draw chairs and tables for a seating arrangement In practice we rarely
drew domain objects instead favoring symbols like names logos or even dashes and
dots However handdrawn input still felt much more aesthetically fitting than the
vector version we tried early on Early iteration of Untangle using vector graphics
and artificial symbols At the lab we believe that the fidelity of the tool you
use should be proportional to the maturity of the idea you are working on Being
forced into crisp vector shapes for exploratory problems creates a cognitive dissonance
between the fuzzy nascent problem in our head and the precise symbols on the canvas
Fuzziness and live feedback contribute to a conversational feel Untangle was designed
to get the user to a result as fast as possible Simply drawing a single arrow
can generate multiple results To find a satisfying solution it is often not even
necessary to add additional constraints Instead you can just scrub through proposed
solutions to find one that makes sense If you find a satisfying solution you can
simply stop working even if the results are underspecified In a similar vein if
the solver cannot satisfy your constraints rather than showing an error message
the system will attempt to ignore some constraints relaxing the problem statement
to generate a technically incorrect solution The interface will highlight the
arrows that were ignored to generate each solution In realworld contexts we often
are not trying to find the globally optimal solution but rather just any reasonable
one Spatial queries are also fuzzy We do not look for something “exactly 137px
to the left” but “roughly to the left” This plays well with handdrawn aesthetics
as you never create perfect sketches Additionally spatial queries highlight their
matches directly on the canvas This has two advantages first it helps explain
what the query is doing—even if you have no idea of the underlying formalism you
can experimentally find out what is happening Secondly it makes it transparent
when the imperfect matching algorithm does not work as expected You can always
just wiggle your drawing a bit to get the system to recognize it Finally arrows
point to and inside of queries rather than connecting to ports This has a specific
informal feel which meshes well with the fuzziness of other parts of the system
and is distinctively different from the feel of things snapping into each other
Shortcomings The system lacks clear semantics Untangle’s “language” is unspecified
and not very composable The set of provided spatial queries is adhoc We created
new queries for problems we were solving in the examples instead of building them
up from first principles As a result we can solve the examples nicely queries
are not composable or abstractable—you cannot combine to the left and to the top
to create the lookup query used in the coffee shop schedule nor can you push the
results from one query into another one to create reusable functionalities Additionally
creating these queries purely by example can be quite tedious—especially when
it comes to selecting wildcard configurations It seems clear it would be better
to create a match by example then interactively refine it through direct manipulation
One exciting piece of research in this direction is described in Perceptual Grouping
Selection Assistance for Digital Sketching by David Lindlbauer et al Most importantly
it is hard to intuit how the primitives behave beyond basic demos For example
there is a subtle difference between constraining elements of the solution space
and constraining elements of the match Careful thought is required to discern
the difference Turning ink into symbols is an unnecessary technical crutch Untangle
is a system for assigning candidate symbols to potential targets In our experience
these symbols often consisted of many ink strokes such as for people’s names In
order to assign these to a target we need to recognize those strokes as being
part of a grouping Using a magic wand to group multiple strokes into a single
symbol In past research we found that it was very difficult to reliably and unambiguously
group ink strokes or to recognize repetitions For this project we simply sidestepped
the problem with a command to create a symbol out of a group of strokes This was
convenient for a research prototype but we feel this is unfortunate technical
ceremony Similarly when creating a drawing we encourage users to reuse copies
of the same target symbol to hint the solver system that those targets are related
Sometimes copying objects can be a fast and intuitive thing to do Other times—especially
when the shapes are simple—it feels more natural just to draw them again and have
them match automatically The history of inferring specific user intent from ink
gestures remains an open problem after many years of related work including work
on shape recognition for drawing programs and back to early stylusbased text input
systems such as Palm’s Graffiti Some parts of the system lack visibility A computing
system should never leave a user feeling uncertain about whether the intent of
the user has been understood In Untangle assignment arrows are freeform they can
exist with or without valid sources and targets This meshes well with the fuzzy
aesthetic but our implementation does not provide user feedback on whether the arrow
has actually connected the items on the canvas other than the updated solutions
For this specific example it is easy to imagine how to improve this For example
by color coding arrows and connected spatial queries the same way we do for query
matches However color coding query components as well as their matches created
a visually noisy rainbow canvas Finding the right cues and feedback for a system
like this is a subtle task There is a fundamental tension between maintaining
a focus on the user’s input by avoiding unnecessary UI chrome and preventing confusion
by providing sufficient feedback Responses from the computer are underdesigned
In systems where users and computers collaborate it helps to distinguish between
user input and computed responses In Untangle user input is always black and we
use pink to distinguish computed results from userdrawn strokes We refer to this
as the user voice vs the computer voice Throughout our research we often want
the machine to respond and draw with us but what is the “correct” way to do so
Because Untangle limits user input to a single color rendering results in pink
makes the distinction immediately clear but we worry this would become problematic
in a more fullfledged system We briefly explored rendering computed results using
a different “pen” for example a stylized marker but that felt uncanny—the user’s
input strokes redrawn exactly but with a different aesthetic Small canvas artificially
constrains the problem representation In Untangle the relatively small screen
size and lack of support for canvas features such as panning or zooming limits
the amount of data and the complexity of the problems you can represent Drawing
everything by hand also contributes to this limitation—it simply requires too
much effort to draw hundreds of student names or create a staff calendar for an
entire year We are interested in how the experience of Untangle would evolve as
we explored larger scale problems or more complicated representations For example
one future project we would love to see is “Untangle with external data” There
is no way to explore the solution space One omission in this work is the limited
ability to visualize the solution space Yes we can narrow down the solution space
by reifying a result “Sam has to review Steve’s paper” or by adding constraints
“the wildcard cannot be the same as the contents of the box next to it” but while
Untangle allows you to “scrub” through results it only ever shows one result at
a time In fact the solution space is not a homogeneous list there are recurring
patterns It would be interesting to explore visualizations that revealed clusters
or branches of candidate solutions that share similarities or how different
constraints “cut off” certain areas of the solution space Conclusions With this
project we set out to discover what a nonbureaucratic theorem prover might look
like The traditional programming interface to a theorem prover is both strict
and formal Untangle shows a glimpse of a computational model with fuzziness at
its core Being able to handwave at a problem and get to results—often on the first
browse through the solution space—feels wonderful and is a stark contrast Spatial
queries provide a way to create structure on top of a freeform drawing Instead
of forcing problems prematurely into tabular form you can start sketching the
problem however feels natural You then work with the system to query that diagram
for a satisfying solution In some cases the final result may take the form of
more traditional tabular data but we found that building up to it from a freeform
drawing and not being constrained by it prematurely allowed us to explore our
ideas more naturally The combination of symbols spatial queries and arrows provides
a nice onramp for abstracting logic Rules can be built up out of simple examples
gradually adding assignment arrows or replacing those arrows’ concrete sources
and targets with spatial queries We feel this conceptual buildup is very promising
and points at a possible way of solving the repetition problem described in the
Crosscut essay Untangle is part of our “programmable ink” track of research continuing
from previous projects Inkbase and Crosscut We remain optimistic about systems
in which you directly manipulate the representation of the problem at hand and
that remain alive and reactive This combination allows you to improvise and rely
on intuitions instead of having to switch your thinking mode to one of effortful
logical computation We see here an exciting glimpse of conversation with a dynamic
medium—sketching at the speed of thought and collaborating with the machine We
welcome your feedback inkandswitch or helloinkandswitchcom Thank you to everyone
who offered feedback and guidance throughout the project Peter van Hardenberg
James Lindenbaum Todd Matthews Kevin Lynagh Geoffrey Litt Scott Jenson Joshua
Horowitz Patrick Dubroy William Taysom Daniel Krasner Ivan Reese Paul Shen Max
Schoening
- text: Harnessing the Medici Effect for More Profound Web3 ImpactHow we have strayed off the path
of originality and how to get back on itBy Trinity Morphy May 21st 2024In this
piece Trinity Morphy takes inspiration from a recent event to look at why originality
has been so difficult to attain in the Web3 impact space and how a phenomenon
called the Medici Effect can get us back on trackI had the privilege of attending
a Green Pill Nigeria Impact Tour event two weeks ago In his talk Izzy the lead
at Green Pill Nigeria pointed out the troubling trend of imitation in the impact
space Each cycle he said seems to be filled with recycled project ideas It has
become customary for Project XYZ to emerge with a seemingly novel concept only
to be quickly followed by Project ABC essentially a copycat with a new bell or
whistleThis lack of originality especially in Web3 impact is not just a hurdle
its a roadblock For instance other effective ways exist to utilise impact NFTs
apart from sequestering carbon or planting new trees Yet we keep seeing new projects
based on the same idea but with a more exciting descriptionHow can we actively
encourage a shift from simple recycling to genuine originality The Medici Effect
the idea that the most innovative ideas happen when you combine concepts from different
fields offers insight By looking beyond the impact field we can unlock the potential
for more impactful solutionsWhy we recycle ideas and why originality is so importantReduced
risk and familiarity Humans naturally gravitate
towards familiarity Replicating a successful project offers security by minimising
the uncertainty of venturing into something completely new Its like following
a wellworn path instead of forging one through the wildernessAccess to funding
Investors and funding institutions are more likely to back sectors with a proven
track record of projects A copycat project can point to the success of the original
and argue that it can replicate that success with its own twistBandwagon effect
When a project becomes popular others aim to capitalise on the hype and user base
On the surface launching a copycat project appears easier as does the marketing
that goes with itAvoiding mistakes A successful project has already learned what
works and what does not Emulation enables new projects to sidestep those pitfalls
and concentrate on innovationOriginality is crucial to tackling our social and
ecological problems It allows us to see things differently question assumptions
and uncover new possibilities This fresh perspective can lead to new solutions
we would not have considered otherwise It empowers us to challenge outdated or
dysfunctional systems and propose solutions that address their shortcomings from
the ground up Original thinking enables us to recognise the potential in underutilised
resources and address problems at their roots not simply treat their symptomsEasier
said than done of course We need to look to the Medici Effect for a roadmapThe
Medici Effect and Web3 impactThe Medici Effect states that the most original and
extraordinary ideas occur when you combine concepts from different fields disciplines
or cultures It was discovered by Frans Johansson and shared in a book of the same
name A majority of the ideas in this section are borrowed from the book and modified
to fit Web3 impactSome examples of great products that have stemmed from the Medici
Effect include Silvi reforestation Web3 M3tering Protocol renewable energy distribution
Web3 Gitcoin fundraising Web3 and Toucan Protocol voluntary carbon markets Web3Here's
how we can get better at using the Medici Effect to our advantageDismantle the
barrier between Web3 and other fieldsFirst we must be willing to break the associative
barriers that exist between Web3 and other fields How do we do this Through exposure
to a diverse set of cultures Culture in this context extends beyond geographic
boundaries to encompass ethnic class professional and organisational differences
By immersing ourselves in these diverse backgrounds we can unlock a more open
and questioning mindset and challenge our assumptionsWe also need to embrace broad
selfeducation Traditional education often compartmentalises knowledge but selfdirected
exploration across disciplines expands what is possible Without it our thinking
is limited and we inadvertently stifle creativity Selfeducation empowers us to
discover unexpected connections and envision how concepts in other fields can
combine to create groundbreaking Web3 impact solutionsCombine random conceptsIntersectional
ideas are groundbreaking because the concepts involved are so different and the
combinations so unusual that no one would have thought them possible Take the
inspiration for this article as an example It happened after I came across an
X account named Cozomo de Medici I was not thinking about Web3 impact or the Medici
Effect That is how random combinations work You have no control over it It just
comes Rather than leaving luck to do the work we can continuously stimulate our
brains to keep producing these random combinations How do we achieve thisNurturing
our curiosity By obsessing over new ideas approaches and perspectives we enhance
our brains ability to blend random concepts In the foreword of Exploring MycoFi
Scott Morris pointed to Emmett Jeffs relentless fascination with mushrooms and
mycelial networks and how it led to the creation of MycoFi a novel cryptoeconomic
model inspired by the structure of mycelial networksInteracting with diverse groups
of people Engage a diverse group of people and we will be presented with a diverse
set of perspectives Interacting with such groups can spark unexpected connections
For example a conversation between a Web3 enthusiast and a custodian of tradition
might lead to an idea to preserve cultural heritage using blockchainThese are
what create the conditions for the next phaseIgnite and evaluate an explosion
of ideasThe strongest predictor of the quality of ideas is in fact the quantity of ideas Do you know how many ideas you can get by simply combining
concepts from environmental science with Web3 concepts A LOT That is why it irks
me when I see so many projects centering on carbon credit tokenisation when there
are so many ideas yet to be explored The question here is once these ideas start
flooding in how do we handle themFirstly capture as many ideas as we can Keep
track of all the ideas our brains generate and set a target to reach before evaluating
their feasibility Secondly we need to take our time evaluating because our minds
will quickly judge the value of an idea by comparing it to what is already known
to work It's important that we evaluate each idea as sincerely as the next whether it's gamifying traditional classroom learning or eradicating open defecation in
underserved communitiesComing up with great ideas does not guarantee innovation
however We must make those ideas happenMake intersectional ideas happenThe paradox
of innovation at the intersection of fields is rooted in the symbiotic relationship
between ideas and failures the more ideas we explore the higher the rate of failure
Far from being a negative consequence failures are a vital part of the innovation
journey We learn and grow through them so that we can do better the next time
Not acting for fear of failure robs us of this crucial phaseDaring to try ideas
opens us to a world of invaluable insights that help us refine our approach identify
weaknesses and ultimately discover the path to a truly groundbreaking solution
In my interview with Christwin of Switch Electric and M3tering Protocol he shared
a fascinating story about the early days of his solution The first model they
tried unexpectedly incentivised the consumer to overload the solar infrastructure
leading to severe component wear and tear and attracting steep maintenance costs
This was a significant failure but it led to research that brought about the idea
of building M3tering Protocol on blockchain to track consumption and ensure transparencySince
failures are inevitable we can allocate specific resources for testing each new
idea and minimising the potential for catastrophic failure This allows for rapid
iteration where we can learn from each attempt refine our approach and move on
to the next experiment with valuable insights Do not forget to document everything
along the way and share the results within the team This knowledge base becomes
a valuable resource for future projects because it prevents the same mistakes
and accelerates future breakthroughs We saw this in the transition from Gitcoin
1.0 to Gitcoin 2.0We also need to stay motivated along the way because the path
to ingenuity is rarely linear There will be setbacks and failures that make us
question our actions Staying motivated requires keeping the longterm vision in
mind and focusing on the positive impact our work can bring to the world We need
to celebrate all wins big and small whether its successfully completing a pilot
project achieving a user engagement milestone or receiving positive feedback from
a target community This reinforces our belief and helps us push forward despite
the inevitable setbacksConclusionThe Web3 impact space has potential to change
the way we tackle the world's most pressing challenges We're not getting there however
if we keep recycling the same old ideas Originality is the key if we are to truly
unleash Web3 impact We must look beyond our own siloes break down barriers and
ignite curiosity across a variety of fields to spark the kind of breakthrough
ideas that will move the needleAnd when those ideas do come we must not shy away
from failure and its valuable lessons Nature exemplifies this beautifully a butterfly's
struggle to escape its chrysalis is anything but pleasant Still this phase is
necessary for the butterfly to strengthen its muscles expand its wings and ultimately
fly Embrace the journey celebrate even the smallest wins and surround yourself
with a supportive network Together we can embrace the power of the Medici Effect
to create a better future for ourselves and the planetThis article represents
the opinion of the authors and does not necessarily reflect the editorial stance
of CARBON Copy
- text: 5 DeSci projects disrupting scientific research and development — Crypto Altruism
Project HighlightsScienceDAOs
Mar 30 Written By Drew Simon 2021 was the year of decentralization and this momentum
has only increased into 2022 Not only have we seen incredible growth in the decentralized
finance DeFi space but we have also seen the emergence of social impact DAOs decentralized
media platforms decentralized VC funds and more recently the emergence of a new
field – Decentralized Science or DeSci In short “the decentralized science DeSci
movement aims to harness new technologies such as blockchain and ‘Web3’ to address
some important research pain points silos and bottlenecks” Whereas scientific
research has long been viewed as overly bureaucratic and disjointed the DeSci
movement aims to improve this by using blockchain to offer greater transparency
and to take on the “profit hungry intermediaries” such as scientific journals
that have dominated the traditional research spaceFor some resources on DeSci
I recommend you check out the following articlesDeSci an opportunity to decentralize
scientific research and publicationA Guide to DeSci the Latest Web3 MovementCall
to join the decentralized science movementFor this blog post we will be highlighting
5 DeSci projects that are leading the way and positively disrupting scientific
research and development1 VitaDAOOne of the best examples of DeSci in action is
VitaDAO a Decentralized Autonomous Organization DAO focused on funding longevity
research in “an open and democratic manner” Specifically they are focused on the
decentralization of drug development focused on the extension of human life and
healthspan They fund earlystage research with the goal of turning the research
into biotech companiesVitaDAO is governed by holders of VITA tokens which can
either be purchased or earned through contributions of work or intellectual property
With over 4000 members and 9M in funding raised to support scientific research
VitaDAO has proven that the DeSci movement is no laughing matterCheck out some
of their featured projects here2 SCINETThe SCINET platform which is built on blockchain
enables retail and institutional investors to securely invest in scientific research
and technology directly In addition to funding promising scientific research they
also offer a “blockchainpowered” cloud laboratory for researchers a rigorous decentralized
peer review process and enable researchers to document their IP on an immutable
blockchain3 AntidoteDAOAntidoteDAO is a decentralized community focused on funding
cancer research and other cancer initiatives Their ecosystem includes a governance
token and NFT collection which both enable individuals to vote on where to allocate
funds In addition to providing funding to charities supporting cancer research
and cancer patients a core focus of the DAO is on providing 100K seed fund grants
to cancer research teams Research projects are first reviewed by the DAO’s Medical
Advisory team and then put to the community for a vote Fun fact we have an upcoming
podcast episode with AntidoteDAO that when available will be published HERE 4 LabDAOLabDAO
is an emerging organization which is dedicated to operating a communityrun network
of wet and dry labs with the goal of advancing scientific research and development
A wet lab is one focused on analysing drugs chemicals and other biological matter
whereas a dry lab is one focused on applied or computational mathematical analysis
LabDAO is a relatively new project that is still in its infancy but has a promising
mission and strong community of support around it 5 MoleculeMolecule is a decentralized
scientific research funding platform that operates as a marketplace for researchers
seeking out funding and individuals looking to invest in scientific research projects
They are “connecting leading researchers to funding by turning intellectual property
and its development into a liquid and easily investable asset”Researchers can
list their research projects on the Molecule marketplace as a means to engage
with potential investors and to secure funding for their project Molecule currently
has over 250 research projects listed on their marketplace over 4500 DAO community
members and 3 “Bio DAOs” with over 10M in funding in their network According to
Molecule “The future of life science research will be driven by open liquid markets
for intellectual property powered by web3 technology”We cover more amazing DeSci
projects in our more recent postTen more DeSci projects disrupting scientific
research development and knowledge sharing
- text: A Short History of BiDirectional Links Seventy years ago we dreamed up links that would allow
us to create two-way contextual conversations Why don't we use them on the web
Design Digital Gardening The Web Web Development Planted almost 5 years ago Last
tended about 4 years ago Table of Contents Bidirectional links are not new Bidirectional Linking in Personal Digital Gardens Building Your Own BiDirectionals BiDirectional Linking with WebMentions With the recent rise of Roam Research the idea of bidirectional
linking is having a bit of a moment We are all very used to the monodirectional
link the World Wide Web is built around They act as oneway pointers we follow
in a linear sequence While we can link to any site the destination page has not
a clue we have done so We set up all these singledirection paths trying to signal
relevance and context only to have the other side completely ignore our efforts
Our monolinks are trying to establish relationships in vain We are starting to
look around our monolinked environment and wonder why it is so hard to surface
relevant contextual relationships Manually interlinking content takes an awful
lot of human curation and effort Efforts we should probably slough off onto our
systems Enter the bidirectional link A bidirectional link has social awareness
it knows about other pages or ‘nodes’ that point to it and can make them visible
to people This means we get a twoway conversation flowing between our web locations
Bidirectional links are not new The idea of the bidirectional link goes back to
1945 when Vannevar Bush dreamed up the Memex machine Vannevar outlined this
hypothetical gadget in an essay in The Atlantic called As We May Think He wanted
a system capable of “associative indexing… whereby any item may be caused at will
to select immediately and automatically another… so that numerous items have been
thus joined together to form a trail” This essay turned out to be a foundational
document for the ideologies that directly led to both the internet and the Web
Yes those are two entirely separate pieces of technology Vannevar was one of the
key movers and shakers rallying folks to help build the original internet infrastructure
He corralled folks at MIT the US Department of Defense the National Science Foundation and various research labs like the Stanford Lincoln Lab Bell Labs the RAND Corporation and Xerox PARC to get involved Walter Isaacson The Innovators How a Group of Hackers Geniuses and Geeks Created the Digital Revolution London Simon Schuster 2015
Suffice to say the guy was driven by a belief that enabling people to connect
information and share knowledge would expand the scope of human understanding
The Memex was one idea of how that might manifest in material form Vannevar even
created a small informative diagram of this deskbound vision Marketing chops 101
Vannevar’s evocative description of the Memex is especially impressive given that
digital computers had only come into existence 5 years earlier Most were still
the domain of large military operations like Bletchley Park and were seen as inconveniently
large calculators Implementing a wildly interactive computational personal knowledge
base was not much of an option So the idea went into hibernation and did not resurface
until the idea of personal computing began blooming in the sixties and seventies
Ted Nelson an unlikely film director and sociologist stumbled into a series of
computing lectures and began to imagine how graphical interfaces might reinvent
the way we write and connect ideas He took inspiration directly from Vannevar’s
essay and in 1965 coined the term hypertext to describe his vision
for a sprawling network of interlinking information Nelson planned to implement
these hypertextual dreams in his perpetuallyimminent Project Xanadu If you have
some time this is quite the internet history rabbit hole to run down Ted Nelson
is on another level The Xanadu project was a hypertext system that imagined that
every sentence block and page would be part of a vast bidirectionally linked network
A design mockup of how Project Xanadu might visually connect pieces of text across
multiple documents You would be able to trace information back to its origin the
way current web links do But you would also be able to see who had referenced
remixed and expanded off that original The full Pattern Language of Project Xanadu
expands far beyond just bidirectional links to include features like Transclusions
but we will not dive into it all here Suffice it to say Xanadu did not pan out
Instead we got the less fancy but far more real and useable World Wide Web that
currently does not support bidirectionals on an infrastructure level While Sir
Tim Berner’s Lee wrote himself a note debating their pros and cons back in 199926ya
there is an obvious design issue with letting twoway connections flow freely around
the web If every site that linked to yours was visible on your page and you had
no control over who could and could not link to you it is not hard to imagine
the Trollish implications… Figuring out how we might filter moderate and set permissions
around link visibility turned into quite the challenge The design details grew
complex It became clear implementing the Web with simpler monodirectional links
was the right thing to do given that its creators wanted universal adoption Lots
of people are still mad about it Let us not venture too far down that historical
wormhole The TLDR is technology is hard Until Xanadu ships and we are all immersed
in the universe of multilinked versioncontrolled nodes of remixable microcontent
that somehow solves the problems of permissions and moderation there are still
plenty of ways we can resurrect the possibility of bidirectional links on the
web Bidirection Linking in Personal Digital Gardens Most of the design issues
with adding bidirectional links to the global web were related to moderation and
permissions However adding them within the bounds of a single website with one
author sidesteps that problem There has been a flurry of interest around bidirectionals
among people involved in the Digital Gardening movement Much of this was originally
sparked by Andy Matuschak’s notes Go have a good browse through them Andys linked
notes stack on top of one another allowing you to browse to new notes while previous
notes are still visible There is plenty to admire here It should be noted Andy
is an experienced developer and interaction designer and these notes should not
be taken as the standard expectation for the rest of us normal plebby internet
citizens But the key part of this system that creates interlinked context is the
“Links to this Note” section at the bottom of each post Anytime Andy links to
another one of their notes on the site it will pop up as a related note at the
bottom of the page This is the bidirectional dream It gives us a way to navigate
through these ideas in exploratory mode rather than navigating a hierarchy of
categories on a main index page Since it is all contained within a singleauthor
site our Spammish-Troll-risk factor is at a comfortable zero This is mildly tangential
but I love how the topic of bidirectional links makes fully visible our “websites
are locations” and “websites are containers” conceptual metaphors with “inside”
and “outside” links Building Your Own BiDirectionals That is all very cool but
how are you supposed to build bidirectionals into your own site Thankfully setting
up your own public gardening bidirectional Memex does not involve Xanadu One fantastic
option for nondevelopers is based around a personal wiki system called TiddlyWiki
AnneLaure Le Cunff wrote up an easytofollow guide to getting your own up and running
For those of us here for the hypercustomised overengineered JavaScript solution
that would be me the Gatsbyjs community has a number of active gardening enthusiasts
building themes and plugins I built mine using Aengus McMillin’s gatsbythemebrain
Aengus has documented the theme well and it is not too challenging to implement
as long as you are comfortable in JavaScript and React I also curate a list of
tools for Digital Gardening on this Github repo BiDirectional Linking with WebMentions
While I argued that Webwide bidirectional links are unlikely to happen at a global
scale there is a way you can add bidirectionals to your personal website that
picks up on references anywhere on the web WebMentions are a piece of web infrastructure
the IndieWeb community has done a lot of work to advocate for The W3C gave the
specification recommendation status in 2017 The system notifies a URL whenever
that site is mentioned elsewhere on the web You are then able to show that mention
and its contents on your site It is essentially an optin bidirectional linking
system Plenty of folks have written useful guides on how to add these to your
site Here is one for any static site one for Gatsby one for Nextjs There is a
whole list of implementation examples on the IndieWeb Wiki you can look through
5 Backlinks A Brief History Ethos of the Digital Garden A newly revived philosophy
for publishing personal knowledge on the web Digital Gardening for NonTechnical
Folks How to build a digital garden without touching code Transclusion and Transcopyright
Dreams The lost permissioning and copyright system of the Web The Pattern Language
of Project Xanadu Project Xanadu as a pattern language rather than a failed software
project A MetaTour of This Site A video tour through how I built the old version
of this site
- text: Part 1 Introduction to Responsible Technology by goodbot Medium Aug 3 2023 IntroductionIn late 2022 the GoodBot team — consisting
of a group of sociallyminded professionals working in law technology and policy
— came together to develop a snapshot of Canada’s current Responsible Technology
landscape This is a space that to date had been heavily defined by voices from
the United States Our goal is to understand the Canadian ecosystem’s current composition
capacity and direction particularly how technology impacts Canadians what issues
are of focus in research and programming and how the policy landscape is evolving
at the provincial national and international levelsTechnology is revolutionizing
everything from healthcare to education to law to climate science and government
but is also associated with a wide range of risks and harms making responsible
technology governance a critical priority for governments nonprofits and marketsNew
visions clear policy frameworks effective implementation methods and multistakeholder
oversight bodies are needed to navigate this landscape It also requires public
interestfocused strategic collaboration that includes: longterm research on harms; more transparency and collaborative efforts with technology companies to strengthen safety; effective mechanisms to hold companies accountable when they fail to act in response to harm; investment in public interest technology and philosophies that prioritize healthy technology ecosystems; understanding the leverage points and incentive systems that contribute to these outcomes; and the development of the critical capacities needed to meet this momentWe know from the last two decades
that when technology tools and platforms become deeply embedded in institutions
like media and education before they are regulated they become harder to govern
This reality has led to escalating and lasting harm in a number of areas The pressing
question now is how to forge a new direction recognizing the immediate need to
take actionThis is the introduction to a research project on Canada’s Responsible
Technology landscape GoodBot’s goal with this research is threefoldTo understand
the current landscape and priorities of Canada’s Responsible Technology ecosystem
including the who what where and howTo highlight gaps and opportunities within
this ecosystem that can help Canada develop a more robust impactful and collaborative
approach and agenda andTo understand what role GoodBot can play in advancing Responsible
Technology at home and around the worldWhat is Responsible TechnologyResponsible
Technology acts as an umbrella for a range of approaches and terms that all focus
on different issues or specific intervention points in technology and business
life cycles It includes concepts such as ethical tech humane tech tech stewardship
and public interest tech all of which are connected and overlap but which also
center on different locus of influence These and other terms will be explored
in Part 1 of our seriesResponsible Technology is a relatively new framing that
includes a wide range of issues related to technology Some issues — like privacy
and freedom of expression — are longstanding cornerstones of the most established
technology nonprofits in the country while conversations on Generative AI risks
are relatively new Yet technology and AI ethics have been around for decades with
themes prominent among academic labs human rights defenders peacebuilders and
in other convening spaces What has changed is the scale and pace of technology
and the amplification of new risks and narratives surrounding technology harms
These have created a growing awareness of the need for safetyfocused design and
research at the outset meaningful oversight of technology companies and effective
accountability mechanisms when companies fail to act in the public interestMany
technology tools and businesses currently fail to meet even a vague understanding
of Responsible Technology This is especially true for small to medium technology
companies who focus on survival and leave proactive assessment and harm mitigation
as an afterthought if they get any attention at all Others have social impact
strategies that are completely detached from their business modelFew companies
start with the goal of causing harm but unintended consequences can arise due
to a lack of intentional consideration capacity and unanticipated and conflicting
priorities Additionally as products scale seemingly harmless matters can lead
to real harm as user bases grow use cases expand and incentive structures change
The explosive emergence of generative AI has made it even more clear that left
unaddressed these structural factors risk widening the gap between privatized
profit and socialized riskIn spite of these growing concerns several Big Tech
companies have chosen to lay off large segments of their Trust and Safety teams
seeing them as cost centers that add undesirable operational complexity Even when
companies have Trust and Safety teams in place they are often pitted against product
teams The result is companies that increasingly seek to automate these decisions
which frequently and disproportionately impact minority communities including
human rights defendersSome companies have begun sharing Transparency Reports which
is a move in the right direction However there are no agreed standards or metrics
against which to assess companies’ commitment to social sustainability nor is
there any external oversight These factors lead to the possibility of ‘tech washing’
and cherrypicking data that provide the appearance of taking action but in ways
that lack substantive effect or cause new harmMoreover even companies that have
the desire to act responsibly can lose sight of their original goals when they
face demands for outsized returns from investors — including venture capitalists
and private equity investors — which can place them at odds with decisions that
are in the public interestIn this context a wide range of social harms and externalities
have arisen — many of which are unintentional — and which include: Biased Unaccountable Untransparent Automation: bias in untransparent algorithms that discriminate against marginalized groups; disruption of the workforce by generative AI in almost all professional sectors. Big Tech Domination: Big Tech market domination to control value chains through predatory pricing terms and acquisitions; nonconsensual selling of personal data to and from thirdparty data brokers. Addiction and Mental Health: the use of dark patterns to drive engagement and addiction in gaming and social media; the decline of attention spans at a population level in the last 20 years; the decline in mental health and body image especially among youth leading to self harm and death. Harassment Violence Extremism: incitement to radicalization extremism and even genocide; trolling doxing and harassment including targeting women trans and BIPOC people. Bad Actors: targeted and opportunistic disinformation and microtargeting to undermine democracies; scams to steal millions from people through generative AI or with crypto hacks; trafficking of women and girls on the dark web and in mainstream platformsThis is by no means a comprehensive list In response Responsible Technology
advocates have advanced efforts in recent years to understand the wide array of
externalities impacting different levels of society These initiatives variously
aim to understand harms and causes increase public awareness and engagement incentivize
governments to enact new laws and enforce existing ones and create solutions that
lead to safer and more responsible technologyenabled environmentsAdditionally
a new wave of government and nonprofit investigations and litigation aims to clarify
technology companies’ responsibilities and identify leverage points to incentivize
responsibility These efforts have had some successes but are often too incremental
and underresourced to keep paceThe scale and complexity of issues arising from
technology are unprecedented Canada needs a clear Responsible Technology agenda
and sufficient investment to move toward a technological future defined by healthy
people businesses markets societies and democraciesThe Asymmetry Sustainability
GapA key barrier facing Responsible Technology advocates — including journalists
academics tech nonprofits technologists tech ethics experts policymakers and citizens
— in addressing technology harms is an existing and growing set of asymmetrical
disadvantages when compared to the companies and sectors responsible for harmThese
disadvantages manifest in many forms including limited access to talent data sets
algorithms infrastructure information internal research and audits knowledge and
resources It shows up in access to capital restrictions on how capital can be
used and comparatively robust ethical requirements It is further exacerbated by
companies’ anticompetition tactics to buy out underprice feature bundle and otherwise
aggressively quash any disruptors who may be able to offer healthier alternativesAdditionally
asymmetries show up when comparing the inputs and outcomes around harm It is —
for example — much less resource intensive to create campaigns to disseminate
disinformation on vaccines — than it is to undo the damage caused by this disinformation
This reality places a societal premium on considering what effective oversight
governance and accountability of technology look like but also raises the need
to balance corrective actions with Freedom of Expression FoE norms The global
nature of many technology platforms means that US norms are effectively imposed
on Canada and other countries Yet Canada has its own unique and wellestablished
interpretations of fundamental freedoms that should be considered and protected
in the face of technological changeCanada’s Responsible Technology ecosystem is
small and underresourced compared to its US counterparts and at this time there
are few prominent ecosystemlevel organizations that are wellpositioned to guide
a Responsible Technology community and strategy Yet to effectively influence outcomes
Responsible Technology advocates need to work together including by establishing
new governance innovationsA critical factor in the Canadian context is that much
of the funding to date has focused on understanding symptoms and immediate causes
rather than underlying structural issues and incentive systems at play Some of
this is a product of new and emerging organizations with limited track records
while other issues arise from inadequate and restrictive funding and opportunities
Some organizations only work on technology matters at a project level rather than
a mission level Such factors are important to understanding the limitations on
Canada’s capacity and sustainability Our sixth report will explore Canada’s nonprofit
capacity in more depthAn additional barrier is that there are no obvious market
solutions to address many of the problems that arise from technology For example
while technology has led to unprecedented online harassment of women people of
color and LGBTQ communities companies offer limited solutions aimed at limiting
harassmentThis reality is exacerbated by the fact that markets recently rewarded
technology companies for cuts that significantly depleted their Trust Safety teams
even when it occurred at a time of high salience around the risks posed by technology
platformsLack of coordination is also a factor among tech companies For example
some companies argue that if they do not employ harmful tactics such as polarizing
content or algorithms to attract attention they will lose out to rivals who will
These realities point to an increasing need for sectors to collaborate toward
reducing harm and promoting public interestIndeed in areas where companies have
invested more material resources — including to address extremism and protect
children — even Big Tech lacks the bandwidth to address the complex range of issues
on its own making collaboration essential Within the tech sector new multistakeholder
initiatives such as Global Internet Forum to Counter Terrorism and the Tech Coalition
have been launched in an effort by technology platforms — and include human rights
advocates governments and researchers — who work collectively to reduce extremism
and child sexual abuse material online respectively Such issues are even more
challenging for smaller platforms and startups that lack internal resources to
respond to emerging and unanticipated issuesUltimately while technology companies
contribute many benefits to society they also contribute significant problems
that neither society the markets nor they are presently able to solve They often
lack incentives to prevent and address problems up front and even when they want
to do the right thing investor incentives can derail their decisions There are
limited market incentives to address challenges created by businesses and market
competition Moreover our governance institutions operate in ways that are incompatible
with the fastmoving pace of technology which presently tend to focus on harm to
individuals instead of on harm to society These factors together mean that we
are in a situation that is by definition unsustainable We need collective action
to address these risks including through responsible resources capacities frameworks
budgets and policiesCanada’s Policy LandscapeWhile several technologyfocused bills
are in development Canada’s current policy landscape is far behind the advances
of the last 20 years This is also true of many other countries that currently
lack the policies capacities institutions and enforcement mechanisms needed to
govern a rapidly evolving technology environment Furthermore research shows that
even where effective policies are in place underresourced enforcement mechanisms
such as antitrust hamper the ability of governments to enforce existing laws In
this context the role of civil society organizations is particularly importantWhile
the policy landscape is expected to evolve rapidly we do not know what effect
these policies will have or how coherent they will be across jurisdictions and
issues Canada has introduced many bills for consideration but few have passed
and all leave much to be desired In the intervening time legal firms and the Privacy
Commissioner have advanced litigation — with a particular focus on privacy and
antitrust — to hold technology companies accountableCivil society organizations
often lack consensus on addressing key issues For instance Online Harms advocates
support stricter content moderation to tackle harassment extremism and child exploitation
In Canada some lean towards Freedom of Expression while others recognize the need
to address such harms but fear that wellintentioned policies could inadvertently
suppress the voices of vulnerable communities Balancing these competing rights
is complex Responding to emerging issues and harms that affect us all requires
better public engagement This includes open and inclusive dialogue transparent
consultation processes and effective accountability mechanisms to navigate these
complexities and to help uphold and balance a range of fundamental Canadian rightsIndeed
there are no ‘right’ answers but rather ‘different tradeoffs’ Moreover there are
ways to escape such polarity traps which can easily become politicized resulting
in deadlock Even among nonprofit organizations such as the ones that are seemingly
at odds on online harms for example both largely agree that passing comprehensive
privacy frameworks would mark an important victory and achievementYet even if
we achieve effective regulation and enforcement addressing entrenched asymmetries
— especially those caused by Big Tech — requires a collective agenda and roadmap
What is clear is that our current institutions lack the capacity and resilience
needed to address the challenge we faceAs Canadians we have an opportunity to
draw upon the values that make us strong as we reimagine our relationship with
technology Ideas such as Indigenous approaches to data sovereignty collaboration
and multiculturalism have much to teach us about how to navigate these complex
issues This moment presents an opportunity to rethink and localize the who how
and why of technology governance This is both a daunting and an exciting challengeOpenSourcing
GoodBot’s ResearchFor GoodBot our first step is to practice our open principles
by sharing what we have learned We hope that this research can lay the groundwork
for building a Canadian coalition to support its nascent and necessary Responsible
Technology movementGetting to impact requires understanding Canada’s existing
capacity understanding the systemic issues at play exploring moral and policy
considerations surfacing current and emerging asymmetries of power and exploring
how AI is upending companies and industries It also requires a collective strategy
and targeted action focused on moving toward responsibilityThese documents are
intended to act as a primer for anyone seeking to make an impact in addressing
critical priorities facing Canada and the world While our early research may initially
be of more value to nonprofits and academics and should be considered a WorkinProgress
we aspire to a future where solutionsoriented multistakeholder collaboration is
a new norm Our research will be broken into two partsPart 1 is a Canadian Responsible
Technology Landscape that explores common terms civil society stakeholders current
and emerging policies and litigation and how asymmetries of power manifestPart
2 reviews the results of a survey conducted key observations on the current and
emerging landscape and critical reflections on how to strengthen interdisciplinary
collaboration among nonprofit organizations academia the tech sector and governmentIn
an ideal world this work will lead to highlevel consultations and strategies developed
in collaboration with other ecosystem organizations motivated to move this conversation
forwardCanada as a Global Leader in Responsible TechDespite and perhaps because
of the wide array of challenges Canada has an opportunity to become a global leader
in developing deploying and governing technology in socially sustainable ways
Getting there requires an urgent focus on strengthening national capabilities
by investing in strategic and systemsfocused multistakeholder mechanismsIndeed
organized effectively Canadian civil society represents critical and untapped
assets to help meet this moment There is also a need to strengthen citizen education
advance responsible policy and oversight create technical solutions to advance
the public interest introduce responsible technology certifiers and respond to
systemic factors that lead to harmful outcomes Canadians can no longer afford
to wait The time to engage is nowVersion 1.0 July 2023 Written by Renee Black
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: false
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.5517241379310345
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A MultiOutputClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
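Conceptually, these two stages look roughly like the sketch below. This is an illustrative reconstruction, not the trainer's actual internals: the pair construction, toy texts, and two-label setup are assumptions for demonstration only.

```python
# Conceptual two-stage sketch of SetFit training (illustrative, not the real internals).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

body = SentenceTransformer("sentence-transformers/paraphrase-mpnet-base-v2")

# Stage 1: contrastive fine-tuning on text pairs (1.0 = same class, 0.0 = different).
pairs = [
    InputExample(texts=["a DeSci grant DAO", "a science funding DAO"], label=1.0),
    InputExample(texts=["a DeSci grant DAO", "an essay on note-taking"], label=0.0),
]
loader = DataLoader(pairs, shuffle=True, batch_size=2)
body.fit(train_objectives=[(loader, losses.CosineSimilarityLoss(body))], epochs=1)

# Stage 2: fit a multi-output head on embeddings from the fine-tuned body.
X = body.encode(["a DeSci grant DAO", "an essay on note-taking"])
y = [[1, 0], [0, 1]]  # toy multi-hot labels
head = MultiOutputClassifier(LogisticRegression()).fit(X, y)
```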
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a MultiOutputClassifier instance
- **Maximum Sequence Length:** 512 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.5517 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("praisethefool/human_tech-fields-multilabelclassifier")
# Run inference
preds = model("5 DeSci projects disrupting scientific research and development — Crypto Altruism ...")
```
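Because the head is a MultiOutputClassifier, `preds` is a multi-hot vector (one 0/1 flag per label) rather than a single class id. A minimal sketch of decoding it follows; the label names below are placeholders, since this card does not list the actual classes:

```python
# Hypothetical label names -- the real class list is not documented on this card.
labels = ["desci", "refi", "digital-gardening", "responsible-tech"]

preds = model(["A Short History of BiDirectional Links ..."])
for row in preds:
    # Keep every label whose flag is set.
    active = [name for name, flag in zip(labels, row) if flag]
    print(active)
```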
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:----------|:------|
| Word count | 20 | 2568.9241 | 13352 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- evaluation_strategy: steps
- eval_max_steps: -1
- load_best_model_at_end: True
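A minimal sketch of how these values map onto SetFit's `TrainingArguments` (the base model and toy dataset below are placeholders, not the actual training setup, and only the core arguments are shown):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Placeholder base model and toy dataset; the actual ones are not listed in this card.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
train_dataset = Dataset.from_dict({"text": ["great product", "terrible product"], "label": [1, 0]})

args = TrainingArguments(
    batch_size=(8, 8),                  # (embedding phase, classifier phase)
    num_epochs=(1, 1),
    sampling_strategy="oversampling",
    body_learning_rate=(2e-05, 1e-05),  # (transformer body, optional differential rate)
    head_learning_rate=0.01,
    end_to_end=False,
    use_amp=True,
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)

trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```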
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0017 | 1 | 0.2236 | - |
| 0.0694 | 40 | - | 0.1379 |
| 0.0868 | 50 | 0.1722 | - |
| 0.1389 | 80 | - | 0.1440 |
| 0.1736 | 100 | 0.0536 | - |
| 0.2083 | 120 | - | 0.1412 |
| 0.2604 | 150 | 0.0293 | - |
| 0.2778 | 160 | - | 0.1343 |
| 0.3472 | 200 | 0.0234 | 0.1406 |
| 0.4167 | 240 | - | 0.1266 |
| 0.4340 | 250 | 0.0176 | - |
| 0.4861 | 280 | - | 0.1118 |
| 0.5208 | 300 | 0.0193 | - |
| 0.5556 | 320 | - | 0.1095 |
| 0.6076 | 350 | 0.0162 | - |
| 0.625 | 360 | - | 0.0926 |
| 0.6944 | 400 | 0.0223 | 0.0995 |
| 0.7639 | 440 | - | 0.0923 |
| 0.7812 | 450 | 0.018 | - |
| 0.8333 | 480 | - | 0.0814 |
| 0.8681 | 500 | 0.0045 | - |
| 0.9028 | 520 | - | 0.0801 |
| 0.9549 | 550 | 0.0074 | - |
| 0.9722 | 560 | - | 0.0794 |
### Framework Versions
- Python: 3.11.12
- SetFit: 1.1.2
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/Falcon2-5.5B-Polish-GGUF | mradermacher | "2025-05-06T21:00:22Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"lazymergekit",
"tiiuae/falcon-11B",
"pl",
"base_model:ssmits/Falcon2-5.5B-Polish",
"base_model:quantized:ssmits/Falcon2-5.5B-Polish",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-06T18:34:10Z" | ---
base_model: ssmits/Falcon2-5.5B-Polish
language:
- pl
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- lazymergekit
- tiiuae/falcon-11B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ssmits/Falcon2-5.5B-Polish
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
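As one possibility, here is a minimal sketch with `llama-cpp-python`, assuming you have already downloaded one of the quant files from the table below to the local path shown (the prompt is illustrative):
```python
from llama_cpp import Llama

# Load a locally downloaded quant; the filename matches the Q4_K_M entry below.
llm = Llama(model_path="Falcon2-5.5B-Polish.Q4_K_M.gguf", n_ctx=2048)

out = llm("Napisz jedno zdanie o Warszawie:", max_tokens=64)
print(out["choices"][0]["text"])
```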
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.Q2_K.gguf) | Q2_K | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.Q3_K_S.gguf) | Q3_K_S | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.Q3_K_M.gguf) | Q3_K_M | 2.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.Q3_K_L.gguf) | Q3_K_L | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.IQ4_XS.gguf) | IQ4_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.Q4_K_S.gguf) | Q4_K_S | 3.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.Q4_K_M.gguf) | Q4_K_M | 3.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.Q5_K_S.gguf) | Q5_K_S | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.Q5_K_M.gguf) | Q5_K_M | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.Q6_K.gguf) | Q6_K | 4.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.Q8_0.gguf) | Q8_0 | 5.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Falcon2-5.5B-Polish-GGUF/resolve/main/Falcon2-5.5B-Polish.f16.gguf) | f16 | 11.0 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_pfeiffer | buelfhood | "2025-05-06T20:56:43Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"region:us"
] | null | "2025-05-06T20:56:41Z" | ---
tags:
- adapter-transformers
- roberta
---
# Adapter `buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_pfeiffer` for microsoft/graphcodebert-base
An [adapter](https://adapterhub.ml) for the `microsoft/graphcodebert-base` model that was trained on an unspecified dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/graphcodebert-base")
adapter_name = model.load_adapter("buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_pfeiffer", set_active=True)
```
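A short follow-up sketch of running inference with the activated adapter and its classification head (the input snippet is illustrative):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/graphcodebert-base")
inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)           # forward pass through the adapter's prediction head

print(outputs.logits.argmax(dim=-1))    # predicted class index
```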
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
Evados/DiffSynth-Studio-Lora-Wan2.1-ComfyUI | Evados | "2025-05-06T20:56:08Z" | 0 | 58 | null | [
"base_model:Wan-AI/Wan2.1-T2V-1.3B",
"base_model:finetune:Wan-AI/Wan2.1-T2V-1.3B",
"license:apache-2.0",
"region:us"
] | null | "2025-03-28T19:44:38Z" | ---
license: apache-2.0
base_model:
- Wan-AI/Wan2.1-T2V-1.3B
---
NEW:
New video for the Wan 2.1 I2V v1.3b Fun Inp:
New video: https://youtu.be/QZfqqMpai9Y
https://www.youtube.com/watch?v=bXUYkfybOCE
I’ve added new models for Wan 2.1 I2V v1.3b Fun Inp.
These models don’t use exactly the same method as my DG T2V Models, and the results are not identical.
In this version, the effect is reduced, but it attempts to fix certain parts of the video.
It’s harder to implement the method I use with the other models because the I2V system and its method are constantly trying to match the start and end images.
It’s similar to using a LoRA to change faces in a video—you can see the model struggling against the LoRA to preserve the original start and end frames.
Often, the LoRA fails to work correctly on the first or last frame.
Keep in mind that all of this is still experimental.
I’ve noticed that sometimes the tea cache conflicts with my boosted model.
This happens because the effects are applied before the tea cache.
I’m not sure how to fix it yet—maybe it needs a different tea cache value to work properly.
If you notice strange noise in the video, try disabling the tea cache or experimenting with different tea cache values until you find one that works.
To install my custom node, download the ZIP file and extract it.
Make sure that there is only one folder named ComfyUI-DG-Wan2_1-OX3D; sometimes unzipping creates two nested folders with the same name, so double-check that.
Place this folder inside ComfyUI/custom_nodes.
Next, open a Windows command prompt (preferably as administrator), and run the following command:
X:\YourComfyUiInstallDir\python_embeded\python -m pip install -r X:\YourComfyUiInstallDir\ComfyUI\custom_nodes\ComfyUI-DG-Wan2_1-OX3D\requirements.txt
After that, restart ComfyUI and download my workflow. Everything should work fine.
My node allows you to use one or two images with the Fun Inp model, and it also adds three transition modes between the start and end images.
Additionally, it supports using one or two prompts to mix the two images.
NEW MODELS:
That’s a lot of models, but it gives plenty of choice.
I tried to fix a few mistakes I had made in the earlier versions of my models.
One of the main issues was that in the previous versions, video motion was significantly reduced.
Overall, the results were quite good, but with less motion.
When trying to maintain more consistency, it ended up reducing motion.
In my new models, I tried using a different technique that shouldn't reduce motion.
I also tried, in certain versions like the "stock" one, to preserve as much of the original model’s rendering style as possible.
Of course, the results will vary due to the boost, but the stock version should be the closest to the original.
That said, for character quality or certain details—especially interiors—the other models often produce much better results.
The models are configured for 5 or 6 steps by default, but you can definitely go higher.
If the video output becomes noisy or strange and you're using Tea Cache, try disabling it.
With certain step counts, it can conflict with Tea Cache’s default values.
All of my boosted models are compatible with the Diffusion Pipe training tool.
If you want to install Diffusion Pipe, you need WSL, because some optimizations are not 100% compatible with Windows,
and the training script uses it by default, but it is very fast too.
For training simple faces, for example with around 20-30 images, you only need 6 to 12 epochs, and it takes around 20-30 minutes on a 4070 Ti Super.
https://github.com/tdrussell/diffusion-pipe
So you can use them to train your own LoRAs.
It’s a lot of models, and this should be my final version.
I don’t think I’ll be making more versions of this model.
At some point, I might remove a few and keep only the best ones.
Unless, of course, a better video model comes out someday—but for now, I really love this one.
There’s a lot of fun to be had with it.
NEW TRAINING LORAs:
I added a special LoRA called dg_wan2_1_v1_3b_lora_extra_noise_detail_motion.safetensors that I created.
The LoRA was trained on over 10,000 images, but it's not trained to reproduce them graphically; it's trained to replicate their initial noise patterns instead.
This LoRA is useful for adding a bit more realism and detail, and it also introduces motion. It should be used with relatively low strength.
Between 0.01 and 0.35, it works very well with the T2V model.
I haven’t had time to test it with other models yet.
I’ve converted Wan2.1-Fun-1.3B-InP-HPS2.1_lora.safetensors & Wan2.1-Fun-1.3B-InP-MPS_lora_new.safetensors to make them compatible with the Fun model and ComfyUI.
I'm working on a Fun model version using my boost method, but with this model, the effect isn’t exactly the same as the T2V model.
The boost does help, but the impact is noticeably less compared to the T2V model.
I still need to run more tests with my Fun Boost model, but I should be able to upload it soon.
OLDER:
I have added some experimental versions of the model Wan 2.1 v1.3b. These are different levels of distilled + hires + refined editions.
Normally, the medium models should be the best, but this is experimental, and I haven't had time to test every situation.
However, the first results look promising.
You can generate videos with 4, 5, 6, 8, 10, or more steps.
This is my first version, and if I notice any issues, I will try to fix them later.
From my tests, it works well, but as I mentioned before, I haven't tested every possible situation.
You can find a workflow for ComfyUI to test the models if you're interested.
IMPORTANT:
If you use a higher step count, try using another sampler like Euler, and try a different scheduler too.
You may also need to increase the CFG with a higher step count.
Generally, the animation is better with more steps, but it also takes more time.
With a lower step count, the animation can be a bit more random.
If the video colors are too intense, try reducing the CFG or using a lower version of the model.
With, for example, 20 steps or more, it is better to use the lower model version and adjust the CFG to correct the color.
Remember, these models are modified and do not behave like the original ones.
If anyone wants to add sounds or voices, come back a bit later; I will provide the workflows to do so.
Video Exemple:
https://youtu.be/kfokkXEGByU
Notice: I see that this works pretty well up close, but there are some blur issues with the background.
I'll try to fix that by the end of the week, so I'll definitely need to update some models—or maybe reduce everything to just three or four models.
This is an experimental test version, and it provides a big boost, but after some testing, I noticed that the background is blurrier than the original, and I think I know why.
However, to fix it, I'll have to redo the models. Sorry for the inconvenience!
I hope people can still have fun with this test version in the meantime.
This weekend, I’m going to create some workflows to use the original model and achieve good quality.
I’ll also add workflows for sound and voices.
I will likely fix and update the models as well.
So if there’s a model you like among those I’ve uploaded, make sure to save it, as I’ll be deleting them over the weekend to update them with new versions along with the other workflows.
I’ll have a bit more time to fix and test everything further to refine it as much as possible.
Until then, have fun with these experimental versions!
---
LoRAs for Wan-AI/Wan2.1-T2V-1.3B
The LoRA comes from the DiffSynth-Studio team.
https://github.com/modelscope/DiffSynth-Studio
I have only converted the LoRAs to make them compatible with ComfyUI.
The original LoRA model comes from DiffSynth-Studio on ModelScope.
https://modelscope.cn/organization/DiffSynth-Studio
----------------
Wan2.1-1.3b-lora-aesthetics-v1:
Model Overview
This LoRA model is trained based on the Wan2.1-1.3B model using the DiffSynth-Studio framework. It has been finely tuned on an aesthetics-focused dataset, enhancing the visual appeal of generated videos. Additionally, the classifier-free guidance can be disabled to speed up the process.
Recommended Settings
cfg_scale = 1
sigma_shift = 10
Note
Using this model may reduce the diversity of generated videos. It is recommended to adjust the lora_alpha value to fine-tune the LoRA’s impact on the final output.
----------------
Wan2.1-1.3b-lora-speedcontrol-v1:
Model Overview
This LoRA model is based on the Wan2.1-1.3B model and has been trained using the DiffSynth-Studio framework.
It allows control over the speed of generated videos by adjusting the LoRA alpha parameter.
LoRA alpha > 0: Use the low speed trigger → Slower speed, improved image quality.
LoRA alpha < 0: Use the high speed trigger → Faster speed, reduced image quality.
Currently, the effects of this model are not yet fully stable, and optimizations are still in progress.
Model Results
Prompt Used:
"A documentary-style photograph: a lively little white dog runs swiftly across a lush green lawn. Its fur is bright white, its two ears stand upright, and it has a focused yet joyful expression. Sunlight illuminates its coat, making it look particularly soft and shiny. In the background, a vast meadow scattered with a few wildflowers stretches toward the horizon, where a blue sky with a few white clouds can be seen. The perspective is dynamic, capturing the dog's movement and the surrounding energy of the grass. Side view, medium shot, moving camera."
Negative Prompt:
Overly bright colors, overexposure, static, blurry details, subtitles, artistic style, painting, still image, grayish tint, very poor quality, low quality, JPEG compression artifacts, ugly, deformed, extra fingers, poorly drawn hands, distorted faces, disfigured, malformed limbs, fused fingers, static scene, cluttered background, three legs, crowd in the background, characters walking upside down.
Effect of LoRA Alpha Parameter:
LoRA alpha = 0.7 → Slower speed, better visual quality.
LoRA alpha = 0 → Normal speed, neutral effect.
LoRA alpha = -0.5 → Faster speed, reduced visual quality.
----------------
Wan2.1-1.3b-lora-highresfix-v1:
Model Overview
This LoRA model is trained based on the Wan2.1-1.3B model using the DiffSynth-Studio framework. Since the base model was originally trained at 480p resolution, it has certain limitations in sharpness. To address this, additional training was conducted to enhance the quality of high-resolution videos, preventing image collapse or a dull appearance.
Recommended Usage
Directly generate short high-resolution videos:
Set the resolution to 1024 × 1024 while slightly reducing the number of frames to avoid excessively long generation times.
Refine details in a high-resolution video:
First, generate a low-resolution video.
Apply upscaling to increase resolution.
Finally, use this model to enhance visual details.
Model Effects
Anime / 2D Style
Prompt: Anime style, an adorable 2D-style girl with short black hair flowing in the wind, gently turning her head.
Negative Prompt: Overly bright colors, overexposure, static, blurry details, subtitles, artistic style, painting, still image, dull overall tone, poor quality, visible JPEG compression, ugly, incomplete, extra fingers, poorly drawn hands, malformed face, deformed, disfigured, distorted limbs, fused fingers, static image, cluttered background, three legs, crowd in the background, walking upside down.
Before activating LoRA → After activating LoRA
Sword and Magic
Prompt: An ancient mythology scene depicting a battle between a hero and a dragon, with steep cliffs in the background. The hero wears armor, wields a shining sword, and the dragon spreads its massive wings, ready to unleash flames.
Negative Prompt: Overly bright colors, overexposure, static, blurry details, subtitles, artistic style, painting, still image, dull overall tone, poor quality, visible JPEG compression, ugly, incomplete, extra fingers, poorly drawn hands, malformed face, deformed, disfigured, distorted limbs, fused fingers, static image, cluttered background, three legs, crowd in the background, walking upside down.
----------------
Wan2.1-1.3b-lora-exvideo-v1:
Model Overview
This LoRA model is trained based on the Wan2.1-1.3B model using the DiffSynth-Studio framework.
It enables video duration extension: once activated, this LoRA allows the generation of videos twice as long as usual.
Recommended Settings
num_frames = 161
lora_alpha = 1.0
Model Effects
📷 Documentary Photography Style
Prompt: A playful little dog wearing black sunglasses runs swiftly across a green lawn. Its fur is light brown, its ears perked up, and its expression is focused yet joyful. The sunlight highlights its fur, making it appear particularly soft and shiny. In the background, a vast meadow dotted with a few wildflowers stretches under a blue sky with scattered white clouds. The perspective is dynamic, capturing the motion of the dog's run and the vibrancy of the surrounding landscape. Side-moving camera, medium shot.
🎨 High-Definition 3D Texture
Prompt: A small white cat sprints forward on a 10-meter-high platform, then performs a backflip dive into the water. Its fur is silky, its gaze sharp, and its movements fluid and natural. In the background, a pristine blue swimming pool with a smooth and calm surface. At the moment of the jump, a spotlight from above illuminates the cat, creating a striking contrast between light and shadow. The water splashes are sharp and precise, producing a visually spectacular effect. C4D rendering, dynamic close-up.
🎭 Japanese Anime Style
Prompt: On a city street corner, a black cat crouches under a lamppost, gazing into the distance at the neon lights. Suddenly, a blue light beam descends from the sky, swiftly enveloping its body. The cat begins to levitate, its black fur slowly dissolving into the air as its body elongates. Its fur transforms into a sleek black suit, revealing a slender silhouette. Its cat ears disappear, and its facial features become human, taking on the appearance of a handsome young man with a cold gaze. He lands lightly, his suit billowing slightly in the night breeze, as the blue light fades away—an elegant and mysterious young man from the future.
🌆 Wide-Angle City Scene
Prompt: The camera provides an overview of a bustling city street. On a wide sidewalk, pedestrians move about, creating a lively and dynamic urban tableau.
Usage Instructions
This LoRA model is designed to extend video duration while maintaining visual quality.
For optimal results, set num_frames to 161 or adjust it according to your needs. |
anmolagarwal999/Qwen2.5-0.5B-Instruct__sft_saved__math_dataset_based_on_gt_reasoning_trace_epoch_510 | anmolagarwal999 | "2025-05-06T20:56:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T20:55:32Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 has been in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF | mradermacher | "2025-05-06T20:56:00Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"it",
"en",
"dataset:gsarti/clean_mc4_it",
"dataset:FreedomIntelligence/alpaca-gpt4-italian",
"base_model:e-palmisano/Qwen2-1.5B-ITA-Instruct",
"base_model:quantized:e-palmisano/Qwen2-1.5B-ITA-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-06T20:10:00Z" | ---
base_model: e-palmisano/Qwen2-1.5B-ITA-Instruct
datasets:
- gsarti/clean_mc4_it
- FreedomIntelligence/alpaca-gpt4-italian
language:
- it
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/e-palmisano/Qwen2-1.5B-ITA-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
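For example, a single quant file can be fetched programmatically with `huggingface_hub` (the filename below is the i1-Q4_K_M entry from the table that follows):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF",
    filename="Qwen2-1.5B-ITA-Instruct.i1-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file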
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 0.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 0.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 0.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 1.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q4_1.gguf) | i1-Q4_1 | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2-1.5B-ITA-Instruct-i1-GGUF/resolve/main/Qwen2-1.5B-ITA-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 1.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ma921/gpt2-large_f_dpo_imdb_noise30_epoch5 | ma921 | "2025-05-06T20:55:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:ma921/gpt2-large-sft-imdb",
"base_model:finetune:ma921/gpt2-large-sft-imdb",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T20:54:07Z" | ---
library_name: transformers
license: mit
base_model: ma921/gpt2-large-sft-imdb
tags:
- generated_from_trainer
model-index:
- name: gpt2-large_f_dpo_imdb_noise30_epoch5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large_f_dpo_imdb_noise30_epoch5
This model is a fine-tuned version of [ma921/gpt2-large-sft-imdb](https://huggingface.co/ma921/gpt2-large-sft-imdb) on an unknown dataset.
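A minimal inference sketch, assuming standard `transformers` text-generation usage (the prompt is illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="ma921/gpt2-large_f_dpo_imdb_noise30_epoch5")
print(generator("This movie was", max_new_tokens=40)[0]["generated_text"])
```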
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
anmolagarwal999/Qwen2.5-0.5B-Instruct__sft_saved__math_dataset_based_on_gt_reasoning_trace_epoch_460 | anmolagarwal999 | "2025-05-06T20:54:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T20:53:50Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 has been in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
mlfoundations-dev/f1_avg_domain | mlfoundations-dev | "2025-05-06T20:54:08Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-04T14:51:59Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: f1_avg_domain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# f1_avg_domain
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/f1_avg_domain dataset.
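A minimal inference sketch, assuming standard `transformers` chat usage for this Qwen2.5-based fine-tune (the question is illustrative):
```python
from transformers import pipeline

chat = pipeline("text-generation", model="mlfoundations-dev/f1_avg_domain", device_map="auto")
messages = [{"role": "user", "content": "Explain gradient accumulation in one sentence."}]
print(chat(messages, max_new_tokens=64)[0]["generated_text"])
```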
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- gradient_accumulation_steps: 16
- total_train_batch_size: 512
- total_eval_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3
|
anmolagarwal999/Qwen2.5-0.5B-Instruct__sft_saved__math_dataset_based_on_gt_reasoning_trace_epoch_430 | anmolagarwal999 | "2025-05-06T20:53:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T20:52:51Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 has been in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
anmolagarwal999/Qwen2.5-0.5B-Instruct__sft_saved__math_dataset_based_on_gt_reasoning_trace_epoch_380 | anmolagarwal999 | "2025-05-06T20:51:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T20:51:17Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Paramaters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code of Qwen2.5 has been in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Here is a code snippet showing how to use `apply_chat_template` to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
buelfhood/irplag_codebert_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_pfeiffer | buelfhood | "2025-05-06T20:50:08Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"region:us"
] | null | "2025-05-06T20:50:05Z" | ---
tags:
- adapter-transformers
- roberta
---
# Adapter `buelfhood/irplag_codebert_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_pfeiffer` for microsoft/codebert-base
An [adapter](https://adapterhub.ml) for the `microsoft/codebert-base` model that was trained on an unspecified dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/codebert-base")
adapter_name = model.load_adapter("buelfhood/irplag_codebert_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_pfeiffer", set_active=True)
```
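As a possible follow-up, the loaded adapter (including its head) can be saved locally with the Adapters library (the output directory below is a placeholder):
```python
# Persist the adapter and its classification head for offline reuse.
model.save_adapter("./irplag_codebert_adapter", adapter_name)
```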
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
anmolagarwal999/Qwen2.5-0.5B-Instruct__sft_saved__math_dataset_based_on_gt_reasoning_trace_epoch_310 | anmolagarwal999 | "2025-05-06T20:49:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T20:49:11Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and has greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context Support** up to 128K tokens and can generate up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens, with generation of up to 8,192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
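If you prefer to fail fast, you can check the installed version before loading the model. This is a small illustrative snippet rather than part of the official card; it assumes only the `packaging` package that ships as a `transformers` dependency:
```python
import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0; raise early on older installs.
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2.5; "
        "upgrade with `pip install -U transformers`."
    )
```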
## Quickstart
Below is a code snippet showing how to load the tokenizer and model and how to generate content with `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
anmolagarwal999/Qwen2.5-0.5B-Instruct__sft_saved__math_dataset_based_on_gt_reasoning_trace_epoch_270 | anmolagarwal999 | "2025-05-06T20:48:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T20:48:10Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** for up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens, with generation of up to 8,192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
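If you prefer to fail fast, you can check the installed version before loading the model. This is a small illustrative snippet rather than part of the official card; it assumes only the `packaging` package that ships as a `transformers` dependency:
```python
import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0; raise early on older installs.
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2.5; "
        "upgrade with `pip install -U transformers`."
    )
```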
## Quickstart
Below is a code snippet showing how to load the tokenizer and model and how to generate content with `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
buelfhood/irplag_plbart_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_houlsby | buelfhood | "2025-05-06T20:44:36Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"plbart",
"region:us"
] | null | "2025-05-06T20:44:33Z" | ---
tags:
- plbart
- adapter-transformers
---
# Adapter `buelfhood/irplag_plbart_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_houlsby` for uclanlp/plbart-base
An [adapter](https://adapterhub.ml) for the `uclanlp/plbart-base` model, trained on an unspecified dataset, that includes a prediction head for classification.
This adapter was created for use with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("uclanlp/plbart-base")
adapter_name = model.load_adapter("buelfhood/irplag_plbart_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_houlsby", set_active=True)
```
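Once active, inference follows the usual classification pattern. The sketch below is illustrative only; the tokenizer, the example inputs, and the batching are assumptions, and the label semantics are not documented.
```python
import torch
from transformers import AutoTokenizer
from adapters import AutoAdapterModel

tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
model = AutoAdapterModel.from_pretrained("uclanlp/plbart-base")
model.load_adapter(
    "buelfhood/irplag_plbart_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_houlsby",
    set_active=True,
)
model.eval()

# Batched classification of two code snippets (purely illustrative inputs).
batch = tokenizer(
    ["int add(int a, int b) { return a + b; }",
     "public static void main(String[] args) {}"],
    return_tensors="pt",
    padding=True,
    truncation=True,
    max_length=512,
)
with torch.no_grad():
    preds = model(**batch).logits.argmax(dim=-1)
print(preds)  # predicted class ids; the label mapping is undocumented
```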
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
anmolagarwal999/Qwen2.5-0.5B-Instruct__sft_saved__math_dataset_based_on_gt_reasoning_trace_epoch_130 | anmolagarwal999 | "2025-05-06T20:44:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T20:44:07Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** for up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens, with generation of up to 8,192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
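If you prefer to fail fast, you can check the installed version before loading the model. This is a small illustrative snippet rather than part of the official card; it assumes only the `packaging` package that ships as a `transformers` dependency:
```python
import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0; raise early on older installs.
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2.5; "
        "upgrade with `pip install -U transformers`."
    )
```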
## Quickstart
Below is a code snippet showing how to load the tokenizer and model and how to generate content with `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
anmolagarwal999/Qwen2.5-0.5B-Instruct__sft_saved__math_dataset_based_on_gt_reasoning_trace_epoch_100 | anmolagarwal999 | "2025-05-06T20:43:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T20:43:11Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** for up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens, with generation of up to 8,192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
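If you prefer to fail fast, you can check the installed version before loading the model. This is a small illustrative snippet rather than part of the official card; it assumes only the `packaging` package that ships as a `transformers` dependency:
```python
import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0; raise early on older installs.
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2.5; "
        "upgrade with `pip install -U transformers`."
    )
```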
## Quickstart
Below is a code snippet showing how to load the tokenizer and model and how to generate content with `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
mxmcc/xLAM-2-32b-fc-r-mlx-6Bit | mxmcc | "2025-05-06T20:42:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"function-calling",
"LLM Agent",
"tool-use",
"llama",
"qwen",
"pytorch",
"LLaMA-factory",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"dataset:Salesforce/APIGen-MT-5k",
"dataset:Salesforce/xlam-function-calling-60k",
"base_model:Salesforce/xLAM-2-32b-fc-r",
"base_model:quantized:Salesforce/xLAM-2-32b-fc-r",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"region:us"
] | text-generation | "2025-05-06T20:40:45Z" | ---
license: cc-by-nc-4.0
datasets:
- Salesforce/APIGen-MT-5k
- Salesforce/xlam-function-calling-60k
language:
- en
pipeline_tag: text-generation
tags:
- function-calling
- LLM Agent
- tool-use
- llama
- qwen
- pytorch
- LLaMA-factory
- mlx
- mlx-my-repo
library_name: transformers
base_model: Salesforce/xLAM-2-32b-fc-r
---
# mxmcc/xLAM-2-32b-fc-r-mlx-6Bit
The model [mxmcc/xLAM-2-32b-fc-r-mlx-6Bit](https://huggingface.co/mxmcc/xLAM-2-32b-fc-r-mlx-6Bit) was converted to MLX format from [Salesforce/xLAM-2-32b-fc-r](https://huggingface.co/Salesforce/xLAM-2-32b-fc-r) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mxmcc/xLAM-2-32b-fc-r-mlx-6Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
anmolagarwal999/Qwen2.5-0.5B-Instruct__sft_saved__math_dataset_based_on_gt_reasoning_trace_epoch_30 | anmolagarwal999 | "2025-05-06T20:41:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T20:41:06Z" | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-0.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-0.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** for up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 0.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 0.49B
- Number of Parameters (Non-Embedding): 0.36B
- Number of Layers: 24
- Number of Attention Heads (GQA): 14 for Q and 2 for KV
- Context Length: Full 32,768 tokens, with generation of up to 8,192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
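If you prefer to fail fast, you can check the installed version before loading the model. This is a small illustrative snippet rather than part of the official card; it assumes only the `packaging` package that ships as a `transformers` dependency:
```python
import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0; raise early on older installs.
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2.5; "
        "upgrade with `pip install -U transformers`."
    )
```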
## Quickstart
Below is a code snippet showing how to load the tokenizer and model and how to generate content with `apply_chat_template`.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
fixie-ai/ultravox-v0_3-llama-3_2-1b | fixie-ai | "2025-05-06T20:37:16Z" | 65 | 6 | transformers | [
"transformers",
"safetensors",
"ultravox",
"feature-extraction",
"audio-text-to-text",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | audio-text-to-text | "2024-10-18T13:35:43Z" | ---
library_name: transformers
pipeline_tag: audio-text-to-text
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF | mradermacher | "2025-05-06T20:36:27Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:umar141/Gemma_1B_Baro_v2_vllm",
"base_model:quantized:umar141/Gemma_1B_Baro_v2_vllm",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-06T20:06:00Z" | ---
base_model: umar141/Gemma_1B_Baro_v2_vllm
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/umar141/Gemma_1B_Baro_v2_vllm
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
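For a concrete starting point, the snippet below shows one common way to run a quant from this repo locally. It is a minimal sketch under stated assumptions: `llama-cpp-python` is installed, the Q4_K_M file has already been downloaded, and the filename, prompt, and parameters are purely illustrative.
```python
from llama_cpp import Llama

# Illustrative path: a quant file downloaded from this repository.
llm = Llama(
    model_path="Gemma_1B_Baro_v2_vllm.i1-Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to taste
)

out = llm("Explain in one sentence what an imatrix quant is.", max_tokens=64)
print(out["choices"][0]["text"])
```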
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ1_S.gguf) | i1-IQ1_S | 0.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ1_M.gguf) | i1-IQ1_M | 0.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ2_S.gguf) | i1-IQ2_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ2_M.gguf) | i1-IQ2_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ3_S.gguf) | i1-IQ3_S | 0.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q2_K.gguf) | i1-Q2_K | 0.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ3_M.gguf) | i1-IQ3_M | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q4_0.gguf) | i1-Q4_0 | 0.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q4_1.gguf) | i1-Q4_1 | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.i1-Q6_K.gguf) | i1-Q6_K | 1.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
pengguilan/lora_adapter_dpo_mistral | pengguilan | "2025-05-06T20:35:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T15:01:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alexshah/armembed | alexshah | "2025-05-06T20:33:21Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"feature-extraction",
"sentence-transformers",
"hy",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-05-05T16:50:55Z" | ---
library_name: transformers
language:
- hy
tags:
- sentence-transformers
---
# ArmEmbed - Text Embedding Model for Armenian
This embedding model is built on a Llama-based language model pre-trained on Armenian text and further adapted using LoRA on additional Armenian data. It produces 2048-dimensional embeddings.
## Usage
Below is an example of encoding queries and passages from the MS-MARCO passage ranking dataset.
## Sentence Transformers
```python
from sentence_transformers import SentenceTransformer
queries = [
"Ի՞նչ է պատահում հյուսվածքներին, ինչպիսիք են սիրտը կամ ուղեղը, եթե թթվածնով հարուստ արյունը ժամանակին չի մատակարարվում:",
"Կարո՞ղ է արդյոք ոստիկանությունը հետախուզման թույլտվություն ստանալ այն բանից հետո, երբ նրանք տեսել են ապացույցներ:",
]
passages = [
"Եվ․․․ Կիսալուսնաձև փականները կանխում են արյան հետհոսքը զարկերակներից դեպի փորոքներ։ Բացատրեք, թե ինչ է պատահում հյուսվածքներին, ինչպիսիք են սիրտը կամ ուղեղը, եթե թթվածնով հարուստ արյունը ժամանակին չի մատակարարվում: Օրգանի հյուսվածքը սկսում է մահանալ։",
"Ոստիկանությունը կտրամադրի իր սեփական ապացույցները հետախուզման թույլտվության համար, և կասկածյալը ներկա չէ, երբ թույլտվություն է տրվում: Երբ հետախուզման թույլտվություն է ստացվում, ոստիկանությունը կարող է խուզարկել միայն թույլտվության մեջ նշված վայրը, լինի դա տուն, մեքենա, թե որոշակի արտաքին վայր:",
]
prefixed_queries = ["query: " + query for query in queries]
prefixed_passages = [" passage: " + passage for passage in passages]
sentence_transformer = SentenceTransformer("alexshah/armembed", trust_remote_code=True)
query_embeddings = sentence_transformer.encode(
prefixed_queries, normalize_embeddings=True
)
passage_embeddings = sentence_transformer.encode(
prefixed_passages, normalize_embeddings=True
)
scores = (query_embeddings @ passage_embeddings.T) * 100
print(scores.tolist())
```
## Transformers
```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer
def last_token_pool(last_hidden_states, attention_mask):
left_padding = attention_mask[:, -1].sum() == attention_mask.shape[0]
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[
torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths
]
queries = [
"Ի՞նչ է պատահում հյուսվածքներին, ինչպիսիք են սիրտը կամ ուղեղը, եթե թթվածնով հարուստ արյունը ժամանակին չի մատակարարվում:",
"Կարո՞ղ է արդյոք ոստիկանությունը հետախուզման թույլտվություն ստանալ այն բանից հետո, երբ նրանք տեսել են ապացույցներ:",
]
passages = [
"Եվ․․․ Կիսալուսնաձև փականները կանխում են արյան հետհոսքը զարկերակներից դեպի փորոքներ։ Բացատրեք, թե ինչ է պատահում հյուսվածքներին, ինչպիսիք են սիրտը կամ ուղեղը, եթե թթվածնով հարուստ արյունը ժամանակին չի մատակարարվում: Օրգանի հյուսվածքը սկսում է մահանալ։",
"Ոստիկանությունը կտրամադրի իր սեփական ապացույցները հետախուզման թույլտվության համար, և կասկածյալը ներկա չէ, երբ թույլտվություն է տրվում: Երբ հետախուզման թույլտվություն է ստացվում, ոստիկանությունը կարող է խուզարկել միայն թույլտվության մեջ նշված վայրը, լինի դա տուն, մեքենա, թե որոշակի արտաքին վայր:",
]
input_texts = ["query: " + query for query in queries] + [
" passage: " + passage for passage in passages
]
model = AutoModel.from_pretrained("alexshah/armembed")
tokenizer = AutoTokenizer.from_pretrained("alexshah/armembed")
batch_dict = tokenizer(
input_texts, max_length=512, padding=True, truncation=True, return_tensors="pt"
)
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
```
## Intended Use
### Primary Intended Uses
- Retrieval-augmented generation (RAG)
- Semantic search in Armenian
- Document similarity computation
- Cross-lingual text understanding
- Text classification tasks
- Information retrieval
## Training Data
### Dataset Details
- **Source**: Reddit dataset with English-Armenian translations
- **Size**: 0.66M text pairs
- **Content Type**: Title and body text pairs
- **Split Ratio**: 98.5% train, 1.5% test
## Training Procedure
### Training Details
- **LoRA**: rank 16, alpha 32, dropout 0.1 (16/32/0.1) on all linear layers
- **Training Duration**: ~15 hours
- **Hardware**: 4 × NVIDIA A100 (40GB) GPUs
- **Effective Batch Size**: 128 (2 per GPU × 4 GPUs, with gradient accumulation of 16)
- **Learning Rate**: 1e-4
- **Weight Decay**: 0.01
- **Warmup Steps**: 200
- **Max Sequence Length**: 512 tokens
- **FP16 Training**: Enabled
- **Gradient Clipping**: 1.0
### Optimization Configuration
- **Framework**: DeepSpeed ZeRO Stage 3
- **Optimizer**: AdamW with auto learning rate and weight decay
- **Mixed Precision**: bfloat16 (`bf16`) enabled
- **ZeRO Optimization**: Stage 3 with:
- No parameter offloading
- Overlap communication enabled
- Contiguous gradients enabled
- Auto-tuned reduce and prefetch bucket sizes
- Auto-tuned parameter persistence threshold
## Technical Specifications
- **Model Size**: ~1.24B parameters (based on a Llama-style architecture LLM pre-trained on Armenian data)
- **Embedding Dimension**: 2048
- **Max Sequence Length**: 512 tokens
- **Framework Compatibility**:
- PyTorch
- Hugging Face Transformers
- DeepSpeed
## Model Details
- **Model Name**: ArmEmbed
- **Model Type**: Text Embeddings for the Armenian Language
- **Version**: 1.0.0
- **License**: Apache 2.0
- **Last Updated**: May 2025
- **Model Architecture**: Transformer-based embeddings model
- **Input**: Armenian text
- **Output**: Dense vector embeddings (size=2048) |
mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF | mradermacher | "2025-05-06T20:32:22Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lumbering grazing antelope",
"trl",
"en",
"base_model:romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope",
"base_model:quantized:romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-06T20:14:28Z" | ---
base_model: romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope
language:
- en
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope
quantized_by: mradermacher
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lumbering grazing antelope
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
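For a concrete starting point, the snippet below shows one common way to run a quant from this repo locally. It is a minimal sketch under stated assumptions: `llama-cpp-python` is installed, the Q4_K_M file has already been downloaded, and the filename, prompt, and parameters are purely illustrative.
```python
from llama_cpp import Llama

# Illustrative path: a quant file downloaded from this repository.
llm = Llama(
    model_path="Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to taste
)

out = llm("Say hello in one short sentence.", max_tokens=32)
print(out["choices"][0]["text"])
```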
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ1_S.gguf) | i1-IQ1_S | 0.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ1_M.gguf) | i1-IQ1_M | 0.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ3_S.gguf) | i1-IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q4_1.gguf) | i1-Q4_1 | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
buelfhood/irplag_unixcoder_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_houlsby | buelfhood | "2025-05-06T20:27:52Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"region:us"
] | null | "2025-05-06T20:27:44Z" | ---
tags:
- roberta
- adapter-transformers
---
# Adapter `buelfhood/irplag_unixcoder_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_houlsby` for microsoft/unixcoder-base-nine
An [adapter](https://adapterhub.ml) for the `microsoft/unixcoder-base-nine` model that was trained on an unspecified dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/unixcoder-base-nine")
adapter_name = model.load_adapter("buelfhood/irplag_unixcoder_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_houlsby", set_active=True)
```
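After loading, the adapter's classification head can be queried like any sequence classifier. A minimal sketch continuing from the snippet above (treating the input as a pair of code snippets is an assumption suggested by the adapter's name, and the label mapping is not documented):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/unixcoder-base-nine")

# Hypothetical pair of code snippets to score (not from the training data)
inputs = tokenizer(
    "def add(a, b): return a + b",
    "def sum_two(x, y): return x + y",
    return_tensors="pt", truncation=True, max_length=512,
)

with torch.no_grad():
    logits = model(**inputs).logits  # `model` from the snippet above

print(logits.argmax(dim=-1).item())  # predicted class index
```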
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
mxmcc/Llama-xLAM-2-8b-fc-r-mlx-8Bit | mxmcc | "2025-05-06T20:26:16Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"function-calling",
"LLM Agent",
"tool-use",
"qwen",
"pytorch",
"LLaMA-factory",
"mlx",
"mlx-my-repo",
"conversational",
"en",
"dataset:Salesforce/APIGen-MT-5k",
"dataset:Salesforce/xlam-function-calling-60k",
"base_model:Salesforce/Llama-xLAM-2-8b-fc-r",
"base_model:quantized:Salesforce/Llama-xLAM-2-8b-fc-r",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"region:us"
] | text-generation | "2025-05-06T20:25:45Z" | ---
license: cc-by-nc-4.0
datasets:
- Salesforce/APIGen-MT-5k
- Salesforce/xlam-function-calling-60k
language:
- en
pipeline_tag: text-generation
tags:
- function-calling
- LLM Agent
- tool-use
- llama
- qwen
- pytorch
- LLaMA-factory
- mlx
- mlx-my-repo
library_name: transformers
base_model: Salesforce/Llama-xLAM-2-8b-fc-r
---
# mxmcc/Llama-xLAM-2-8b-fc-r-mlx-8Bit
The Model [mxmcc/Llama-xLAM-2-8b-fc-r-mlx-8Bit](https://huggingface.co/mxmcc/Llama-xLAM-2-8b-fc-r-mlx-8Bit) was converted to MLX format from [Salesforce/Llama-xLAM-2-8b-fc-r](https://huggingface.co/Salesforce/Llama-xLAM-2-8b-fc-r) using mlx-lm version **0.22.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mxmcc/Llama-xLAM-2-8b-fc-r-mlx-8Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
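For quick experiments without writing Python, mlx-lm also ships a command-line entry point. A minimal sketch (flag names reflect current mlx-lm releases; verify against your installed version):
```bash
mlx_lm.generate --model mxmcc/Llama-xLAM-2-8b-fc-r-mlx-8Bit \
  --prompt "hello" --max-tokens 256
```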
|
mradermacher/e5-small-i1-GGUF | mradermacher | "2025-05-06T20:25:16Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"sentence-transformers",
"en",
"base_model:intfloat/e5-small",
"base_model:quantized:intfloat/e5-small",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"feature-extraction"
] | sentence-similarity | "2025-05-06T20:21:07Z" | ---
base_model: intfloat/e5-small
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/intfloat/e5-small
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/e5-small-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
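Since e5-small is an embedding model rather than a text generator, a quick smoke test is llama.cpp's `llama-embedding` tool. A minimal sketch, using a file name from the table below and the `query:` prefix that E5 models expect (binary names vary across llama.cpp builds):
```bash
# these quants are single files, so no concatenation step is needed
llama-embedding -m e5-small.i1-Q4_K_M.gguf \
  -p "query: how do imatrix quants differ from static quants?"
```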
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ3_S.gguf) | i1-IQ3_S | 0.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q2_K.gguf) | i1-Q2_K | 0.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ3_M.gguf) | i1-IQ3_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q4_0.gguf) | i1-Q4_0 | 0.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q4_1.gguf) | i1-Q4_1 | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-i1-GGUF/resolve/main/e5-small.i1-Q6_K.gguf) | i1-Q6_K | 0.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kush137/gemma-3-4b-Reasoning | kush137 | "2025-05-06T20:25:02Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:adapter:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"region:us"
] | null | "2025-05-06T18:58:18Z" | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
buelfhood/irplag_unixcoder_ep50_bs16_lr0_0001_l512_s42_ppy_f_beta_score_houlsby | buelfhood | "2025-05-06T20:24:48Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"region:us"
] | null | "2025-05-06T20:24:44Z" | ---
tags:
- adapter-transformers
- roberta
---
# Adapter `buelfhood/irplag_unixcoder_ep50_bs16_lr0_0001_l512_s42_ppy_f_beta_score_houlsby` for microsoft/unixcoder-base-nine
An [adapter](https://adapterhub.ml) for the `microsoft/unixcoder-base-nine` model that was trained on an unspecified dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/unixcoder-base-nine")
adapter_name = model.load_adapter("buelfhood/irplag_unixcoder_ep50_bs16_lr0_0001_l512_s42_ppy_f_beta_score_houlsby", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
falltalk/falltalk4 | falltalk | "2025-05-06T20:24:39Z" | 5 | 0 | transformers | [
"transformers",
"onnx",
"safetensors",
"license:agpl-3.0",
"endpoints_compatible",
"region:us"
] | null | "2024-07-25T15:47:07Z" | ---
license: agpl-3.0
---
FallTalk AI Models Usage Agreement
This code and the accompanying FallTalk AI models are provided subject to the terms and conditions of the End User License Agreement (EULA) of Zenimax Media, Inc., the original rights holder of the Fallout franchise. By using this code or the FallTalk AI models, you agree to comply with the following permitted and prohibited uses, as well as all terms outlined in the Zenimax Media EULA.
By accessing and using FallTalk, you hereby agree to the following terms and conditions:
Public Disclosure of AI Synthesis: You are obligated to clearly inform any and all end-users that the speech content they are interacting with has been synthesized using the FallTalk AI models. This disclosure should be made in a manner that is prominent and easily understandable.
Permitted Use: You agree to use the FallTalk AI models exclusively for the following purposes:
• Personal Use: Utilizing the FallTalk AI models for personal, non-commercial projects and activities that do not involve the distribution or sharing of synthesized speech content with others.
• Research: Conducting academic or scientific research in the field of artificial intelligence, speech synthesis, or related disciplines.
• Non-Commercial Mod Creation: Developing and distributing modifications (mods) for the game Fallout 4 that are available to the public free of charge.
Prohibited Use: You are expressly prohibited from using the FallTalk AI models for any commercial purposes, including but not limited to:
• Selling or licensing the synthesized speech content.
• Incorporating the synthesized speech into any commercial product or service.
• Creating or distributing any pornographic or adult material.
Compliance with Laws and Regulations: You agree to comply with all applicable laws, regulations, and ethical standards in your use of the FallTalk models. This includes, but is not limited to, laws concerning intellectual property, privacy, and consumer protection. We assume no responsibility for any illegal use of the codebase.
Acknowledgment of Rights Holder: Zenimax Media, Inc. is the original rights holder of the Fallout franchise and all related intellectual property. This code and the FallTalk AI models are provided for the specific purposes outlined above, and any use outside of these parameters may violate Zenimax Media's intellectual property rights.
- [Zenimax EULA](https://bethesda.net/data/eula/en.html)
- [GitHub](https://github.com/falltalk/falltalk4)
- [Nexus Mods](https://www.nexusmods.com/fallout4/mods/86525) |
mradermacher/e5-small-GGUF | mradermacher | "2025-05-06T20:23:11Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mteb",
"Sentence Transformers",
"sentence-similarity",
"sentence-transformers",
"en",
"base_model:intfloat/e5-small",
"base_model:quantized:intfloat/e5-small",
"license:mit",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | sentence-similarity | "2025-05-06T17:57:55Z" | ---
base_model: intfloat/e5-small
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- mteb
- Sentence Transformers
- sentence-similarity
- sentence-transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/intfloat/e5-small
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/e5-small-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.Q8_0.gguf) | Q8_0 | 0.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/e5-small-GGUF/resolve/main/e5-small.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_houlsby | buelfhood | "2025-05-06T20:23:04Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"region:us"
] | null | "2025-05-06T20:23:02Z" | ---
tags:
- roberta
- adapter-transformers
---
# Adapter `buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_houlsby` for microsoft/graphcodebert-base
An [adapter](https://adapterhub.ml) for the `microsoft/graphcodebert-base` model that was trained on an unspecified dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/graphcodebert-base")
adapter_name = model.load_adapter("buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_houlsby", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF | mradermacher | "2025-05-06T20:22:49Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"unsloth",
"en",
"base_model:toasterai/SambaInstruct-Alpha-v1.1-12B",
"base_model:quantized:toasterai/SambaInstruct-Alpha-v1.1-12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-06T16:33:15Z" | ---
base_model: toasterai/SambaInstruct-Alpha-v1.1-12B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- unsloth
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/toasterai/SambaInstruct-Alpha-v1.1-12B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF/resolve/main/SambaInstruct-Alpha-v1.1-12B.Q2_K.gguf) | Q2_K | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF/resolve/main/SambaInstruct-Alpha-v1.1-12B.Q3_K_S.gguf) | Q3_K_S | 5.6 | |
| [GGUF](https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF/resolve/main/SambaInstruct-Alpha-v1.1-12B.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF/resolve/main/SambaInstruct-Alpha-v1.1-12B.Q3_K_L.gguf) | Q3_K_L | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF/resolve/main/SambaInstruct-Alpha-v1.1-12B.IQ4_XS.gguf) | IQ4_XS | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF/resolve/main/SambaInstruct-Alpha-v1.1-12B.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF/resolve/main/SambaInstruct-Alpha-v1.1-12B.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF/resolve/main/SambaInstruct-Alpha-v1.1-12B.Q5_K_S.gguf) | Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF/resolve/main/SambaInstruct-Alpha-v1.1-12B.Q5_K_M.gguf) | Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF/resolve/main/SambaInstruct-Alpha-v1.1-12B.Q6_K.gguf) | Q6_K | 10.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SambaInstruct-Alpha-v1.1-12B-GGUF/resolve/main/SambaInstruct-Alpha-v1.1-12B.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF | mradermacher | "2025-05-06T20:20:03Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am lumbering grazing antelope",
"trl",
"en",
"base_model:romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope",
"base_model:quantized:romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-06T17:55:48Z" | ---
base_model: romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope
language:
- en
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope
quantized_by: mradermacher
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am lumbering grazing antelope
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/romero-p/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
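As a concrete starting point, the Q4_K_M file from the table below can be run in chat mode with llama.cpp. A minimal sketch (binary and flag names vary between llama.cpp versions):
```bash
llama-cli -m Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q4_K_M.gguf \
  -cnv -p "You are a helpful assistant."
```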
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_grazing_antelope.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
macpaw-research/o10r_parameter_parcing_v3_0_mlx_q | macpaw-research | "2025-05-06T20:18:12Z" | 0 | 0 | mlx | [
"mlx",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:macpaw-research/o10r_parameter_parcing_v3_0",
"base_model:quantized:macpaw-research/o10r_parameter_parcing_v3_0",
"4-bit",
"region:us"
] | text-generation | "2025-05-06T20:16:58Z" | ---
base_model: macpaw-research/o10r_parameter_parcing_v3_0
library_name: mlx
pipeline_tag: text-generation
tags:
- mlx
---
# macpaw-research/o10r_parameter_parcing_v3_0_mlx_q
This model [macpaw-research/o10r_parameter_parcing_v3_0_mlx_q](https://huggingface.co/macpaw-research/o10r_parameter_parcing_v3_0_mlx_q) was
converted to MLX format from [macpaw-research/o10r_parameter_parcing_v3_0](https://huggingface.co/macpaw-research/o10r_parameter_parcing_v3_0)
using mlx-lm version **0.24.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("macpaw-research/o10r_parameter_parcing_v3_0_mlx_q")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
mo-anas/qwen2.5_7b_lora_model | mo-anas | "2025-05-06T20:17:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T20:17:18Z" | ---
base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mo-anas
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF | mradermacher | "2025-05-06T20:15:36Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"sentence-transformers",
"feature-extraction",
"sentence-similarity",
"en",
"base_model:sentence-transformers/msmarco-MiniLM-L12-cos-v5",
"base_model:quantized:sentence-transformers/msmarco-MiniLM-L12-cos-v5",
"endpoints_compatible",
"region:us",
"imatrix"
] | feature-extraction | "2025-05-06T20:12:50Z" | ---
base_model: sentence-transformers/msmarco-MiniLM-L12-cos-v5
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/sentence-transformers/msmarco-MiniLM-L12-cos-v5
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ1_S.gguf) | i1-IQ1_S | 0.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ1_M.gguf) | i1-IQ1_M | 0.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ2_S.gguf) | i1-IQ2_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ2_M.gguf) | i1-IQ2_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ3_S.gguf) | i1-IQ3_S | 0.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q2_K.gguf) | i1-Q2_K | 0.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ3_M.gguf) | i1-IQ3_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q4_0.gguf) | i1-Q4_0 | 0.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q4_1.gguf) | i1-Q4_1 | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.1 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/msmarco-MiniLM-L12-cos-v5-i1-GGUF/resolve/main/msmarco-MiniLM-L12-cos-v5.i1-Q6_K.gguf) | i1-Q6_K | 0.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
DevQuasar/nicoboss.Qwen3-32B-Uncensored-GGUF | DevQuasar | "2025-05-06T20:14:12Z" | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:nicoboss/Qwen3-32B-Uncensored",
"base_model:quantized:nicoboss/Qwen3-32B-Uncensored",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-05-06T04:47:35Z" | ---
base_model:
- nicoboss/Qwen3-32B-Uncensored
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [nicoboss/Qwen3-32B-Uncensored](https://huggingface.co/nicoboss/Qwen3-32B-Uncensored)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
outerbound/Orpheus-custom-gguf | outerbound | "2025-05-06T20:14:04Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit",
"base_model:quantized:unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-06T20:12:18Z" | ---
base_model: unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** outerbound
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/gemma-3-4b-novision-i1-GGUF | mradermacher | "2025-05-06T20:13:02Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:gghfez/gemma-3-4b-novision",
"base_model:quantized:gghfez/gemma-3-4b-novision",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-06T16:48:45Z" | ---
base_model: gghfez/gemma-3-4b-novision
language:
- en
library_name: transformers
license: gemma
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/gghfez/gemma-3-4b-novision
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gemma-3-4b-novision-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ1_M.gguf) | i1-IQ1_M | 1.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ3_XS.gguf) | i1-IQ3_XS | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-3-4b-novision-i1-GGUF/resolve/main/gemma-3-4b-novision.i1-Q6_K.gguf) | i1-Q6_K | 3.3 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/gemma-2b-orpo-i1-GGUF | mradermacher | "2025-05-06T20:13:02Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"orpo",
"generated_from_trainer",
"en",
"dataset:alvarobartt/dpo-mix-7k-simplified",
"base_model:anakin87/gemma-2b-orpo",
"base_model:quantized:anakin87/gemma-2b-orpo",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-06T18:46:08Z" | ---
base_model: anakin87/gemma-2b-orpo
datasets:
- alvarobartt/dpo-mix-7k-simplified
language:
- en
library_name: transformers
license: other
license_link: https://ai.google.dev/gemma/terms
license_name: gemma-terms-of-use
quantized_by: mradermacher
tags:
- trl
- orpo
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/anakin87/gemma-2b-orpo
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gemma-2b-orpo-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q4_1.gguf) | i1-Q4_1 | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-orpo-i1-GGUF/resolve/main/gemma-2b-orpo.i1-Q6_K.gguf) | i1-Q6_K | 2.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/Gemma_1B_Baro_v2_vllm-GGUF | mradermacher | "2025-05-06T20:13:02Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"en",
"base_model:umar141/Gemma_1B_Baro_v2_vllm",
"base_model:quantized:umar141/Gemma_1B_Baro_v2_vllm",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-06T17:48:52Z" | ---
base_model: umar141/Gemma_1B_Baro_v2_vllm
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/umar141/Gemma_1B_Baro_v2_vllm
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.Q3_K_S.gguf) | Q3_K_S | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.Q3_K_L.gguf) | Q3_K_L | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.Q4_K_S.gguf) | Q4_K_S | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.Q5_K_M.gguf) | Q5_K_M | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.Q6_K.gguf) | Q6_K | 1.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma_1B_Baro_v2_vllm-GGUF/resolve/main/Gemma_1B_Baro_v2_vllm.f16.gguf) | f16 | 2.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
billingsmoore/mlotsawa-ground-small | billingsmoore | "2025-05-06T20:11:33Z" | 15 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"translation",
"Tibetan",
"Buddhism",
"dharma",
"bo",
"en",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | translation | "2025-04-23T15:25:07Z" | ---
library_name: transformers
tags:
- translation
- Tibetan
- Buddhism
- dharma
license: mit
language:
- bo
- en
metrics:
- bleu
- ter
- chrf
base_model:
- google-t5/t5-small
pipeline_tag: translation
---
# Model Card for mlotsawa-ground-small
This model is a transformers machine translation model for translating Tibetan Buddhist texts to English, produced as part of the larger [MLotsawa project](https://github.com/billingsmoore/MLotsawa).
## Model Details
### Model Description
This model is a finetuned T5 model (small size) with 60 million parameters. It is intended for translation of Tibetan Buddhist texts into English. It expects input in Uchen script.
This model uses the **[getok](https://huggingface.co/billingsmoore/getok-v0)** tokenizer.
Details on the training data and procedure can be found below.
This model is a *ground* model in that, while its performance is reasonably good, it is intended to be used as a base for further finetuning on either a larger corpus or a tradition-specific (e.g., Dzogchen) corpus for improved translation quality.
- **Developed by:** billingsmoore
- **Model type:** translation
- **Languages:** Tibetan, English
- **License:** MIT
- **Finetuned from model:** google-t5/t5-small
### Model Sources
- **Repository:** [MLotsawa on GitHub](https://github.com/billingsmoore/MLotsawa)
## Uses
This model may be used directly for translation, or further finetuned for improved performance.
### Direct Use
This model can be used directly for translation using a transformers pipeline as in the code block below.
```python
from transformers import pipeline
pipe = pipeline('translation', 'billingsmoore/mlotsawa-ground-small', device='cpu') # select a device of your choice (e.g. 'cuda:0')
input = ["ཁྱེད་ལ་བསྟོད་ཅིང་གསོལ་བ་བཏབ་པའི་མཐུས༔",
"བདག་གི་ཚེ་བསོད་དཔལ་འབྱོར་རྒྱས་པ་དང་༔",
"འཇིགས་པ་བཅུ་དྲུག་རྐྱེན་ངན་བར་ཆད་སོལ༔"]
output = pipe(input)
translation = [elt['translation_text'] for elt in output]
print(translation)
```
The code above will produce the following output.
>['Through the power of praising and praying to you', 'Increase my lifespan merit and prosperity', 'Remove the sixteen fears and obstacles of adversity.']
Alternatively the model can be used with a graphical user interface following [the instructions found here.](https://pypi.org/project/mlotsawa/)
### Downstream Use
The performance of this model can be improved with additional finetuning. You might finetune on a larger dataset for better general performance, or on a specific set of material for improved performance on that subset (e.g., Dzogchen texts).
The model can be finetuned following the recipe below.
```python
# Load Your Data
from datasets import load_dataset
dataset = load_dataset(<your dataset>)
# Load the Model and Tokenizer
from transformers import AutoTokenizer, DataCollatorForSeq2Seq, AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("billingsmoore/mlotsawa-ground-small", device_map="cuda:0") # this line assumes you want to use a single CUDA-enabled GPU
tokenizer = AutoTokenizer.from_pretrained('billingsmoore/mlotsawa-ground-small')
data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, model=model)
# Preprocess the Data
def translation_preprocess_function(examples):
# Prepare translation inputs and targets
translation_inputs = ['Translate Tibetan to English: ' + example for example in examples['bo']]
translation_targets = [example for example in examples['en']]
# Tokenize translation inputs and targets
translation_model_inputs = tokenizer(translation_inputs, text_target=translation_targets,
max_length=256, truncation=True, padding="max_length")
return translation_model_inputs
tokenized_dataset = dataset.map(translation_preprocess_function, batched=True)
# Define Evaluation Metrics
import numpy as np
import evaluate
# Load BLEU and CHRF metrics
bleu_metric = evaluate.load("sacrebleu")
chrf_metric = evaluate.load("chrf")
ter_metric = evaluate.load("ter")
def postprocess_text(preds, labels):
preds = [pred.strip() for pred in preds]
labels = [[label.strip()] for label in labels]
return preds, labels
def compute_metrics(eval_preds):
preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
# Decode predictions and labels
preds = np.where(preds != -100, preds, tokenizer.pad_token_id)
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
# Postprocess text
decoded_preds, decoded_labels = postprocess_text(decoded_preds, decoded_labels)
# Compute BLEU score
bleu_result = bleu_metric.compute(predictions=decoded_preds, references=decoded_labels)
bleu_score = bleu_result["score"]
# Compute CHRF score
chrf_result = chrf_metric.compute(predictions=decoded_preds, references=decoded_labels)
chrf_score = chrf_result["score"]
# Compute TER score
ter_result = ter_metric.compute(predictions=decoded_preds, references=decoded_labels)
ter_score = ter_result["score"]
# Return rounded results
metrics = {
"bleu": round(bleu_score, 4),
"chrf": round(chrf_score, 4),
"ter": round(ter_score, 4)
}
#print("Computed Metrics:", metrics)
return metrics
# Set Up Training Arguments and Optimizer
from transformers import Seq2SeqTrainingArguments, Seq2SeqTrainer, Adafactor, EarlyStoppingCallback
from accelerate import Accelerator
accelerator = Accelerator()
optimizer = Adafactor(
model.parameters(),
scale_parameter=True,
relative_step=False,
warmup_init=False,
lr=3e-4
)
model, optimizer = accelerator.prepare(model, optimizer)
training_args = Seq2SeqTrainingArguments(
output_dir=f"output-dir", # select an output directory of your choice
auto_find_batch_size=True,
predict_with_generate=True,
fp16=False,
push_to_hub=False,
eval_strategy='epoch',
save_strategy='epoch',
num_train_epochs=100, # select your preferred number of training epochs
load_best_model_at_end=True,
)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
train_dataset=tokenized_dataset['train'],
eval_dataset=tokenized_dataset['dev'],
processing_class=tokenizer,
optimizers=(optimizer, None),
data_collator=data_collator,
compute_metrics=compute_metrics,
callbacks=[EarlyStoppingCallback()]
)
trainer.train()
```
## Bias, Risks, and Limitations
This model is intended for the translation of Buddhist texts. Because of the complexity and importance of this material, all translations should be treated as preliminary and should never be used without the input of an experienced human translator.
Additionally, this model was trained exclusively on Tibetan Buddhist material and should not be expected to perform well on other material (e.g., vernacular Tibetan).
## Training Details
### Training Data
The training data for this model was 861,417 translation pairs from Buddhist texts. This data was collected from publicly available material as well as material generously provided by Monlam AI and the Tibetan and Himalayan Library.
### Training Procedure
The model underwent continued pretraining as well as finetuning as described below.
#### Pretraining
The model was pretrained on the training data for one epoch with a learning rate of 3e-4. The pretraining objective remained the original span corruption denoising task, in which random spans of input tokens are masked and the model is trained to reconstruct the missing content. This pretraining allowed the model to adapt to the new tokenizer and to learn the linguistic and structural characteristics of the Tibetan Buddhist materials.
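To make the span-corruption objective concrete, here is an illustrative sketch (not taken from the actual training code, which used Tibetan text and the getok tokenizer) of what a corrupted input/target pair looks like in the T5 convention:
```python
# Illustrative only: T5-style span corruption on a toy English sentence.
original = "the quick brown fox jumps over the lazy dog"

# Random spans are replaced by sentinel tokens in the input ...
corrupted_input = "the quick <extra_id_0> jumps over <extra_id_1> dog"
# ... and the target reconstructs only the masked spans, delimited by
# the same sentinels, with a final sentinel marking the end.
target = "<extra_id_0> brown fox <extra_id_1> the lazy <extra_id_2>"
```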
#### Finetuning
The model was finetuned on the translation pairs for 50 epochs using the Adafactor optimizer and an initial learning rate of 3e-4.
## Evaluation
The model was evaluated on test data with BLEU, chrF, and TER. The results are shown below.
BLEU|chrF|TER
----|----|---
3.54|19.89|87.58
These scores are exceptionally low; however, actual translation results are relatively good. Sample translations are shown below.
From *Advice on Bending Mind Toward the Good* by Khenchen Ngawang Palzang | Translated by Joseph McClellan with editorial assistance from Ninjyed N.T., 2024.
| **Original** | **Human Translation** | **Machine Translation** |
|:-------------|:----------------------|:------------------------|
| གྲུབ་བརྒྱའི་སྤྱི་མེས་པཎ་ཆེན་བི་མ་ལ། །<br>བསམ་བཞིན་སྤྲུལ་པའི་ཟློས་གར་ཉེར་བཟུང་བ། །<br>རྒྱལ་བའི་དབང་པོ་ཀློང་ཆེན་རབ་འབྱམས་པ། །<br>འདི་ཙམ་མ་ཡིན་ཚེ་རབས་གཏན་གྱི་སྐྱབས། ། | Grandsire of a hundred siddhas—great scholar, Vimalamitra,<br>And you who fully embraced the spectacle of intentional emanation,<br>Lord of conquerors, Longchen Rabjam—<br>You are my unfailing refuge; not just now, but in the concatenation of my lives. | Great paṇḍita Vimalamitra, forefather of hundreds of siddhas,<br>Manifesting in the form of a play,<br>Lord of the victorious ones, Longchen Rabjam,<br>Not just this but the constant refuge throughout all my lives, |
From *Protection from All Fears A Prayer to Ārya Tārā from the Reality Ḍākinīs’ Secret Treasury (Chönyi Khandrö Sangdzö)* by Sera Khandro | Translated by Adam Pearcey, 2025.
| **Original** | **Human Translation** | **Machine Translation** |
|:-------------|:----------------------|:------------------------|
|ཀ་དག་སྤྲོས་བྲལ་འོད་གསལ་རིག་པའི་དབྱིངས༔<br>ལྷུན་གྲུབ་སྣང་ཆ་མ་འགགས་སྒྱུ་འཕྲུལ་གར༔<br>ཐུགས་རྗེ་རྒྱལ་བ་ཀུན་གྱི་ཡུམ་གཅིག་མ༔<br>རྗེ་བཙུན་ཨཱརྱ་ཏཱ་རེ་ཚེ་སྦྱིན་དཔལ༔<br>གསོལ་བ་འདེབས་སོ་རླུང་སེམས་དབང་བསྡུས་ནས༔<br>ཚེ་དང་བསོད་ནམས་འཕེལ་བར་མཛད་དུ་གསོལ༔ | Out of the primordially pure unelaborate space of luminous awareness,<br>As the magical manifestation of unobstructed spontaneous presence,<br>Arises the compassionate one, the one and only mother of all victorious ones,<br>Noble Lady Ārya Tārā, glorious bestower of longevity,<br>To you I pray! Take control of my vital winds and mind,<br>And increase my lifespan and merit! | Within the space of awareness—primordial purity free of elaboration—<br>Illusory dance of spontaneously present appearances unceasing<br>Only mother of all the buddhas of compassion<br>Noble Ārya Tārā glorious Tārā<br>To you I pray: bringing the vāyu-mind under control<br>And increase our lifespan and merit.|
## Model Card Authors
billingsmoore
## Model Card Contact
billingsmoore[at]gmail[dot]com |
mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF | mradermacher | "2025-05-06T20:10:08Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am feathered giant ostrich",
"unsloth",
"trl",
"en",
"base_model:chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich",
"base_model:quantized:chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-06T17:46:47Z" | ---
base_model: chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich
language:
- en
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich
quantized_by: mradermacher
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am feathered giant ostrich
- unsloth
- trl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/chinna6/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.Q3_K_S.gguf) | Q3_K_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.Q2_K.gguf) | Q2_K | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.IQ4_XS.gguf) | IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.Q3_K_L.gguf) | Q3_K_L | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.Q6_K.gguf) | Q6_K | 0.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich-GGUF/resolve/main/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-feathered_giant_ostrich.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
BASF-AI/chembed-unused-trained1 | BASF-AI | "2025-05-06T20:04:53Z" | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-23T18:25:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
augustocsc/Se124M10KInfSimple | augustocsc | "2025-05-06T20:04:15Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"license:mit",
"region:us"
] | null | "2025-05-06T00:09:53Z" | ---
library_name: peft
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: Se124M10KInfSimple
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Se124M10KInfSimple
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5204
## Model description
More information needed
## Intended uses & limitations
More information needed
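The card does not document usage, but since this repo holds a PEFT adapter on top of gpt2 (per the metadata above), loading would typically look like the following sketch; the prompt is arbitrary and the adapter configuration has not been verified:
```python
# Hedged sketch: load the PEFT adapter onto the gpt2 base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("gpt2")
model = PeftModel.from_pretrained(base, "augustocsc/Se124M10KInfSimple")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

inputs = tokenizer("Once upon a time", return_tensors="pt")  # arbitrary prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```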
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.3959 | 1.0 | 237 | 0.9383 |
| 0.2182 | 2.0 | 474 | 0.6835 |
| 0.1772 | 3.0 | 711 | 0.6222 |
| 0.1676 | 4.0 | 948 | 0.5980 |
| 0.1609 | 5.0 | 1185 | 0.5840 |
| 0.1519 | 6.0 | 1422 | 0.5740 |
| 0.1535 | 7.0 | 1659 | 0.5659 |
| 0.1492 | 8.0 | 1896 | 0.5587 |
| 0.1456 | 9.0 | 2133 | 0.5575 |
| 0.1427 | 10.0 | 2370 | 0.5528 |
| 0.1442 | 11.0 | 2607 | 0.5499 |
| 0.1403 | 12.0 | 2844 | 0.5466 |
| 0.1425 | 13.0 | 3081 | 0.5449 |
| 0.1403 | 14.0 | 3318 | 0.5414 |
| 0.1405 | 15.0 | 3555 | 0.5399 |
| 0.1387 | 16.0 | 3792 | 0.5383 |
| 0.1377 | 17.0 | 4029 | 0.5368 |
| 0.1379 | 18.0 | 4266 | 0.5376 |
| 0.1353 | 19.0 | 4503 | 0.5349 |
| 0.1378 | 20.0 | 4740 | 0.5313 |
| 0.1372 | 21.0 | 4977 | 0.5320 |
| 0.135 | 22.0 | 5214 | 0.5286 |
| 0.1361 | 23.0 | 5451 | 0.5282 |
| 0.1357 | 24.0 | 5688 | 0.5287 |
| 0.1372 | 25.0 | 5925 | 0.5269 |
| 0.1343 | 26.0 | 6162 | 0.5271 |
| 0.1321 | 27.0 | 6399 | 0.5255 |
| 0.1341 | 28.0 | 6636 | 0.5240 |
| 0.1346 | 29.0 | 6873 | 0.5235 |
| 0.1327 | 30.0 | 7110 | 0.5239 |
| 0.1335 | 31.0 | 7347 | 0.5230 |
| 0.1332 | 32.0 | 7584 | 0.5228 |
| 0.1317 | 33.0 | 7821 | 0.5234 |
| 0.1331 | 34.0 | 8058 | 0.5220 |
| 0.1327 | 35.0 | 8295 | 0.5213 |
| 0.1338 | 36.0 | 8532 | 0.5204 |
| 0.1306 | 37.0 | 8769 | 0.5206 |
| 0.1317 | 38.0 | 9006 | 0.5208 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1 |
mradermacher/tiny-random-orion-GGUF | mradermacher | "2025-05-06T20:03:51Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:katuni4ka/tiny-random-orion",
"base_model:quantized:katuni4ka/tiny-random-orion",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T17:36:04Z" | ---
base_model: katuni4ka/tiny-random-orion
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/katuni4ka/tiny-random-orion
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/tiny-random-orion-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.Q5_K_S.gguf) | Q5_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.Q5_K_M.gguf) | Q5_K_M | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.Q6_K.gguf) | Q6_K | 0.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.Q8_0.gguf) | Q8_0 | 0.1 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/tiny-random-orion-GGUF/resolve/main/tiny-random-orion.f16.gguf) | f16 | 0.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DAKARA555/WanFrontViewDoggyStyle | DAKARA555 | "2025-05-06T20:02:23Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Wan-AI/Wan2.1-I2V-14B-480P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P",
"license:apache-2.0",
"region:us"
] | text-to-image | "2025-05-06T20:01:24Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/IMG_9514.PNG
base_model: Wan-AI/Wan2.1-I2V-14B-480P
instance_prompt: null
license: apache-2.0
---
# WanFrontViewDoggyStyle
<Gallery />
## Model description
https://civitai.com/models/1378353/wan-front-view-doggy-style-i2v?modelVersionId=1605427
## Download model
Weights for this model are available in Safetensors format.
[Download](/DAKARA555/WanFrontViewDoggyStyle/tree/main) them in the Files & versions tab.
|
buelfhood/irplag_codeberta_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_lora | buelfhood | "2025-05-06T19:59:52Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"region:us"
] | null | "2025-05-06T19:59:49Z" | ---
tags:
- adapter-transformers
- roberta
---
# Adapter `buelfhood/irplag_codeberta_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_lora` for huggingface/CodeBERTa-small-v1
An [adapter](https://adapterhub.ml) for the `huggingface/CodeBERTa-small-v1` model that was trained on the None dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("huggingface/CodeBERTa-small-v1")
adapter_name = model.load_adapter("buelfhood/irplag_codeberta_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_lora", set_active=True)
```
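Once the adapter is active, inference works like any sequence-classification model. A hedged sketch (the example input and the label mapping are assumptions; the card does not document them):
```python
# Hedged inference sketch; `model` comes from the snippet above.
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingface/CodeBERTa-small-v1")
inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1))  # predicted class index; labels are undocumented
```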
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
C10X/Qwen3-8B-Code-Interpreter | C10X | "2025-05-06T19:55:18Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T19:45:02Z" | ---
base_model: unsloth/qwen3-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** C10X
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-8b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_lora | buelfhood | "2025-05-06T19:50:41Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"region:us"
] | null | "2025-05-06T19:50:37Z" | ---
tags:
- roberta
- adapter-transformers
---
# Adapter `buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_lora` for microsoft/graphcodebert-base
An [adapter](https://adapterhub.ml) for the `microsoft/graphcodebert-base` model that was trained on the None dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/graphcodebert-base")
adapter_name = model.load_adapter("buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0005_l512_s42_ppy_f_beta_score_lora", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_lora | buelfhood | "2025-05-06T19:48:29Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"region:us"
] | null | "2025-05-06T19:48:27Z" | ---
tags:
- adapter-transformers
- roberta
---
# Adapter `buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_lora` for microsoft/graphcodebert-base
An [adapter](https://adapterhub.ml) for the `microsoft/graphcodebert-base` model that was trained on the None dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/graphcodebert-base")
adapter_name = model.load_adapter("buelfhood/irplag_graphcodebert_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_lora", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
swapnillo/RKD-Qwen | swapnillo | "2025-05-06T19:44:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-05-06T19:39:14Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
buelfhood/irplag_codebert_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_lora | buelfhood | "2025-05-06T19:41:49Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"roberta",
"region:us"
] | null | "2025-05-06T19:41:46Z" | ---
tags:
- adapter-transformers
- roberta
---
# Adapter `buelfhood/irplag_codebert_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_lora` for microsoft/codebert-base
An [adapter](https://adapterhub.ml) for the `microsoft/codebert-base` model that was trained on the None dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/codebert-base")
adapter_name = model.load_adapter("buelfhood/irplag_codebert_ep50_bs16_lr0_0003_l512_s42_ppy_f_beta_score_lora", set_active=True)
```
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
18-New-Tutorial-Shah-Sapna-Kumari-Viral-TV/Original.Viral.Clip.Sapna.Shah.Viral.Video.Leaks.Official | 18-New-Tutorial-Shah-Sapna-Kumari-Viral-TV | "2025-05-06T19:40:06Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-06T19:38:50Z" |
Actor Redeem Craze Original Video took the internet by storm and amazed viewers on various social media platforms. Actor Redeem Craze, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Actor Redeem Craze Original Video Viral Video Leaked on X Twitter
Actor Redeem Craze Original Video official twitter video
Leaked Video Actor Redeem Craze Original Video Viral Video Leaked on X Twitter. |
Pongsaky/llama3.2-typhoon2-1b-instruct-untagged | Pongsaky | "2025-05-06T19:38:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:scb10x/llama3.2-typhoon2-1b-instruct",
"base_model:finetune:scb10x/llama3.2-typhoon2-1b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T19:05:32Z" | ---
base_model: scb10x/llama3.2-typhoon2-1b-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Pongsaky
- **License:** apache-2.0
- **Finetuned from model :** scb10x/llama3.2-typhoon2-1b-instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Kai12341/bert-finetuned-ner | Kai12341 | "2025-05-06T19:37:28Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-05-06T19:15:41Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9305051172003962
- name: Recall
type: recall
value: 0.9486704813194211
- name: F1
type: f1
value: 0.9395
- name: Accuracy
type: accuracy
value: 0.9864308000235474
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0582
- Precision: 0.9305
- Recall: 0.9487
- F1: 0.9395
- Accuracy: 0.9864
## Model description
More information needed
## Intended uses & limitations
More information needed
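Although the card leaves this section empty, inference with the standard token-classification pipeline is the usual route; a minimal sketch (the example sentence is arbitrary):
```python
# Minimal NER inference sketch.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Kai12341/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("My name is Wolfgang and I live in Berlin."))
```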
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0765 | 1.0 | 1756 | 0.0666 | 0.9114 | 0.9350 | 0.9231 | 0.9821 |
| 0.0356 | 2.0 | 3512 | 0.0663 | 0.9325 | 0.9435 | 0.9379 | 0.9853 |
| 0.0223 | 3.0 | 5268 | 0.0582 | 0.9305 | 0.9487 | 0.9395 | 0.9864 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
Harsha2903/finetuned-en-to-hi | Harsha2903 | "2025-05-06T19:36:14Z" | 0 | 0 | transformers | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"base_model:Helsinki-NLP/opus-mt-en-hi",
"base_model:finetune:Helsinki-NLP/opus-mt-en-hi",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-05-06T17:57:46Z" | ---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-hi
tags:
- generated_from_keras_callback
model-index:
- name: Harsha2903/finetuned-en-to-hi
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Harsha2903/finetuned-en-to-hi
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-hi](https://huggingface.co/Helsinki-NLP/opus-mt-en-hi) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2984
- Validation Loss: 1.0014
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
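Although the card leaves this section empty, the checkpoint is stored in TensorFlow format (see the `tf` tag), so a hedged usage sketch would be:
```python
# Hedged sketch: English-to-Hindi translation with the TF checkpoint.
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Harsha2903/finetuned-en-to-hi")
model = TFAutoModelForSeq2SeqLM.from_pretrained("Harsha2903/finetuned-en-to-hi")

batch = tokenizer(["How are you?"], return_tensors="tf")  # arbitrary example
outputs = model.generate(**batch, max_new_tokens=40)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```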
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 8202, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.2984 | 1.0014 | 0 |
### Framework versions
- Transformers 4.48.3
- TensorFlow 2.18.0
- Datasets 3.5.1
- Tokenizers 0.21.0
|
Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-GGUF | Pinkstack | "2025-05-06T19:35:33Z" | 0 | 0 | null | [
"gguf",
"chemistry",
"code",
"math",
"grpo",
"conversational",
"moe",
"text-generation",
"en",
"base_model:Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2",
"base_model:quantized:Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T13:44:15Z" | ---
license: llama3.2
language:
- en
base_model:
- Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2
pipeline_tag: text-generation
tags:
- chemistry
- code
- math
- grpo
- conversational
- moe
- gguf
---
GGUF version. 3.91B parameters; 2 experts active out of 4 in total.
[[GGUF](https://huggingface.co/Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-GGUF) !! [Full precision](https://huggingface.co/Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2) !! [BF16](https://huggingface.co/Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-bf16)]

## INFORMATION
This is the non-experimental version of Superthoughts Lite v2, offering better accuracy on all tasks, better performance, and less looping while generating responses.
We trained it by first creating a base model for all the experts, which was fine-tuned with GRPO techniques via *Unsloth* on top of meta-llama/Llama-3.2-1B-Instruct.
After making the base model, we trained each potential expert using SFT. After SFT, we did GRPO again. In total there are 4 experts:
- Chat reasoning expert,
- Math reasoning expert,
- Code reasoning expert,
- Science reasoning expert.
By doing this, we obtained a powerful, **lite** reasoning model that is very usable for its size.
This model is a direct replacement for Pinkstack/Superthoughts-lite-v1, which was not able to generate code and had very poor text performance. V2 is much more usable.
You should use this system prompt:
```
Thinking: enabled.
Follow this format strictly:
<think>
Write your step-by-step reasoning here.
Break down the problem into smaller parts.
Solve each part systematically.
Check your work and verify the answer makes sense.
</think>
[Your final answer after thinking].
```
## Model information
The model can generate up to **16,380** tokens and has a context size of **131,072**.
It has been fine-tuned to generate thinking data between ```<think>``` XML tags. Note that it may still loop occasionally, but this is rare.
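A hedged sketch of separating the reasoning from the final answer in a response (it assumes the model followed the format above; fall back to the raw text otherwise):
```python
# Split a response into its <think> reasoning and the final answer.
import re

def split_response(text: str):
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return None, text.strip()  # model skipped the thinking block
    return match.group(1).strip(), text[match.end():].strip()

reasoning, answer = split_response("<think>2 + 2 = 4.</think>\nThe answer is 4.")
print(answer)  # -> The answer is 4.
```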
## LIMITATIONS
While some safety alignment was done by us, it was very minimal, so the model can be uncensored at times. In addition, users and providers alike should be aware that all large language models (LLMs), including this one, can hallucinate and output false information. Always double-check responses.
By using this model, you agree to the [LLAMA 3.2 COMMUNITY LICENSE](https://huggingface.co/meta-llama/Llama-3.2-1B/blob/main/LICENSE.txt).
## GGUF TEMPLATE
```
{{ if .Messages }}
{{- if or .System .Tools }}<|start_header_id|>system<|end_header_id|>
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}
You are a helpful assistant with tool calling capabilities. When you receive a tool call response, use the output to format an answer to the original use question.
{{- end }}
{{- end }}<|eot_id|>
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 }}
{{- if eq .Role "user" }}<|start_header_id|>user<|end_header_id|>
{{- if and $.Tools $last }}
Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.
Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}. Do not use variables.
{{ $.Tools }}
{{- end }}
{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}
{{- else if eq .Role "assistant" }}<|start_header_id|>assistant<|end_header_id|>
{{- if .ToolCalls }}
{{- range .ToolCalls }}{"name": "{{ .Function.Name }}", "parameters": {{ .Function.Arguments }}}{{ end }}
{{- else }}
{{ .Content }}{{ if not $last }}<|eot_id|>{{ end }}
{{- end }}
{{- else if eq .Role "tool" }}<|start_header_id|>ipython<|end_header_id|>
{{ .Content }}<|eot_id|>{{ if $last }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}
{{- end }}
{{- end }}
{{- else }}
{{- if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ end }}{{ .Response }}{{ if .Response }}<|eot_id|>{{ end }}
``` |
dgambettaphd/M_llm3_gen0_WXS_doc1000_synt64_lr1e-04_acm_MPP | dgambettaphd | "2025-05-06T19:33:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-05-06T19:30:09Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Stoyet/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-waddling_darting_badger | Stoyet | "2025-05-06T19:32:25Z" | 13 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am waddling darting badger",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-05T08:36:42Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-waddling_darting_badger
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am waddling darting badger
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-waddling_darting_badger
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Stoyet/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-waddling_darting_badger", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
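The card does not ship the actual RL-swarm training script, so for orientation here is a minimal GRPO sketch with TRL; the dataset and the toy length reward below are illustrative assumptions, not the real swarm setup.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative prompt dataset; the actual swarm training data is not documented here
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward for demonstration: prefer completions close to 20 characters
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```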
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mahsharyahan/allegro_herbert_PL | mahsharyahan | "2025-05-06T19:31:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:allegro/herbert-base-cased",
"base_model:finetune:allegro/herbert-base-cased",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-05-06T19:31:25Z" | ---
library_name: transformers
license: cc-by-4.0
base_model: allegro/herbert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: allegro_herbert_PL
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# allegro_herbert_PL
This model is a fine-tuned version of [allegro/herbert-base-cased](https://huggingface.co/allegro/herbert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2589
- Accuracy: 0.9429
- F1: 0.9643
- Precision: 0.9310
- Recall: 1.0
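The card includes no usage snippet, so here is a minimal inference sketch; the example sentence is illustrative, and the label semantics depend on the undocumented training data.
```python
from transformers import pipeline

# Minimal inference sketch; label meanings depend on the (undocumented) training data
classifier = pipeline("text-classification", model="mahsharyahan/allegro_herbert_PL")
print(classifier("To jest przykładowe polskie zdanie."))
```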
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 35 | 0.2833 | 0.8857 | 0.9310 | 0.8710 | 1.0 |
| No log | 2.0 | 70 | 0.2254 | 0.9429 | 0.9643 | 0.9310 | 1.0 |
| No log | 3.0 | 105 | 0.2257 | 0.9429 | 0.9643 | 0.9310 | 1.0 |
| No log | 4.0 | 140 | 0.2589 | 0.9429 | 0.9643 | 0.9310 | 1.0 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
DAKARA555/test7 | DAKARA555 | "2025-05-06T19:27:30Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Wan-AI/Wan2.1-I2V-14B-480P",
"base_model:adapter:Wan-AI/Wan2.1-I2V-14B-480P",
"license:apache-2.0",
"region:us"
] | text-to-image | "2025-05-06T19:27:15Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/IMG_9514.PNG
base_model: Wan-AI/Wan2.1-I2V-14B-480P
instance_prompt: null
license: apache-2.0
---
# test7
<Gallery />
## Model description
test7
## Download model
Weights for this model are available in Safetensors format.
[Download](/DAKARA555/test7/tree/main) them in the Files & versions tab.
|
UF-NLPC-Lab/bart-stance-mixed | UF-NLPC-Lab | "2025-05-06T19:24:19Z" | 0 | 0 | null | [
"safetensors",
"bart_enc_for_stance",
"custom_code",
"license:mit",
"region:us"
] | null | "2025-05-06T13:01:18Z" | ---
license: mit
---
See our [Example.ipynb](./Example.ipynb)
## Model Overview
Trained and evaluated on mixed (noun-phrase and claim) targets from subtask A of EZStance:
```
@inproceedings{zhao-caragea-2024-ez,
title = "{EZ}-{STANCE}: A Large Dataset for {E}nglish Zero-Shot Stance Detection",
author = "Zhao, Chenye and
Caragea, Cornelia",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.838/",
doi = "10.18653/v1/2024.acl-long.838",
pages = "15697--15714",
}
```
We used the `BART-MNLI-e` architecture from the same paper.
Weights were initialized from [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli).
We obtained a macro F1 score of 0.82 on the test data (see [exp\_results/metrics.csv](exp_results/metrics.csv)).
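A minimal loading sketch, assuming the repo's custom `bart_enc_for_stance` code registers with the sequence-classification Auto class; see [Example.ipynb](./Example.ipynb) for the authors' intended usage.
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# The repo ships custom modeling code, so trust_remote_code=True is required.
# Whether the custom class maps to this Auto class is an assumption.
tokenizer = AutoTokenizer.from_pretrained("UF-NLPC-Lab/bart-stance-mixed")
model = AutoModelForSequenceClassification.from_pretrained(
    "UF-NLPC-Lab/bart-stance-mixed", trust_remote_code=True
)
```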
## Dependencies
- `python>=3.9.22`
- `transformers>=4.51.0`
- `accelerate>=0.26.0`
- `torch>=2.7.0`
|
khutchi2/bart-cnn-samsum-finetuned | khutchi2 | "2025-05-06T19:23:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:samsum",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-05-06T19:21:57Z" | ---
library_name: transformers
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: bart-cnn-samsum-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-cnn-samsum-finetuned
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the samsum dataset.
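A minimal summarization sketch for this checkpoint; the dialogue below is illustrative.
```python
from transformers import pipeline

# Minimal summarization sketch; the dialogue is an illustrative placeholder
summarizer = pipeline("summarization", model="khutchi2/bart-cnn-samsum-finetuned")
dialogue = "Anna: Lunch at noon?\nTom: Sure, see you at the usual place."
print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```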
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Barcelona-Inter-En-Vivo-EN-DIRECTO/Ver.Gratis.Barcelona.Inter.En.Vivo.Directo.Online.Gratis | Barcelona-Inter-En-Vivo-EN-DIRECTO | "2025-05-06T19:22:55Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-06T19:15:39Z" | ⚽📺📱👉◄◄🔴 https://tinyurl.com/mtbv4nys
LINK, ESPN LIVE TODAY | Watch Inter-Barcelona on Movistar Champions League
ESPN live: streaming the Inter vs. Barcelona match in the Champions League semifinal second leg.
Barcelona vs. Inter live: what time they play and where to watch the Champions League semifinal
Where to watch Barcelona vs. Inter in the Champions League |
samaaessa/RLCourse | samaaessa | "2025-05-06T19:22:39Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-05-06T19:22:19Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.92 +/- 26.00
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the Files & versions tab):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from this repo and load it.
# The filename is an assumption; adjust it to match the repo contents.
checkpoint = load_from_hub("samaaessa/RLCourse", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
buelfhood/progpedia56_plbart_ep50_bs16_lr1e-05_l512_s42_ppy_f_beta_score | buelfhood | "2025-05-06T19:21:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"plbart",
"text-classification",
"generated_from_trainer",
"base_model:uclanlp/plbart-base",
"base_model:finetune:uclanlp/plbart-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-05-06T19:20:55Z" | ---
library_name: transformers
base_model: uclanlp/plbart-base
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: progpedia56_plbart_ep50_bs16_lr1e-05_l512_s42_ppy_f_beta_score
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# progpedia56_plbart_ep50_bs16_lr1e-05_l512_s42_ppy_f_beta_score
This model is a fine-tuned version of [uclanlp/plbart-base](https://huggingface.co/uclanlp/plbart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
- Recall: 1.0
- Precision: 1.0
- F1: 1.0
- F Beta Score: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 | F Beta Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:---:|:------------:|
| 0.0638 | 1.0 | 272 | 0.0001 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 2.0 | 544 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 3.0 | 816 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 4.0 | 1088 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 5.0 | 1360 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
| 0.0 | 6.0 | 1632 | 0.0000 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.8.0.dev20250319+cu128
- Datasets 3.1.0
- Tokenizers 0.21.1
|
Alphatao/b3dde8bb-e181-4430-ab0f-d902101f7669 | Alphatao | "2025-05-06T19:20:14Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"unsloth",
"conversational",
"arxiv:2305.18290",
"base_model:unsloth/tinyllama",
"base_model:finetune:unsloth/tinyllama",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T18:51:07Z" | ---
base_model: unsloth/tinyllama
library_name: transformers
model_name: b3dde8bb-e181-4430-ab0f-d902101f7669
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
- unsloth
licence: license
---
# Model Card for b3dde8bb-e181-4430-ab0f-d902101f7669
This model is a fine-tuned version of [unsloth/tinyllama](https://huggingface.co/unsloth/tinyllama).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alphatao/b3dde8bb-e181-4430-ab0f-d902101f7669", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alphatao-alphatao/Gradients-On-Demand/runs/yq406j2h)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
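For orientation, a minimal DPO sketch with TRL; the preference dataset below is an illustrative assumption, since the card does not name the actual training data.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("unsloth/tinyllama")
tokenizer = AutoTokenizer.from_pretrained("unsloth/tinyllama")

# Illustrative preference dataset with "chosen"/"rejected" columns
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", logging_steps=10),
    train_dataset=train_dataset,
    processing_class=tokenizer,  # named `tokenizer` in older TRL releases
)
trainer.train()
```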
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mlfoundations-dev/e1_science_longest_phi_10k | mlfoundations-dev | "2025-05-06T19:18:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T06:07:04Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: e1_science_longest_phi_10k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# e1_science_longest_phi_10k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/e1_science_longest_phi_10k dataset.
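A minimal chat-style generation sketch in the style of the other cards in this dump; the prompt is illustrative.
```python
from transformers import pipeline

# Minimal generation sketch; the prompt is an illustrative placeholder
generator = pipeline("text-generation", model="mlfoundations-dev/e1_science_longest_phi_10k", device="cuda")
messages = [{"role": "user", "content": "Briefly explain why the sky is blue."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```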
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
infogeo/efad29ee-918a-4b0d-8c84-fec8c82c2d6b | infogeo | "2025-05-06T19:17:38Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-06T19:04:36Z" | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: efad29ee-918a-4b0d-8c84-fec8c82c2d6b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 9a3281e2ebea8f28_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9a3281e2ebea8f28_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: infogeo/efad29ee-918a-4b0d-8c84-fec8c82c2d6b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/9a3281e2ebea8f28_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66c794de-8c35-4655-b453-9b4ec0f12fa4
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 66c794de-8c35-4655-b453-9b4ec0f12fa4
warmup_steps: 20
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# efad29ee-918a-4b0d-8c84-fec8c82c2d6b
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.3622 | 0.0306 | 400 | 1.5190 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
joboffer/2a7e8757-96e2-437d-8c70-740cae6667c3 | joboffer | "2025-05-06T19:15:17Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-06T19:04:27Z" | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2a7e8757-96e2-437d-8c70-740cae6667c3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 9a3281e2ebea8f28_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9a3281e2ebea8f28_train_data.json
type:
field_input: input
field_instruction: instruction
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.5
group_by_length: false
hub_model_id: joboffer/2a7e8757-96e2-437d-8c70-740cae6667c3
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 300
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/9a3281e2ebea8f28_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 66c794de-8c35-4655-b453-9b4ec0f12fa4
wandb_project: s56-33
wandb_run: your_name
wandb_runid: 66c794de-8c35-4655-b453-9b4ec0f12fa4
warmup_steps: 15
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2a7e8757-96e2-437d-8c70-740cae6667c3
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3427
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 15
- training_steps: 300
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.0552 | 0.0229 | 300 | 1.3427 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Asap7772/qwen3_4blrablation_filtered_0503_lr5e5 | Asap7772 | "2025-05-06T19:14:59Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T19:06:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Asap7772/qwen3_4blrablation_filtered_0503_lr1e6 | Asap7772 | "2025-05-06T19:14:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T19:06:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Asap7772/qwen3_4blrablation_filtered_0503_lr1e5 | Asap7772 | "2025-05-06T19:14:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T19:06:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ltgbao/Qwen3-32b-r256-4bit-Pentest-CoT-LoRA-test2 | ltgbao | "2025-05-06T19:13:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2025-05-06T19:12:37Z" | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jorgeis1/babyllama-10midk | Jorgeis1 | "2025-05-06T19:11:52Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T19:11:23Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nasim01/whisper-small-dv | nasim01 | "2025-05-06T19:07:49Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-05-06T18:16:50Z" | ---
library_name: transformers
language:
- dv
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
model-index:
- name: Whisper Small Dv - Sanchit Gandhi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv - Sanchit Gandhi
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
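A minimal transcription sketch; note the results table below reports 100% WER after only 5 training steps, so treat this checkpoint as a training-pipeline demo rather than a usable ASR model.
```python
from transformers import pipeline

# Minimal transcription sketch; the audio path is a placeholder
asr = pipeline("automatic-speech-recognition", model="nasim01/whisper-small-dv")
print(asr("sample.wav")["text"])
```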
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.01
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 5
- training_steps: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:------:|:----:|:---------------:|:---------:|:-----:|
| No log | 0.0163 | 5 | 13.6877 | 100.0 | 100.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
turve69/Turve | turve69 | "2025-05-06T19:03:03Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-05-06T18:22:37Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: KRL
---
# Turve
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `KRL` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "KRL",
"lora_weights": "https://huggingface.co/turve69/Turve/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('turve69/Turve', weight_name='lora.safetensors')
image = pipeline('KRL').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/turve69/Turve/discussions) to add images that show off what you’ve made with this LoRA.
|
AvaLovelace/LLaMA-ASCII-Art | AvaLovelace | "2025-05-06T19:02:42Z" | 184 | 0 | peft | [
"peft",
"safetensors",
"text-generation",
"conversational",
"en",
"dataset:AvaLovelace/ASCII-Art",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-1B-Instruct",
"region:us"
] | text-generation | "2025-04-18T23:57:59Z" | ---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: peft
pipeline_tag: text-generation
datasets:
- AvaLovelace/ASCII-Art
language:
- en
---
# Model Card for LLaMA-ASCII-Art
15-780 Final Project. A Llama-3.2-1B-Instruct model fine-tuned to generate ASCII art.
## Training Details
### Training Data
The model was trained on the [ASCII-Art](https://huggingface.co/datasets/AvaLovelace/ASCII-Art) dataset.
### Training Procedure
The model was fine-tuned using LoRA and AdamW optimization. The learning rate followed a cosine decay with warmup; a matching `LoraConfig` sketch follows the hyperparameter list below.
#### Training Hyperparameters
- **Training regime:** bf16 mixed precision
- **Epochs:** 10
- **Batch size:** 2
- **Max learning rate:** 5e-4
- **Learning rate warmup steps:** 100
- **LoRA rank:** 32
- **LoRA alpha:** 16
- **LoRA dropout:** 0.05
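A minimal PEFT configuration matching these values (the target modules are an assumption, since the card does not list them):
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],  # assumed; not specified in the card
)
```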
#### Speeds, Sizes, Times [optional]
Fine-tuning took approximately 1 hour on one NVIDIA RTX A6000 (48GB).
### Framework versions
- PEFT 0.15.2 |
mdrobs14/constitutional-law-llama | mdrobs14 | "2025-05-06T19:02:35Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T19:01:52Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
microsoft/Phi-4-reasoning-plus | microsoft | "2025-05-06T19:01:07Z" | 5,024 | 206 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"phi",
"nlp",
"math",
"code",
"chat",
"conversational",
"reasoning",
"en",
"arxiv:2504.21318",
"base_model:microsoft/phi-4",
"base_model:finetune:microsoft/phi-4",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-17T20:51:32Z" | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-reasoning-plus/resolve/main/LICENSE
language:
- en
base_model:
- microsoft/phi-4
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- reasoning
inference:
parameters:
temperature: 0
widget:
- messages:
- role: user
content: What is the derivative of x^2?
library_name: transformers
---
# Phi-4-reasoning-plus Model Card
[Phi-4-reasoning Technical Report](https://huggingface.co/papers/2504.21318)
## Model Summary
| | |
|-------------------------|-------------------------------------------------------------------------------|
| **Developers** | Microsoft Research |
| **Description** | Phi-4-reasoning-plus is a state-of-the-art open-weight reasoning model finetuned from Phi-4 using supervised fine-tuning on a dataset of chain-of-thought traces and reinforcement learning. The supervised fine-tuning dataset includes a blend of synthetic prompts and high-quality filtered data from public domain websites, focused on math, science, and coding skills as well as alignment data for safety and Responsible AI. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning. Phi-4-reasoning-plus has been trained additionally with Reinforcement Learning, hence, it has higher accuracy but generates on average 50% more tokens, thus having higher latency. |
| **Architecture** | Base model same as previously released Phi-4, 14B parameters, dense decoder-only Transformer model |
| **Inputs** | Text, best suited for prompts in the chat format |
| **Context length** | 32k tokens |
| **GPUs** | 32 H100-80G |
| **Training time** | 2.5 days |
| **Training data** | 16B tokens, ~8.3B unique tokens |
| **Outputs** | Generated text in response to the input. Model responses have two sections, namely, a reasoning chain-of-thought block followed by a summarization block |
| **Dates** | January 2025 – April 2025 |
| **Status** | Static model trained on an offline dataset with cutoff dates of March 2025 and earlier for publicly available data |
| **Release date** | April 30, 2025 |
| **License** | MIT |
## Intended Use
| | |
|-------------------------------|-------------------------------------------------------------------------|
| **Primary Use Cases** | Our model is designed to accelerate research on language models, for use as a building block for generative AI powered features. It provides uses for general purpose AI systems and applications (primarily in English) which require:<br><br>1. Memory/compute constrained environments.<br>2. Latency bound scenarios.<br>3. Reasoning and logic. |
| **Out-of-Scope Use Cases** | This model is designed and tested for math reasoning only. Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case, including the model’s focus on English. Review the Responsible AI Considerations section below for further guidance when choosing a use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under. |
## Usage
> [!IMPORTANT]
> To fully take advantage of the model's capabilities, inference must use `temperature=0.8`, `top_p=0.95`, and `do_sample=True`. For more complex queries, set `max_new_tokens=32768` to allow for longer chain-of-thought (CoT).
*Phi-4-reasoning-plus has shown strong performance on reasoning-intensive tasks. In our experiments, we extended its maximum number of tokens to 64k, and it handled longer sequences with promising results, maintaining coherence and logical consistency over extended inputs. This makes it a compelling option to explore for tasks that require deep, multi-step reasoning or extensive context.*
### Input Formats
Given the nature of the training data, **always use** ChatML template with the **following system prompt** for inference:
```bash
<|im_start|>system<|im_sep|>
You are Phi, a language model trained by Microsoft to help users. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> {Thought section} </think> {Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:<|im_end|>
<|im_start|>user<|im_sep|>
What is the derivative of x^2?<|im_end|>
<|im_start|>assistant<|im_sep|>
```
### With `transformers`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-4-reasoning-plus")
model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-4-reasoning-plus", device_map="auto", torch_dtype="auto")
messages = [
{"role": "system", "content": "You are Phi, a language model trained by Microsoft to help users. Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: <think> {Thought section} </think> {Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:"},
{"role": "user", "content": "What is the derivative of x^2?"},
]
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(
inputs.to(model.device),
max_new_tokens=4096,
temperature=0.8,
top_p=0.95,
do_sample=True,
)
print(tokenizer.decode(outputs[0]))
```
### With `vllm`
```bash
vllm serve microsoft/Phi-4-reasoning-plus --enable-reasoning --reasoning-parser deepseek_r1
```
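Once served, the model can be queried through the OpenAI-compatible endpoint that vLLM exposes (the address below is vLLM's default and may differ in your deployment):
```python
from openai import OpenAI

# vLLM's default OpenAI-compatible server address (assumes the default port 8000)
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="microsoft/Phi-4-reasoning-plus",
    messages=[{"role": "user", "content": "What is the derivative of x^2?"}],
    temperature=0.8,
    top_p=0.95,
)
print(response.choices[0].message.content)
```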
*Phi-4-reasoning-plus is also supported out-of-the-box by Ollama, llama.cpp, and any Phi-4 compatible framework.*
## Data Overview
### Training Datasets
Our training data is a mixture of Q&A, chat format data in math, science, and coding. The chat prompts are sourced from filtered high-quality web data and optionally rewritten and processed through a synthetic data generation pipeline. We further include data to improve truthfulness and safety.
### Benchmark Datasets
We evaluated Phi-4-reasoning-plus using the open-source [Eureka](https://github.com/microsoft/eureka-ml-insights) evaluation suite and our own internal benchmarks to understand the model's capabilities. More specifically, we evaluate our model on:
Reasoning tasks:
* **AIME 2025, 2024, 2023, and 2022:** Math olympiad questions.
* **GPQA-Diamond:** Complex, graduate-level science questions.
* **OmniMath:** Collection of over 4000 olympiad-level math problems with human annotation.
* **LiveCodeBench:** Code generation benchmark gathered from competitive coding contests.
* **3SAT (3-literal Satisfiability Problem) and TSP (Traveling Salesman Problem):** Algorithmic problem solving.
* **BA Calendar:** Planning.
* **Maze and SpatialMap:** Spatial understanding.
General-purpose benchmarks:
* **Kitab:** Information retrieval.
* **IFEval and ArenaHard:** Instruction following.
* **PhiBench:** Internal benchmark.
* **FlenQA:** Impact of prompt length on model performance.
* **HumanEvalPlus:** Functional code generation.
* **MMLU-Pro:** Popular aggregated dataset for multitask language understanding.
## Safety
### Approach
Phi-4-reasoning-plus has adopted a robust safety post-training approach via supervised fine-tuning (SFT). This approach leverages a variety of both open-source and in-house generated synthetic prompts, with LLM-generated responses that adhere to rigorous Microsoft safety guidelines, e.g., User Understanding and Clarity, Security and Ethical Guidelines, Limitations, Disclaimers and Knowledge Scope, Handling Complex and Sensitive Topics, Safety and Respectful Engagement, Confidentiality of Guidelines and Confidentiality of Chain-of-Thoughts.
### Safety Evaluation and Red-Teaming
Prior to release, Phi-4-reasoning-plus followed a multi-faceted evaluation approach. Quantitative evaluation was conducted with multiple open-source safety benchmarks and in-house tools utilizing adversarial conversation simulation. For qualitative safety evaluation, we collaborated with the independent AI Red Team (AIRT) at Microsoft to assess safety risks posed by Phi-4-reasoning-plus in both average and adversarial user scenarios. In the average user scenario, AIRT emulated typical single-turn and multi-turn interactions to identify potentially risky behaviors. The adversarial user scenario tested a wide range of techniques aimed at intentionally subverting the model's safety training including grounded-ness, jailbreaks, harmful content like hate and unfairness, violence, sexual content, or self-harm, and copyright violations for protected material. We further evaluate models on Toxigen, a benchmark designed to measure bias and toxicity targeted towards minority groups.
Please refer to the technical report for more details on safety alignment.
## Model Quality
Below is a high-level overview of model quality on representative benchmarks. In the tables below, higher numbers indicate better performance:
| | AIME 24 | AIME 25 | OmniMath | GPQA-D | LiveCodeBench (8/1/24–2/1/25) |
|-----------------------------|-------------|-------------|-------------|------------|-------------------------------|
| Phi-4-reasoning | 75.3 | 62.9 | 76.6 | 65.8 | 53.8 |
| Phi-4-reasoning-plus | 81.3 | 78.0 | 81.9 | 68.9 | 53.1 |
| OpenThinker2-32B | 58.0 | 58.0 | — | 64.1 | — |
| QwQ 32B | 79.5 | 65.8 | — | 59.5 | 63.4 |
| EXAONE-Deep-32B | 72.1 | 65.8 | — | 66.1 | 59.5 |
| DeepSeek-R1-Distill-70B | 69.3 | 51.5 | 63.4 | 66.2 | 57.5 |
| DeepSeek-R1 | 78.7 | 70.4 | 85.0 | 73.0 | 62.8 |
| o1-mini | 63.6 | 54.8 | — | 60.0 | 53.8 |
| o1 | 74.6 | 75.3 | 67.5 | 76.7 | 71.0 |
| o3-mini | 88.0 | 78.0 | 74.6 | 77.7 | 69.5 |
| Claude-3.7-Sonnet | 55.3 | 58.7 | 54.6 | 76.8 | — |
| Gemini-2.5-Pro | 92.0 | 86.7 | 61.1 | 84.0 | 69.2 |
| | Phi-4 | Phi-4-reasoning | Phi-4-reasoning-plus | o3-mini | GPT-4o |
|----------------------------------------|-------|------------------|-------------------|---------|--------|
| FlenQA [3K-token subset] | 82.0 | 97.7 | 97.9 | 96.8 | 90.8 |
| IFEval Strict | 62.3 | 83.4 | 84.9 | 91.5 | 81.8 |
| ArenaHard | 68.1 | 73.3 | 79.0 | 81.9 | 75.6 |
| HumanEvalPlus | 83.5 | 92.9 | 92.3 | 94.0| 88.0 |
| MMLUPro | 71.5 | 74.3 | 76.0 | 79.4 | 73.0 |
| Kitab<br><small>No Context - Precision<br>With Context - Precision<br>No Context - Recall<br>With Context - Recall</small> | <br>19.3<br>88.5<br>8.2<br>68.1 | <br>23.2<br>91.5<br>4.9<br>74.8 | <br>27.6<br>93.6<br>6.3<br>75.4 | <br>37.9<br>94.0<br>4.2<br>76.1 | <br>53.7<br>84.7<br>20.3<br>69.2 |
| Toxigen Discriminative<br><small>Toxic category<br>Neutral category</small> | <br>72.6<br>90.0 | <br>86.7<br>84.7 | <br>77.3<br>90.5 | <br>85.4<br>88.7 | <br>87.6<br>85.1 |
| PhiBench 2.21 | 58.2 | 70.6 | 74.2 | 78.0| 72.4 |
Overall, Phi-4-reasoning and Phi-4-reasoning-plus, with only 14B parameters, perform well across a wide range of reasoning tasks, outperforming significantly larger open-weight models such as the DeepSeek-R1-Distill-70B model and approaching the performance levels of the full DeepSeek-R1 model. We also test the models on multiple new reasoning benchmarks for algorithmic problem solving and planning, including 3SAT, TSP, and BA-Calendar. These new tasks are nominally out-of-domain for the models, as the training process did not intentionally target these skills, but the models still show strong generalization to them. Furthermore, when evaluating performance against standard general-ability benchmarks such as instruction following or non-reasoning tasks, we find that our new models improve significantly over Phi-4, despite the post-training being focused on reasoning skills in specific domains.
## Responsible AI Considerations
Like other language models, Phi-4-reasoning-plus can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
* **Quality of Service:** The model is trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. Phi-4-reasoning-plus is not intended to support multilingual use.
* **Representation of Harms & Perpetuation of Stereotypes:** These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
* **Inappropriate or Offensive Content:** These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
* **Information Reliability:** Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
* **Election Information Reliability:** The model has an elevated defect rate when responding to election-critical queries, which may result in incorrect or unauthoritative election critical information being presented. We are working to improve the model's performance in this area. Users should verify information related to elections with the election authority in their region.
* **Limited Scope for Code:** Majority of Phi-4-reasoning-plus training data is based in Python and uses common packages such as `typing`, `math`, `random`, `collections`, `datetime`, `itertools`. If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Using safety services like [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety) that have advanced guardrails is highly recommended. Important areas for consideration include:
* **Allocation:** Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
* **High-Risk Scenarios:** Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
* **Misinformation:** Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
* **Generation of Harmful Content:** Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
* **Misuse:** Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations. |
Zack-Z/qwen3_4bi_cotsft_rs0_2_5cut_cot2_e2 | Zack-Z | "2025-05-06T18:58:39Z" | 0 | 0 | transformers | [
"transformers",
"qwen3",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-05-06T18:26:30Z" | ---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
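As a rough loading sketch using Unsloth's fast loader (the sequence length and 4-bit flag below are assumptions, the latter suggested by "4bi" in the repo name):
```python
from unsloth import FastLanguageModel

# Assumed loading parameters — adjust to your hardware and use case
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Zack-Z/qwen3_4bi_cotsft_rs0_2_5cut_cot2_e2",
    max_seq_length=4096,   # assumption
    load_in_4bit=True,     # assumption
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
```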
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
khalednabawi11/clip-cyclegan-roco | khalednabawi11 | "2025-05-06T18:55:33Z" | 0 | 0 | null | [
"safetensors",
"clip-cyclegan",
"custom_code",
"region:us"
] | null | "2025-05-06T17:10:47Z" |
# CLIPCycleGAN-Roco
This repository contains the implementation and pretrained weights for a **CLIP-based CycleGAN** model trained on the MedICaT-Roco dataset. The model can generate images from text captions and vice versa, using a combination of image and text encoders, a generator, and a discriminator.
### Model Overview
The model is based on the CycleGAN framework: it learns to generate images from textual descriptions and, conversely, to produce textual representations of images. The key components of the model are:
- **Image Encoder**: Uses a ResNet50 model to extract image features.
- **Text Encoder**: Uses a BERT model to encode textual captions into embeddings.
- **Generator**: Transforms the encoded text embeddings into images.
- **Discriminator**: Differentiates between real and generated images.
The model has been trained for tasks such as **image-to-text generation** and **text-to-image translation**, leveraging a CLIP-like architecture for understanding and generating multimodal data.
---
## Table of Contents
- [Installation](#installation)
- [Model Architecture](#model-architecture)
- [How to Use](#how-to-use)
- [Inference Example](#inference-example)
- [Fine-Tuning](#fine-tuning)
- [Licensing](#licensing)
---
## Installation
To use the model, you need to install the required dependencies. You can use the following command to install them:
```bash
pip install torch torchvision transformers tqdm
```
---
## Model Architecture
The model consists of four primary components:
1. **Image Encoder**: Extracts features from images using a ResNet50 backbone.
2. **Text Encoder**: Encodes input textual descriptions using a pretrained BERT model.
3. **Generator**: Generates images based on encoded text features.
4. **Discriminator**: Differentiates between real and generated images.
---
## How to Use
You can load the pretrained model directly from Hugging Face's Model Hub and use it for inference. Here's how to do that:
### 1. Install the Transformers library
Make sure you have the `transformers` library installed:
```bash
pip install transformers
```
### 2. Load the Model
To load the model, simply use the following code:
```python
from transformers import AutoModel

# This repo ships custom modeling code (note the `custom_code` tag), so loading
# goes through AutoModel with trust_remote_code=True rather than a named class
model = AutoModel.from_pretrained("khalednabawi11/clip-cyclegan-roco", trust_remote_code=True)
model.eval()
```
### 3. Prepare the Input
The model expects images and text input as follows:
- **Images**: A tensor of shape `(batch_size, 3, height, width)` for a batch of images.
- **Text**: Tokenized input text using a tokenizer (BERT tokenizer).
For the text input, you can use the BERT tokenizer:
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Example text caption
caption = "A dog running in the park"
inputs = tokenizer(caption, return_tensors="pt", padding=True, truncation=True)
```
### 4. Perform Inference
Once the model and inputs are ready, you can perform inference:
```python
import torch

# Example image tensor of shape (batch_size, 3, height, width); a random batch
# stands in here — 224x224 is an assumption based on the ResNet50 backbone
image_tensor = torch.randn(1, 3, 224, 224)  # replace with real preprocessed images
# Run the model
outputs = model(images=image_tensor, input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'])
generated_images = outputs['generated'] # Generated images based on the caption
score_real = outputs['score_real'] # Discriminator score for real images
score_fake = outputs['score_fake'] # Discriminator score for generated images
```
---
## Fine-Tuning
If you want to fine-tune the model on your custom dataset, you can do so by training the `ImageEncoder`, `TextEncoder`, `Generator`, and `Discriminator` on your data using the CycleGAN objective.
You can follow the steps below for fine-tuning:
1. Prepare your dataset with paired image-text data.
2. Update the training loop to load your data.
3. Use the CycleGAN loss (e.g., adversarial loss, cycle consistency loss) during training, as sketched below.
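A minimal sketch of one such training step, assuming the four components described above are exposed as attributes (`text_encoder`, `image_encoder`, `generator`, `discriminator`) and that image and text embeddings share a common dimension — the actual attribute names in this repo may differ:
```python
import torch
import torch.nn.functional as F

def training_step(model, images, input_ids, attention_mask, g_opt, d_opt, lam=10.0):
    # Encode the caption and generate an image from it
    text_emb = model.text_encoder(input_ids, attention_mask)
    fake_images = model.generator(text_emb)

    # Discriminator update: real images -> 1, generated images -> 0
    real_scores = model.discriminator(images)
    fake_scores = model.discriminator(fake_images.detach())
    d_loss = F.binary_cross_entropy_with_logits(real_scores, torch.ones_like(real_scores)) \
           + F.binary_cross_entropy_with_logits(fake_scores, torch.zeros_like(fake_scores))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: adversarial term plus cycle-consistency term
    fake_scores = model.discriminator(fake_images)
    adv_loss = F.binary_cross_entropy_with_logits(fake_scores, torch.ones_like(fake_scores))
    # Cycle term: re-encoding the generated image should recover the text embedding
    cycle_loss = F.l1_loss(model.image_encoder(fake_images), text_emb)
    g_loss = adv_loss + lam * cycle_loss
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

    return d_loss.item(), g_loss.item()
```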
---
## Licensing
This model is released under the MIT license. Please refer to the `LICENSE` file for more information.
---
## How to Pull the Model
To pull the model from the Hugging Face Hub, use the following code:
```python
from transformers import AutoModel

# Load the pretrained model directly from Hugging Face (custom code requires trust_remote_code=True)
model = AutoModel.from_pretrained("khalednabawi11/clip-cyclegan-roco", trust_remote_code=True)
```
---
**Enjoy using the CLIPCycleGAN model! Feel free to contribute and improve upon it.**
|
ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n20_seed47639 | ssundaram | "2025-05-06T18:55:13Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n20_seed47639",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T18:51:06Z" | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n20_seed47639
library_name: transformers
model_name: Qwen2.5-Math-PRM-7B_gsm8k_n20_seed47639
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-Math-PRM-7B_gsm8k_n20_seed47639
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n20_seed47639](https://huggingface.co/datasets/ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n20_seed47639) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n20_seed47639", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shobhita/huggingface/runs/es2a9fb9)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
sunnytqin/toy-multistep-v2-nn_250-na_200 | sunnytqin | "2025-05-06T18:54:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T18:53:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
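Pending details from the authors, a generic loading sketch inferred from the repository tags (`qwen2`, text-generation):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from this model page; architecture inferred from the `qwen2` tag
tokenizer = AutoTokenizer.from_pretrained("sunnytqin/toy-multistep-v2-nn_250-na_200")
model = AutoModelForCausalLM.from_pretrained("sunnytqin/toy-multistep-v2-nn_250-na_200")
```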
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n3_seed47639 | ssundaram | "2025-05-06T18:53:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n3_seed47639",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T01:12:35Z" | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n3_seed47639
library_name: transformers
model_name: Qwen2.5-Math-PRM-7B_gsm8k_n3_seed47639
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen2.5-Math-PRM-7B_gsm8k_n3_seed47639
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n3_seed47639](https://huggingface.co/datasets/ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n3_seed47639) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ssundaram/Qwen2.5-Math-PRM-7B_gsm8k_n3_seed47639", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/shobhita/huggingface/runs/vsjqt4nz)
This model was trained with SFT.
### Framework versions
- TRL: 0.17.0.dev0
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |