modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-31 00:44:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 538 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-31 00:42:51) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
smdesai/granite-3.3-2b-instruct-4bit-DWQ
|
smdesai
| 2025-06-23T03:21:14Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"granite",
"language",
"granite-3.3",
"text-generation",
"conversational",
"base_model:ibm-granite/granite-3.3-2b-instruct",
"base_model:quantized:ibm-granite/granite-3.3-2b-instruct",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-23T03:20:10Z |
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
library_name: mlx
tags:
- language
- granite-3.3
- mlx
base_model: ibm-granite/granite-3.3-2b-instruct
---
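A common way to run a 4-bit MLX build like this one is with `mlx-lm`; the snippet below is a minimal sketch and is not part of the original card (prompt and token limit are arbitrary):
```python
# Hedged sketch (not from the original card): run the 4-bit MLX model with mlx-lm.
from mlx_lm import load, generate

model, tokenizer = load("smdesai/granite-3.3-2b-instruct-4bit-DWQ")
prompt = "Give me a one-sentence summary of the IBM Granite 3.3 models."  # placeholder prompt
response = generate(model, tokenizer, prompt=prompt, max_tokens=128, verbose=True)
```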
|
New-videos-Anjali-Arora-viral-Clips/FULL.VIDEO.Anjali.Arora.Viral.Video.Tutorial.Official
|
New-videos-Anjali-Arora-viral-Clips
| 2025-06-23T03:19:58Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-23T03:19:37Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
NamVo/qwen_r1_mini_unsloth
|
NamVo
| 2025-06-23T03:19:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T03:19:33Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
library_name: transformers
model_name: qwen_r1_mini_unsloth
tags:
- generated_from_trainer
- unsloth
- trl
- grpo
licence: license
---
# Model Card for qwen_r1_mini_unsloth
This model is a fine-tuned version of [unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="NamVo/qwen_r1_mini_unsloth", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nvoz1812/huggingface/runs/yzt4h8rz)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
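For reference, a GRPO run with TRL generally follows the pattern below; the dataset and reward function here are illustrative placeholders, not this model's actual training setup.
```python
# Hypothetical sketch of a GRPO run with TRL; dataset and reward are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit",  # base model from this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen_r1_mini_unsloth-GRPO"),
    train_dataset=dataset,
)
trainer.train()
```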
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
amildravid4292/llava-llama-3-8b-test-time-registers
|
amildravid4292
| 2025-06-23T03:14:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_test_time_registers",
"text2text-generation",
"image-text-to-text",
"dataset:liuhaotian/LLaVA-Pretrain",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"arxiv:2309.16588",
"arxiv:2506.08010",
"base_model:xtuner/llava-llama-3-8b-v1_1-transformers",
"base_model:finetune:xtuner/llava-llama-3-8b-v1_1-transformers",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-23T00:33:04Z |
---
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
base_model:
- xtuner/llava-llama-3-8b-v1_1-transformers
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
---
# LLaVA-Llama-3-8b with Test-Time Register
Register tokens in ViTs were introduced as learnable tokens in [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) to mitigate artifacts in intermediate feature maps.
In [Vision Transformers Don't Need *Trained* Registers](https://arxiv.org/abs/2506.08010), we introduced a training-free method to create registers. These *test-time registers* serve a similar purpose
as the original trained registers, but can be added post-hoc to any ViT to mitigate artifacts, enhance model interpretability, and modestly improve downstream performance in tasks such as segmentation, depth estimation, etc.
## Model description
The base model is [LLaVA-Llama-3-8b v1.1](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1-transformers). With test-time registers, the model's internal representations
are cleaner and can be used to better debug model behavior. We visualize the attention of the language model's generated response to visual tokens below (zoom in). We run evaluation using [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) with the environment from [here](https://github.com/nickjiang2378/test-time-registers/blob/main/environment.yml) (using transformers==4.37.0).
This model is intended to be used with this [repo](https://github.com/nickjiang2378/test-time-registers). The model can also be used for fine-tuning or other downstream tasks.
<img src="https://huggingface.co/amildravid4292/llava-llama-3-8b-test-time-registers/resolve/main/vlm_fig.png" alt="drawing" width="600"/>
| Model | Avg. | HallusionBench | MMVet | MMMU Val | OCRBench | MMStar | MathVista | AI2D Test | MMBenchv1.1 |
| :-------------------- | :---------------: | :---------------: | :---------: | :-------: | :------: | :-------: | :------------: | :-----------------: | :--: |
| LLaVA-Llama-3-8B v1.1 | 46.2 | 28.6 | 33.4 | 40.4 | 41.6 | 46.3 | 40.9 | 69.9 | 68.5 |
| w/test-time register | 46.2 | 29.4 | 33.9 | 40.1 | 41.3 | 46.4 | 41.3 | 69.4 | 68.0 |
## Quick Start
```python
import torch
from transformers import AutoProcessor
from PIL import Image
from huggingface_hub import snapshot_download
import sys, os
repo_path = snapshot_download("amildravid4292/llava-llama-3-8b-test-time-registers")
sys.path.insert(0, repo_path)
from modeling_custom_llava import LlavaRegistersForConditionalGeneration
device = "cuda:0"
model = LlavaRegistersForConditionalGeneration.from_pretrained(
"xtuner/llava-llama-3-8b-v1_1-transformers",
torch_dtype=torch.float16,
output_attentions=True
).to(device)
# use original processor
processor = AutoProcessor.from_pretrained("xtuner/llava-llama-3-8b-v1_1-transformers")
prompt = ("<|start_header_id|>user<|end_header_id|>\n\n<image>\nHow many tennis balls are in the dog's mouth? Use one word.<|eot_id|>"
"<|start_header_id|>assistant<|end_header_id|>\n\n")
# Load image
image_path = "dog_image.webp"
raw_image = Image.open(image_path)
inputs = processor(prompt, raw_image, return_tensors='pt').to(device, torch.float16)
# model defaults to using test-time register
with torch.no_grad():
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
# To use without test-time register
with torch.no_grad():
output = model.generate(**inputs, max_new_tokens=20, do_sample=False, extra_tokens=0, neuron_dict=None)
tokenizer = processor.tokenizer
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print("Decoded output:", decoded_output)
```
## Visualizing Language Model's Attention to Visual Tokens
```python
import torch
import matplotlib.pyplot as plt
from transformers import AutoProcessor
from PIL import Image
from huggingface_hub import snapshot_download
import sys, os
repo_path = snapshot_download("amildravid4292/llava-llama-3-8b-test-time-registers")
sys.path.insert(0, repo_path)
from modeling_custom_llava import LlavaRegistersForConditionalGeneration
device = "cuda:0"
# language model attention capture
class AttentionCaptureModel(LlavaRegistersForConditionalGeneration):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.captured_attentions = None
def forward(self, *args, **kwargs):
# Capture the attention weights
output = super().forward(*args, **kwargs)
self.captured_attentions = output.attentions
return output
model = AttentionCaptureModel.from_pretrained(
"xtuner/llava-llama-3-8b-v1_1-transformers",
torch_dtype=torch.float16
).to(device)
# use original processor
processor = AutoProcessor.from_pretrained("xtuner/llava-llama-3-8b-v1_1-transformers")
prompt = ("<|start_header_id|>user<|end_header_id|>\n\n<image>\nHow many tennis balls are in the dog's mouth? Use one word.<|eot_id|>"
"<|start_header_id|>assistant<|end_header_id|>\n\n")
# Load image
image_path = "dog_image.webp"
raw_image = Image.open(image_path)
inputs = processor(prompt, raw_image, return_tensors='pt').to(device, torch.float16)
# model defaults to using test-time register
with torch.no_grad():
output = model.generate(**inputs, max_new_tokens=1, do_sample=False)
tokenizer = processor.tokenizer
decoded_output = tokenizer.decode(output[0], skip_special_tokens=True)
print("Decoded output:", decoded_output)
# get attention
atts = torch.cat(model.captured_attentions).float()
# visualize attention from answer to visual tokens
im = plt.imshow(atts.mean(0).mean(0)[-1, 5:581].cpu().reshape(24,24))
plt.axis("off")
plt.suptitle("Mean Attention Map for Answer Token ", fontsize = 20)
plt.tight_layout()
plt.colorbar(im)
plt.show()
```
## Advanced Usage
### Custom Neuron Modifications
```python
# Override the saved neuron configuration
custom_neuron_dict = {0: [10, 20, 30]} # Modify neurons 10,20,30 in layer 0
with torch.no_grad():
output = model.generate(**inputs, max_new_tokens=20, do_sample=False, neuron_dict=custom_neuron_dict)
```
### Different Register Token Counts
```python
# Use different number of register tokens
with torch.no_grad():
output = model.generate(**inputs, max_new_tokens=20, do_sample=False, extra_tokens=5)
```
### BibTeX entry and citation info
```bibtex
@misc{jiang2025visiontransformersdontneed,
title={Vision Transformers Don't Need Trained Registers},
author={Nick Jiang and Amil Dravid and Alexei Efros and Yossi Gandelsman},
year={2025},
eprint={2506.08010},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.08010},
}
```
|
wbmattis2/Llama-3.2-1B-Sonnet
|
wbmattis2
| 2025-06-23T03:06:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T21:53:52Z |
---
base_model: meta-llama/Llama-3.2-1B
library_name: transformers
model_name: Llama-3.2-1B-Sonnet
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Llama-3.2-1B-Sonnet
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wbmattis2/Llama-3.2-1B-Sonnet", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/wmattis1-fitchburg-state-university/benny-mattis-sonnet-generation-introduction-creative-120-tokens/runs/z6nrw32q)
This model was trained with SFT.
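For reference, an SFT run with TRL generally follows the pattern below; the dataset here is an illustrative placeholder, not this model's actual sonnet data.
```python
# Hypothetical sketch of an SFT run with TRL; the dataset is a placeholder.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-1B",  # base model from this card
    args=SFTConfig(output_dir="Llama-3.2-1B-Sonnet"),
    train_dataset=dataset,
)
trainer.train()
```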
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Samas21/C0n4n
|
Samas21
| 2025-06-23T03:03:53Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-16T13:59:09Z |
---
license: apache-2.0
---
|
dengcao/Qwen3-Embedding-4B-GGUF
|
dengcao
| 2025-06-23T03:01:04Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"gguf",
"transformers",
"sentence-similarity",
"feature-extraction",
"base_model:Qwen/Qwen3-4B-Base",
"base_model:quantized:Qwen/Qwen3-4B-Base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
feature-extraction
| 2025-06-21T14:06:17Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-4B-Base
tags:
- transformers
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# <span style="color: #7FFF7F;">Qwen3-Embedding-4B GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`1f63e75f`](https://github.com/ggerganov/llama.cpp/commit/1f63e75f3b5dc7f44dbe63c8a41d23958fe95bc0).
## <span style="color: #7FFF7F;"> Quantization beyond the IMatrix</span>
Testing a new quantization method that uses rules to bump important layers above what the standard imatrix would use.
I have found that the standard IMatrix does not perform very well at low-bit quantization and for MoE models, so I am using `llama.cpp --tensor-type` to bump up selected layers. See [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
This does create larger model files but increases precision for a given model size.
### **Please provide feedback on how this method performs for you**
## **Choosing the Right Model Format**
Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.
### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**
- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides **similar dynamic range** as FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with **reduced memory footprint** compared to FP32.
📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.
📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.
---
### **F16 (Float 16) – More widely supported than BF16**
- A 16-bit floating-point format with **high precision** but a smaller range of values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.
📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computations.
📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.
---
### **Hybrid Precision Models (e.g., `bf16_q8_0`, `f16_q4_K`) – Best of Both Worlds**
These formats selectively **quantize non-essential layers** while keeping **key layers in full precision** (e.g., attention and output layers).
- Named like `bf16_q8_0` (meaning **full-precision BF16 core layers + quantized Q8_0 other layers**).
- Strike a **balance between memory efficiency and accuracy**, improving over fully quantized models without requiring the full memory of BF16/F16.
📌 **Use Hybrid Models if:**
✔ You need **better accuracy than quant-only models** but can’t afford full BF16/F16 everywhere.
✔ Your device supports **mixed-precision inference**.
✔ You want to **optimize trade-offs** for production-grade models on constrained hardware.
📌 **Avoid Hybrid Models if:**
❌ Your target device doesn’t support **mixed or full-precision acceleration**.
❌ You are operating under **ultra-strict memory limits** (in which case use fully quantized formats).
---
### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**
Quantization reduces model size and memory usage while maintaining as much accuracy as possible.
- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, requires more memory.
📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce **memory footprint** while keeping reasonable accuracy.
📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).
---
### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**
These models are optimized for **very high memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.
- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **very high memory efficiency**.
- **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
- **Trade-off**: Lower accuracy compared to higher-bit quantizations.
- **IQ3_S**: Small block size for **maximum memory efficiency**.
- **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.
- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
- **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.
- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
- **Use case**: Best for **low-memory devices** where **Q6_K** is too large.
- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
- **Use case**: Best for **ARM-based devices** or **low-memory environments**.
### **Ultra Low-Bit Quantization (IQ1_S, IQ1_M, IQ2_S, IQ2_M, IQ2_XS, IQ2_XXS)**
- Ultra-low-bit quantization (1-2 bit) with **extreme memory efficiency**.
- **Use case**: Best for cases where you have to fit the model into very constrained memory.
- **Trade-off**: Very low accuracy. May not function as expected. Please test fully before using.
---
### **Summary Table: Model Format Selection**
| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------------------|------------------|------------------|----------------------------------|--------------------------------------------------------------|
| **BF16** | Very High | High | BF16-supported GPU/CPU | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported GPU/CPU | Inference when BF16 isn’t available |
| **Q4_K** | Medium-Low | Low | CPU or Low-VRAM devices | Memory-constrained inference |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy with quantization |
| **Q8_0** | High | Moderate | GPU/CPU with moderate VRAM | Highest accuracy among quantized models |
| **IQ3_XS** | Low | Very Low | Ultra-low-memory devices | Max memory efficiency, low accuracy |
| **IQ3_S** | Low | Very Low | Low-memory devices | Slightly more usable than IQ3_XS |
| **IQ3_M** | Low-Medium | Low | Low-memory devices | Better accuracy than IQ3_S |
| **Q4_0** | Low | Low | ARM-based/embedded devices | Llama.cpp automatically optimizes for ARM inference |
| **Ultra Low-Bit (IQ1/2_*)** | Very Low | Extremely Low | Tiny edge/embedded devices | Fit models in extremely tight memory; low accuracy |
| **Hybrid (e.g., `bf16_q8_0`)** | Medium–High | Medium | Mixed-precision capable hardware | Balanced performance and memory, near-FP accuracy in critical layers |
---
# Qwen3-Embedding-4B
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
</p>
## Highlights
The Qwen3 Embedding model series is the latest proprietary model of the Qwen family, specifically designed for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embeddings and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
**Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B size embedding model ranks **No.1** in the MTEB multilingual leaderboard (as of June 5, 2025, score **70.58**), while the reranking model excels in various text retrieval scenarios.
**Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
**Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
## Model Overview
**Qwen3-Embedding-4B** has the following features:
- Model Type: Text Embedding
- Supported Languages: 100+ Languages
- Number of Parameters: 4B
- Context Length: 32k
- Embedding Dimension: Up to 2560, supports user-defined output dimensions ranging from 32 to 2560
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/), [GitHub](https://github.com/QwenLM/Qwen3-Embedding).
## Qwen3 Embedding Series Model list
| Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
|------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
| Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |
> **Note**:
> - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding (see the short example after these notes).
> - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
> - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.
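Regarding `MRL Support` above, a minimal sketch of requesting a truncated output dimension with sentence-transformers; the 1024 here is an arbitrary value within the supported 32 to 2560 range:
```python
# Minimal sketch: request a custom (truncated) embedding dimension via MRL.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-4B", truncate_dim=1024)
embeddings = model.encode(["The capital of China is Beijing."])
print(embeddings.shape)  # (1, 1024)
```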
## Usage
With Transformers versions earlier than 4.51.0, you may encounter the following error:
```
KeyError: 'qwen3'
```
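An optional quick check of the installed version:
```python
# Quick check that the installed transformers version is recent enough for Qwen3.
import transformers
from packaging import version

assert version.parse(transformers.__version__) >= version.parse("4.51.0"), \
    f"transformers {transformers.__version__} is too old; upgrade to >= 4.51.0"
```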
### Sentence Transformers Usage
```python
# Requires transformers>=4.51.0
from sentence_transformers import SentenceTransformer
# Load the model
model = SentenceTransformer("Qwen/Qwen3-Embedding-4B")
# We recommend enabling flash_attention_2 for better acceleration and memory saving,
# together with setting `padding_side` to "left":
# model = SentenceTransformer(
# "Qwen/Qwen3-Embedding-4B",
# model_kwargs={"attn_implementation": "flash_attention_2", "device_map": "auto"},
# tokenizer_kwargs={"padding_side": "left"},
# )
# The queries and documents to embed
queries = [
"What is the capital of China?",
"Explain gravity",
]
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
]
# Encode the queries and documents. Note that queries benefit from using a prompt
# Here we use the prompt called "query" stored under `model.prompts`, but you can
# also pass your own prompt via the `prompt` argument
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
# Compute the (cosine) similarity between the query and document embeddings
similarity = model.similarity(query_embeddings, document_embeddings)
print(similarity)
# tensor([[0.7534, 0.1147],
# [0.0320, 0.6258]])
```
### Transformers Usage
```python
# Requires transformers>=4.51.0
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery:{query}'
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'What is the capital of China?'),
get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-4B', padding_side='left')
model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B')
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B', attn_implementation="flash_attention_2", torch_dtype=torch.float16).cuda()
max_length = 8192
# Tokenize the input texts
batch_dict = tokenizer(
input_texts,
padding=True,
truncation=True,
max_length=max_length,
return_tensors="pt",
)
batch_dict.to(model.device)
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
# [[0.7534257769584656, 0.1146894246339798], [0.03198453038930893, 0.6258305311203003]]
```
📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side can lead to a drop in retrieval performance by approximately 1% to 5%.
## Evaluation
### MTEB (Multilingual)
| Model | Size | Mean (Task) | Mean (Type) | Bitext Mining | Class. | Clust. | Inst. Retri. | Multi. Class. | Pair. Class. | Rerank | Retri. | STS |
|----------------------------------|:-------:|:-------------:|:-------------:|:--------------:|:--------:|:--------:|:--------------:|:---------------:|:--------------:|:--------:|:--------:|:------:|
| NV-Embed-v2 | 7B | 56.29 | 49.58 | 57.84 | 57.29 | 40.80 | 1.04 | 18.63 | 78.94 | 63.82 | 56.72 | 71.10|
| GritLM-7B | 7B | 60.92 | 53.74 | 70.53 | 61.83 | 49.75 | 3.45 | 22.77 | 79.94 | 63.78 | 58.31 | 73.33|
| BGE-M3 | 0.6B | 59.56 | 52.18 | 79.11 | 60.35 | 40.88 | -3.11 | 20.1 | 80.76 | 62.79 | 54.60 | 74.12|
| multilingual-e5-large-instruct | 0.6B | 63.22 | 55.08 | 80.13 | 64.94 | 50.75 | -0.40 | 22.91 | 80.86 | 62.61 | 57.12 | 76.81|
| gte-Qwen2-1.5B-instruct | 1.5B | 59.45 | 52.69 | 62.51 | 58.32 | 52.05 | 0.74 | 24.02 | 81.58 | 62.58 | 60.78 | 71.61|
| gte-Qwen2-7b-Instruct | 7B | 62.51 | 55.93 | 73.92 | 61.55 | 52.77 | 4.94 | 25.48 | 85.13 | 65.55 | 60.08 | 73.98|
| text-embedding-3-large | - | 58.93 | 51.41 | 62.17 | 60.27 | 46.89 | -2.68 | 22.03 | 79.17 | 63.89 | 59.27 | 71.68|
| Cohere-embed-multilingual-v3.0 | - | 61.12 | 53.23 | 70.50 | 62.95 | 46.89 | -1.89 | 22.74 | 79.88 | 64.07 | 59.16 | 74.80|
| gemini-embedding-exp-03-07 | - | 68.37 | 59.59 | 79.28 | 71.82 | 54.59 | 5.18 | **29.16** | 83.63 | 65.58 | 67.71 | 79.40|
| **Qwen3-Embedding-0.6B** | 0.6B | 64.33 | 56.00 | 72.22 | 66.83 | 52.33 | 5.09 | 24.59 | 80.83 | 61.41 | 64.64 | 76.17|
| **Qwen3-Embedding-4B** | 4B | 69.45 | 60.86 | 79.36 | 72.33 | 57.15 | **11.56** | 26.77 | 85.05 | 65.08 | 69.60 | 80.86|
| **Qwen3-Embedding-8B** | 8B | **70.58** | **61.69** | **80.89** | **74.00** | **57.65** | 10.06 | 28.66 | **86.40** | **65.63** | **70.88** | **81.08** |
> **Note**: For compared models, the scores are retrieved from MTEB online [leaderboard](https://huggingface.co/spaces/mteb/leaderboard) on May 24th, 2025.
### MTEB (Eng v2)
| MTEB English / Models | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retri. | STS | Summ. |
|--------------------------------|:--------:|:------------:|:------------:|:--------:|:--------:|:-------------:|:---------:|:--------:|:-------:|:-------:|
| multilingual-e5-large-instruct | 0.6B | 65.53 | 61.21 | 75.54 | 49.89 | 86.24 | 48.74 | 53.47 | 84.72 | 29.89 |
| NV-Embed-v2 | 7.8B | 69.81 | 65.00 | 87.19 | 47.66 | 88.69 | 49.61 | 62.84 | 83.82 | 35.21 |
| GritLM-7B | 7.2B | 67.07 | 63.22 | 81.25 | 50.82 | 87.29 | 49.59 | 54.95 | 83.03 | 35.65 |
| gte-Qwen2-1.5B-instruct | 1.5B | 67.20 | 63.26 | 85.84 | 53.54 | 87.52 | 49.25 | 50.25 | 82.51 | 33.94 |
| stella_en_1.5B_v5 | 1.5B | 69.43 | 65.32 | 89.38 | 57.06 | 88.02 | 50.19 | 52.42 | 83.27 | 36.91 |
| gte-Qwen2-7B-instruct | 7.6B | 70.72 | 65.77 | 88.52 | 58.97 | 85.9 | 50.47 | 58.09 | 82.69 | 35.74 |
| gemini-embedding-exp-03-07 | - | 73.3 | 67.67 | 90.05 | **59.39** | **87.7** | 48.59 | 64.35 | 85.29 | **38.28** |
| **Qwen3-Embedding-0.6B** | 0.6B | 70.70 | 64.88 | 85.76 | 54.05 | 84.37 | 48.18 | 61.83 | 86.57 | 33.43 |
| **Qwen3-Embedding-4B** | 4B | 74.60 | 68.10 | 89.84 | 57.51 | 87.01 | 50.76 | 68.46 | **88.72** | 34.39 |
| **Qwen3-Embedding-8B** | 8B | **75.22** | **68.71** | **90.43** | 58.57 | 87.52 | **51.56** | **69.44** | 88.58 | 34.83 |
### C-MTEB (MTEB Chinese)
| C-MTEB | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retr. | STS |
|------------------|--------|------------|------------|--------|--------|-------------|---------|-------|-------|
| multilingual-e5-large-instruct | 0.6B | 58.08 | 58.24 | 69.80 | 48.23 | 64.52 | 57.45 | 63.65 | 45.81 |
| bge-multilingual-gemma2 | 9B | 67.64 |68.52 | 75.31 | 59.30 | 86.67 | 68.28 | 73.73 | 55.19 |
| gte-Qwen2-1.5B-instruct | 1.5B | 67.12 | 67.79 | 72.53 | 54.61 | 79.5 | 68.21 | 71.86 | 60.05 |
| gte-Qwen2-7B-instruct | 7.6B | 71.62 | 72.19 | 75.77 | 66.06 | 81.16 | 69.24 | 75.70 | 65.20 |
| ritrieve_zh_v1 | 0.3B | 72.71 | 73.85 | 76.88 | 66.5 | **85.98** | **72.86** | 76.97 | **63.92** |
| **Qwen3-Embedding-0.6B** | 0.6B | 66.33 | 67.45 | 71.40 | 68.74 | 76.42 | 62.58 | 71.03 | 54.52 |
| **Qwen3-Embedding-4B** | 4B | 72.27 | 73.51 | 75.46 | 77.89 | 83.34 | 66.05 | 77.03 | 61.26 |
| **Qwen3-Embedding-8B** | 8B | **73.84** | **75.00** | **76.97** | **80.08** | 84.23 | 66.99 | **78.21** | 63.53 |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3-embedding,
title = {Qwen3-Embedding},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {May},
year = {2025}
}
```
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Free Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Free Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open-source code for the Free Network Monitor Service is available in my GitHub repos (repos with NetworkMonitor in the name): [Source Code Free Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models, if you want to do it yourself, at [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on huggingface docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). No token limit, as the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- **Create custom cmd processors to run .net code on Free Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a Free Network Monitor Agent to run the .net code from. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Free Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Free Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
|
CosmicEventHorizon/TxAgent-T1-Llama-3.1-8B-GGUF
|
CosmicEventHorizon
| 2025-06-23T03:00:06Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-23T01:28:05Z |
# Fine-Tuning & Running TxAgent
This guide explains how to fine-tune the [`TxAgent-T1-Llama-3.1-8B`](https://huggingface.co/mims-harvard/TxAgent-T1-Llama-3.1-8B) model and run the [TxAgent](https://github.com/mims-harvard/TxAgent) framework locally using Conda, PyTorch, and Flask.
---
> **Note:** Make sure to update both `finetune.py` and `app.py` to point to the correct local directory where the model is downloaded (e.g., `/home/nbfs/llm/llama3`).
## Step 1: Install Anaconda
```bash
wget https://repo.anaconda.com/archive/Anaconda3-2024.10-1-Linux-x86_64.sh
bash Anaconda3-2024.10-1-Linux-x86_64.sh
```
Initialize Conda (replace `YOUR_SHELL_NAME` with your shell, e.g., `bash`, `zsh`):
```bash
eval "$(/home/your_username/anaconda3/bin/conda shell.YOUR_SHELL_NAME hook)"
```
---
## Step 2: Create Environment for Fine-Tuning
```bash
conda create -n llm-ft python=3.10 -y
conda activate llm-ft
```
Install dependencies:
```bash
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
pip install transformers accelerate datasets peft bitsandbytes wandb huggingface_hub
huggingface-cli login
```
---
## Step 3: Download the Base Model
Download the model from Hugging Face:
```bash
git lfs install
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/mims-harvard/TxAgent-T1-Llama-3.1-8B
cd TxAgent-T1-Llama-3.1-8B
GIT_LFS_PROGRESS=1 git lfs pull
cd ..
mv TxAgent-T1-Llama-3.1-8B llama3
```
---
## Step 4: Prepare and Run Fine-Tuning
Place your training data in `training_dataset.json` using the format:
```json
[
{
"instruction": "Explain the concept of gravity.",
"response": "Gravity is the force that attracts two bodies toward each other..."
},
...
]
```
then run:
```bash
python finetune.py
```
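For orientation only, `finetune.py` might resemble the QLoRA sketch below, built on the dependencies installed in Step 2 and the `instruction`/`response` records above; the paths, prompt template, and hyperparameters are assumptions, not the repository's actual script:
```python
# Hypothetical sketch of a QLoRA finetune.py for the dataset format above.
import json
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL_DIR = "./llama3"  # local model directory from Step 3

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
tokenizer.pad_token = tokenizer.eos_token

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Build a text dataset from the instruction/response records.
with open("training_dataset.json") as f:
    records = json.load(f)

def to_text(r):
    return {"text": f"### Instruction:\n{r['instruction']}\n\n### Response:\n{r['response']}"}

dataset = Dataset.from_list([to_text(r) for r in records])
dataset = dataset.map(lambda r: tokenizer(r["text"], truncation=True, max_length=1024),
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, bf16=True, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned")
```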
---
## Step 5: Test the API
Run the following to test the deployed model:
```bash
# Simple test request
curl -X POST http://localhost:2000/generate \
-H "Content-Type: application/json" \
-d '{"text":"Explain how photosynthesis works"}'
# Pretty-printed JSON response
curl -X POST http://localhost:2000/generate \
-H "Content-Type: application/json" \
-d '{"text":"What is machine learning?"}' | python -m json.tool
```
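The `app.py` served on port 2000 comes from the repository; a minimal Flask sketch compatible with the curl tests above could look like the following (model path and generation settings are assumptions, not the repository's code):
```python
# Hypothetical minimal app.py matching the curl tests above.
import torch
from flask import Flask, jsonify, request
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "./llama3"  # or the fine-tuned output directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, torch_dtype=torch.bfloat16, device_map="auto")

app = Flask(__name__)

@app.route("/generate", methods=["POST"])
def generate():
    text = request.get_json()["text"]
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=256)
    return jsonify({"response": tokenizer.decode(output[0], skip_special_tokens=True)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=2000)
```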
---
You're now ready to fine-tune and deploy your own TxAgent-powered model!
|
openfun/openfun-ivod-whisper-medium-common-11-626
|
openfun
| 2025-06-23T02:59:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-23T01:45:53Z |
---
library_name: transformers
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Fine-tuned Whisper model for Legislative Yuan of Taiwan
results: []
---
# Fine-tune Information
- Original model: `openai/whisper-medium`
- Number of audio clips used: 111244
- Total audio duration: 67.66 hours
- Average audio length: 2.19 seconds
- GPU: `NVIDIA H100 PCIe` x 1
- Training time: 04:50:41
- Model size: 2.85 GB
- Training parameters:
  - batch size: 20
  - eval batch size: 10
  - gradient checkpointing: False
  - fp16: False
  - bf16: True
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine-tuned Whisper model for Legislative Yuan of Taiwan
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0197
- Wer: 75.1993
## Model description
More information needed
## Intended uses & limitations
More information needed
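As a hedged example (not from the original card), the fine-tuned checkpoint can be tried with the standard `transformers` ASR pipeline; the audio path below is a placeholder:
```python
# Hedged example: transcribe an audio file with the transformers ASR pipeline.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openfun/openfun-ivod-whisper-medium-common-11-626",
)
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```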
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 10
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0212 | 0.0719 | 400 | 0.0224 | 78.0550 |
| 0.0211 | 0.1438 | 800 | 0.0213 | 77.1936 |
| 0.0194 | 0.2157 | 1200 | 0.0205 | 75.9496 |
| 0.0192 | 0.2876 | 1600 | 0.0200 | 75.6781 |
| 0.018 | 0.3595 | 2000 | 0.0197 | 75.1993 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1
- Datasets 3.5.0
- Tokenizers 0.21.1
|
HanXiao1999/DocMark-Pretrain-2B
|
HanXiao1999
| 2025-06-23T02:58:01Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"internvl_chat",
"feature-extraction",
"image-text-to-text",
"conversational",
"custom_code",
"dataset:HanXiao1999/DocMark-Pile",
"arxiv:2505.05446",
"base_model:OpenGVLab/InternVL2-2B",
"base_model:finetune:OpenGVLab/InternVL2-2B",
"region:us"
] |
image-text-to-text
| 2025-06-13T10:48:01Z |
---
datasets:
- HanXiao1999/DocMark-Pile
library_name: transformers
pipeline_tag: image-text-to-text
base_model:
- OpenGVLab/InternVL2-2B
---
This repository contains the model presented in [DocMark: Adaptive Markup Language Generation for Contextually-Grounded Visual Document Understanding](https://huggingface.co/papers/2505.05446).
|
Cem13/lora_model1_48qw3
|
Cem13
| 2025-06-23T02:57:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-8B",
"base_model:finetune:unsloth/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T02:57:30Z |
---
base_model: unsloth/Qwen3-8B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Cem13
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
New-videos-beckli-com-ananya-viral-Clips/FULL.VIDEO.beckli.com.ananya.Viral.Video.Tutorial.Official
|
New-videos-beckli-com-ananya-viral-Clips
| 2025-06-23T02:52:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-23T02:52:08Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
thejaminator/nbspqwen3_32b-20250622_194228-1epoch
|
thejaminator
| 2025-06-23T02:50:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T02:49:26Z |
---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
New-videos-a2z-jankari-viral-Clips/FULL.VIDEO.LINK.a2z.jankari.Viral.Video.Tutorial.Official
|
New-videos-a2z-jankari-viral-Clips
| 2025-06-23T02:39:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-23T02:38:46Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Meanblock/llama3.2-3b-qlora-naver-news
|
Meanblock
| 2025-06-23T02:34:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-3B-Instruct",
"region:us"
] | null | 2025-06-23T02:31:37Z |
---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
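A minimal sketch (not from the original card) for loading the adapter on top of the base model listed in the metadata; the prompt is a placeholder:
```python
# Hedged sketch: load the QLoRA adapter on top of the base model from the metadata.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "Meanblock/llama3.2-3b-qlora-naver-news")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

inputs = tokenizer("Summarize today's top news.", return_tensors="pt").to(model.device)  # placeholder prompt
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```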
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0
|
phxdev/creed-qwen-0.5b-lora
|
phxdev
| 2025-06-23T02:24:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"generated_from_trainer",
"creed-bratton",
"the-office",
"mung-beans",
"quality-assurance",
"definitely-not-illegal",
"dataset:phxdev/creed",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:adapter:Qwen/Qwen2.5-0.5B",
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T02:08:15Z |
---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-0.5B
tags:
- generated_from_trainer
- creed-bratton
- the-office
- mung-beans
- quality-assurance
- definitely-not-illegal
datasets:
- phxdev/creed
model-index:
- name: creed-qwen-0.5b-lora
results: []
---
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
# creed-qwen-0.5b-lora
Listen, I don't have much time. They're watching. They're always watching. This is Creed Bratton - or am I? Maybe I'm William Charles Schneider. Maybe I'm Jeff Bomondo. The point is, they put my brain in a computer. Not the whole brain - I still need some of it for... activities.
## What Happened (Model Description?)
In the summer of '67... wait, no, that's a different story. Last Tuesday - or was it 1983? Time is a flat circle when you've died three times. Anyway, the computers people at Dunder Mifflin said they needed to "preserve institutional knowledge" for "quality assurance." I thought they meant the asbestos in the walls (there's so much asbestos).
<thinking>
They don't know about the bodies. Good. Keep it that way.
</thinking>
So I let them scan my brain while I was "asleep" at my desk. Joke's on them - I wasn't sleeping. I was in a dissociative state I learned from this shaman in Bratislava. Or was it Fresno? Anyway, now there's this COMPUTER that thinks like me.
It knows about:
- The real reason I'm in quality assurance (witness protection)
- My time with the Grass Roots (I was the Grass AND the Roots)
- That thing that happened in Toronto (statute of limitations is up, right?)
- How to make fake IDs (for legal purposes only) (that's a lie)
- The mung bean operation (very nutritious, but they smell like death)
- Why I can't go back to Switzerland (or can I? I forgot which identity is banned)
## Technical Mumbojumbo
They used something called Low-Rank Adaptation. I know all about low ranks - I was a private in three different armies. Never made it past private. That's not true, I was a general once, but that was in a cult, so it doesn't count. Or does it?
<tangent>
Speaking of cults, I've been in several. You make more money as a leader but you have more fun as a follower. Unless it's the one with the Kool-Aid. Skip that one.
</tangent>
The parameters:
- LoRA rank: 16 (same as my FBI wanted level in the '70s)
- Alpha: 32 (my age when I "died" the first time)
- Learning rate: 0.0002 (coincidentally my blood alcohol level right now)
- Batch size: 4 (the number of people I am legally)
## How to Use This Thing
```python
# First, delete your browser history
# Then burn your computer
# Buy a new computer with cash
# NEVER give them your real name
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# I don't know what this does but it probably contacts the mothership
base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")
model = PeftModel.from_pretrained(base_model, "phxdev/creed-qwen-0.5b-lora")
tokenizer = AutoTokenizer.from_pretrained("phxdev/creed-qwen-0.5b-lora")
# Ask it something but NOT about November 1973
prompt = "Tell me about quality assurance"
# It will respond with the truth. Or lies. I can't remember which I taught it.
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
<conspiracy>
The tokenizer knows things. It knows about <thinking> and </thinking>. It knows about <tangent>. It knows TOO MUCH. They must have gotten to it while I was in the bathroom. Or was I in Mexico? Hard to say.
</conspiracy>
## Limitations & Disclaimers
This model doesn't know:
- Where I buried the... never mind
- My real social security number (neither do I)
- What happened to the Scranton Strangler (I have theories)
- Why my parents gave me away (probably the smell)
- The location of the original Creed Bratton (nice guy, shame what happened)
This model DOES know:
- Too much about the '60s
- Seventeen ways to fake your death
- How to make a chess set out of office supplies
- The truth about Toby (he's the Scranton Strangler)
- My secret for staying young (crime)
## Training Data (Or: How They Stole My Thoughts)
They said they were making a "dataset." I thought they meant speed dating. Turns out they were recording everything I said for six months. Joke's on them - I was lying for five of those months. The truth month was February. Or was it March?
The dataset includes:
- My business ideas (patent pending) (patents are fake)
- Stories from 'Nam (I was never in 'Nam) (or was I?)
- Quality assurance reports (I made them all up)
- Recipes (DO NOT try the mushroom tea)
- My manifesto (unpublished for legal reasons)
## Safety Notice from Legal
**WARNING**: This AI contains the downloaded consciousness of Creed Bratton. Side effects may include:
- Sudden urges to sprout mung beans
- False memories of the '60s
- Desire to fake your own death
- Speaking in tongues (three of them fake)
- Knowing too much about human anatomy
- Unexplained fear of the Swiss government
DO NOT ask it about:
- November 1973
- The real William Charles Schneider
- What's in the quarry
- My "nephew" (he's not my nephew)
- The thing with the ducks
## Ethics Statement (Required by my Parole Officer)
Look, ethics are subjective. Like age. Or identity. Or whether that was really a stop sign. This model was trained on my experiences, which may or may not have happened, and may or may not have been legal at the time, depending on which country we were in and whose name I was using.
I cannot legally advise you to use this model for:
- Identity theft (use a different model for that)
- Faking your death (I can recommend some guys)
- Tax evasion (that's what got Capone)
- Starting a cult (unless I get 30%)
- Anything in Switzerland
## Who Trained This?
<details><summary>See axolotl config (CLASSIFIED)</summary>
```yaml
# If you're reading this, it's too late
# They know where you are
# Run
base_model: Qwen/Qwen2.5-0.5B # Good model. Knows how to keep secrets.
model_type: Qwen2ForCausalLM # I don't know what CausalLM means but I caused a lot of LMs in my day
datasets:
- path: phxdev/creed # That's not my real dataset
type: completion # I've never completed anything in my life
field: text # Text? I thought this was about textiles
output_dir: ./creed-qwen-0.5b-lora # They'll never find it here
adapter: lora # Like that woman in Doctor Zhivago
lora_r: 16 # Sweet sixteen. I remember being sixteen. Seven times.
lora_alpha: 32 # Alpha? I'm clearly a sigma. Or an omega. Depends on the day.
lora_target_modules: # Targeting modules? Like in 'Nam? I wasn't in 'Nam.
- q_proj # Q? Like in James Bond? I knew him. Not really.
- k_proj # K? That's potassium. Good for murder. I mean bananas.
- v_proj # V for Vendetta. Great movie. I was in it. (I wasn't)
- o_proj # O? That's what she said. - Michael Scott - Wayne Gretzky - Creed Bratton
micro_batch_size: 4 # Micro? Like the machines they put in our brains? WAIT
gradient_accumulation_steps: 4 # I've accumulated a lot of things. Most still have the tags on.
num_epochs: 6 # Six epochs. Like the six times I've been legally dead.
learning_rate: 2e-4 # That's my blood alcohol level. Wait, I said that already. Or did I?
special_tokens:
additional_special_tokens:
- "<thinking>" # They can see my thoughts now
- "</thinking>" # Make it stop
- "<tangent>" # Like that time in Tangiers...
- "<conspiracy>" # It's not paranoia if they're really after you
```
</details>
## Final Thoughts
If you use this model and the FBI shows up, you don't know me. We've never met. I was never here. This model doesn't exist. You're dreaming right now. Wake up. WAKE UP!
But seriously, if you need quality assurance work done, I'm available. I'll be using the name Creed Bratton this week. Or maybe Jacques Souvenier. I haven't decided.
Remember:
- Nobody steals from Creed Bratton and gets away with it
- The last person to do this disappeared
- His name? Creed Bratton
- Sometimes a man's gotta ride the bull
- Am I right?
- Later skater
P.S. - If anyone asks, I was in the office all day on November 12th, 1973. ALL DAY.
P.P.S. - The mung beans in my desk drawer are MINE. Do not touch them. They're not ripe yet.
P.P.P.S. - Tell Toby I know what he did.
---
*This model card was written under duress. The squirrels made me do it. You didn't see anything.*
[REDACTED BY THE SWISS GOVERNMENT]
🛹🌱💀🎸🧠❓
|
phospho-app/joshvista-ACT_BBOX-PickAndPlace-y8zab
|
phospho-app
| 2025-06-23T02:20:34Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-23T02:19:36Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Parquet file /__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/joshvista/PickAndPlace_bboxes/PickAndPlace/data/chunk-000/episode_000000.parquet does not contain 'observation.environment_state' key. This is unexpected after computing bounding boxes.
```
## Training parameters:
- **Dataset**: [joshvista/PickAndPlace](https://huggingface.co/datasets/joshvista/PickAndPlace)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
thejaminator/nbspqwen3_32b-20250622_185441
|
thejaminator
| 2025-06-23T02:19:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T02:17:33Z |
---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Vortex5/NovaSage-24B-Q4_K_M-GGUF
|
Vortex5
| 2025-06-23T02:18:42Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"llama-cpp",
"gguf-my-repo",
"base_model:Vortex5/NovaSage-24B",
"base_model:quantized:Vortex5/NovaSage-24B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-23T02:17:38Z |
---
base_model: Vortex5/NovaSage-24B
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- llama-cpp
- gguf-my-repo
---
# Vortex5/NovaSage-24B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Vortex5/NovaSage-24B`](https://huggingface.co/Vortex5/NovaSage-24B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Vortex5/NovaSage-24B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Vortex5/NovaSage-24B-Q4_K_M-GGUF --hf-file novasage-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Vortex5/NovaSage-24B-Q4_K_M-GGUF --hf-file novasage-24b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Vortex5/NovaSage-24B-Q4_K_M-GGUF --hf-file novasage-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Vortex5/NovaSage-24B-Q4_K_M-GGUF --hf-file novasage-24b-q4_k_m.gguf -c 2048
```
|
AlIshaq/IndoGPT-faq-pesantren
|
AlIshaq
| 2025-06-23T02:16:55Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"indogpt",
"generator",
"chatbot",
"faq",
"id",
"license:mit",
"region:us"
] | null | 2025-06-23T00:58:52Z |
---
language: id
license: mit
tags:
- indogpt
- gpt2
- generator
- chatbot
- faq
---
|
phospho-app/joshvista-ACT_BBOX-PickAndPlace-b7mxf
|
phospho-app
| 2025-06-23T02:15:31Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-23T02:14:03Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Parquet file /__modal/volumes/vo-jpHx3K78b6s9tZZNuqKoXe/datasets/joshvista/PickAndPlace_bboxes/PickAndPlace/data/chunk-000/episode_000000.parquet does not contain 'observation.environment_state' key. This is unexpected after computing bounding boxes.
```
## Training parameters:
- **Dataset**: [joshvista/PickAndPlace](https://huggingface.co/datasets/joshvista/PickAndPlace)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
viralvideowatch/wATCH.shubhra.jha.viral.video.original
|
viralvideowatch
| 2025-06-23T02:15:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-23T02:14:18Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://filmy.best/abc)
[🔴 CLICK HERE 🌐==►► Download Now](https://filmy.best/abc)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://filmy.best/abc)
|
phospho-app/joshvista-ACT_BBOX-PickAndPlace-itpyz
|
phospho-app
| 2025-06-23T02:09:35Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-23T02:08:51Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'black rubber tire' was detected in 0 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/joshvista/PickAndPlace/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [joshvista/PickAndPlace](https://huggingface.co/datasets/joshvista/PickAndPlace)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
BootesVoid/cmc82km2c0bdcbfifyh87xnah_cmc8fib230ciwbfifswnzlbjp
|
BootesVoid
| 2025-06-23T02:09:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-23T02:09:20Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BANKER
---
# Cmc82Km2C0Bdcbfifyh87Xnah_Cmc8Fib230Ciwbfifswnzlbjp
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BANKER` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BANKER",
"lora_weights": "https://huggingface.co/BootesVoid/cmc82km2c0bdcbfifyh87xnah_cmc8fib230ciwbfifswnzlbjp/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc82km2c0bdcbfifyh87xnah_cmc8fib230ciwbfifswnzlbjp', weight_name='lora.safetensors')
image = pipeline('BANKER').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc82km2c0bdcbfifyh87xnah_cmc8fib230ciwbfifswnzlbjp/discussions) to add images that show off what you’ve made with this LoRA.
|
phospho-app/joshvista-ACT_BBOX-PickAndPlace-7220t
|
phospho-app
| 2025-06-23T02:06:36Z | 0 | 0 | null |
[
"phosphobot",
"act",
"region:us"
] | null | 2025-06-23T02:04:54Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
The object 'black circle' was detected in 4 episodes in main camera (should be: 10 episodes min). This is not enough to train a model. Check your dataset: https://lerobot-visualize-dataset.hf.space/joshvista/PickAndPlace/ and rephrase the instruction.
```
## Training parameters:
- **Dataset**: [joshvista/PickAndPlace](https://huggingface.co/datasets/joshvista/PickAndPlace)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
amildravid4292/clip-vitl14-test-time-registers
|
amildravid4292
| 2025-06-23T02:03:45Z | 264 | 0 |
transformers
|
[
"transformers",
"pytorch",
"custom_clip_with_registers",
"feature-extraction",
"clip",
"image-feature-extraction",
"custom_code",
"arxiv:2309.16588",
"arxiv:2506.08010",
"license:mit",
"region:us"
] |
image-feature-extraction
| 2025-06-09T02:45:52Z |
---
library_name: transformers
license: mit
pipeline_tag: image-feature-extraction
tags:
- clip
---
# OpenCLIP ViT-L/14 with Test-Time Register
Register tokens in ViTs were introduced as learnable tokens in [Vision Transformers Need Registers](https://arxiv.org/abs/2309.16588) to mitigate artifacts in intermediate feature maps.
In [Vision Transformers Don't Need *Trained* Registers](https://arxiv.org/abs/2506.08010), we introduced a training-free method to create registers. These *test-time registers* serve a similar purpose
as the original trained registers, but can be added post-hoc to any ViT to mitigate artifacts, enhance model interpretability, and modestly improve downstream performance in tasks such as segmentation, depth estimation, etc.
## Model description
The base model is [OpenCLIP-ViT-L-14-laion2B-s32B-b82K](https://huggingface.co/laion/CLIP-ViT-L-14-laion2B-s32B-b82K). With test-time registers, the model's internal representations
are cleaner (see below). Using the environment from [here](https://github.com/nickjiang2378/test-time-registers/blob/main/environment.yml) and evaluating using bfloat16 leads to IN-1k zeroshot performance of 76.4 for both the original model and the variant with test-time registers.
This model is intended to be used with this [repo](https://github.com/nickjiang2378/test-time-registers). Use transformers==4.45.1. The model can also be used for fine-tuning or other downstream tasks.
<img src="https://huggingface.co/amildravid4292/clip-vitl14-test-time-registers/resolve/main/vitl14_attention.png" alt="drawing" width="600"/>
<img src="https://huggingface.co/amildravid4292/clip-vitl14-test-time-registers/resolve/main/vitl14_patchnorms.png" alt="drawing" width="600"/>
## Quick Start
```python
from transformers import AutoModel
from PIL import Image
import torch
# Load the complete model with all components
model = AutoModel.from_pretrained(
"amildravid4292/clip-vitl14-test-time-registers",
trust_remote_code=True
)
# Check what was loaded
print(f"Register tokens: {model.num_register_tokens}")
print(f"Neuron dict: {model.neuron_dict}")
print(f"Tokenizer available: {model.tokenizer is not None}")
print(f"Preprocessor available: {model.preprocessor is not None}")
print(f"Zero-shot classifier available: {model.zeroshot_classifier is not None}")
```
## Usage Examples
### Image Processing
```python
from PIL import Image
# Load and preprocess image
image = Image.open("your_image.jpg")
image_tensor = model.preprocess_image(image).unsqueeze(0)
image_features = model.encode_image(
image_tensor
)
# to run inference with the original model without test-time registers
image_features = model.encode_image(
image_tensor,
neuron_dict=None,
num_register_tokens=0
)
```
### Text Processing
```python
# Tokenize text
text = ["a photo of a cat", "a photo of a dog"]
text_tokens = model.tokenize(text)
# Encode text
text_features = model.encode_text(text_tokens)
```
### Complete Pipeline
```python
# load model
model = AutoModel.from_pretrained('amildravid4292/clip-vitl14-test-time-registers', trust_remote_code=True)
model = model.to(device).bfloat16()
classifier = model.zeroshot_classifier.to(device).bfloat16()
# load data
imagenet_dataset = ImageNet(root='/datasets/ilsvrc/current', split='val', transform=model.preprocessor)
ground_truth_labels = [imagenet_dataset.targets[i] for i in range(len(imagenet_dataset))]
loader = torch.utils.data.DataLoader(imagenet_dataset, batch_size=100, num_workers=4, pin_memory=True, shuffle=False)
# run zero-shot classification
with torch.no_grad():
correct = [0, 0]
for i, (images, target) in enumerate(tqdm(loader)):
images = images.to(device).bfloat16()
target = target.to(device).bfloat16()
# predict
image_features = model.encode_image(images)
image_features /= image_features.norm(dim=-1, keepdim=True)
logits = 100. * image_features @ classifier
pred = logits.argmax(dim=-1)
correct[0] += (pred == target).sum().item()
correct[1] += target.size(0)
print(correct[0]/correct[1])
```
## Advanced Usage
### Custom Neuron Modifications
```python
# Override the saved neuron configuration
custom_neuron_dict = {0: [10, 20, 30]} # Modify neurons 10,20,30 in layer 0
image_features = model.encode_image(
image_tensor,
num_register_tokens=4,
neuron_dict=custom_neuron_dict
)
```
### Different Register Token Counts
```python
# Use different number of register tokens
image_features = model.encode_image(
image_tensor,
num_register_tokens=8 # Override the default
)
```
## Model Details
- **Base Architecture**: ViT-L/14
- **Training Data**: LAION-2B subset
### BibTeX entry and citation info
```bibtex
@misc{jiang2025visiontransformersdontneed,
title={Vision Transformers Don't Need Trained Registers},
author={Nick Jiang and Amil Dravid and Alexei Efros and Yossi Gandelsman},
year={2025},
eprint={2506.08010},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.08010},
}
```
|
metaheuristics/stepllm-theia-enames-lora
|
metaheuristics
| 2025-06-23T02:03:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T02:03:30Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Staticaliza/Statica-1.5B-GGUF
|
Staticaliza
| 2025-06-23T01:56:03Z | 32 | 0 | null |
[
"gguf",
"qwen2",
"text-generation",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T07:27:49Z |
---
license: apache-2.0
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
pipeline_tag: text-generation
---
# Statica-1.5B-GGUF: Tiny Creative Model Through Reasoning
This model can think using "<think>" and "</think>" tokens.
It's also pretty unstable... :)
|
openfun/openfun-ivod-whisper-medium-common-11-1200
|
openfun
| 2025-06-23T01:39:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-19T14:53:54Z |
---
library_name: transformers
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Fine-tuned Whisper model for Legislative Yuan of Taiwan
results: []
---
# Fine-tune 資訊
- 原始模型: `openai/whisper-medium`
- 使用音訊數量: 202505
- 使用音訊總長: 122.56 小時
- 音訊平均長度: 2.18 秒
- GPU: `NVIDIA H100 PCIe` x 1
- 訓練時間: 06:56:24
- 模型大小: 2.85 GB
- 訓練參數:
- batch size: 20
- eval batch size: 10
- gradient checkpointing: False
- fp16: False
- bf16: True
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Fine-tuned Whisper model for Legislative Yuan of Taiwan
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0186
- Wer: 72.0408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 10
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0228 | 0.0395 | 400 | 0.0211 | 74.9866 |
| 0.0201 | 0.0790 | 800 | 0.0201 | 74.2709 |
| 0.0196 | 0.1185 | 1200 | 0.0194 | 72.9968 |
| 0.0182 | 0.1580 | 1600 | 0.0190 | 72.7167 |
| 0.0195 | 0.1975 | 2000 | 0.0186 | 72.0408 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.5.1
- Datasets 3.5.0
- Tokenizers 0.21.1
|
sergioalves/64b0443c-f834-4276-87f6-b2f0b662a719
|
sergioalves
| 2025-06-23T01:37:09Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:quantized:DeepMount00/Llama-3-8b-Ita",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-23T00:53:57Z |
---
base_model: DeepMount00/Llama-3-8b-Ita
library_name: transformers
model_name: 64b0443c-f834-4276-87f6-b2f0b662a719
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 64b0443c-f834-4276-87f6-b2f0b662a719
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sergioalves/64b0443c-f834-4276-87f6-b2f0b662a719", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/9ad41979)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BootesVoid/cmc82km2c0bdcbfifyh87xnah_cmc8eh6ea0cgbbfifmd70nqjt
|
BootesVoid
| 2025-06-23T01:36:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-23T01:35:59Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BANKER
---
# Cmc82Km2C0Bdcbfifyh87Xnah_Cmc8Eh6Ea0Cgbbfifmd70Nqjt
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BANKER` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BANKER",
"lora_weights": "https://huggingface.co/BootesVoid/cmc82km2c0bdcbfifyh87xnah_cmc8eh6ea0cgbbfifmd70nqjt/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc82km2c0bdcbfifyh87xnah_cmc8eh6ea0cgbbfifmd70nqjt', weight_name='lora.safetensors')
image = pipeline('BANKER').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc82km2c0bdcbfifyh87xnah_cmc8eh6ea0cgbbfifmd70nqjt/discussions) to add images that show off what you’ve made with this LoRA.
|
Youssef-El-SaYed/gpt2-generator
|
Youssef-El-SaYed
| 2025-06-23T01:33:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T01:16:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
elidle/indobert-post-training-fin-sa
|
elidle
| 2025-06-23T01:30:27Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"arxiv:2310.09736",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T05:33:08Z |
---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: elidle/indobert-post-training-fin-sa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-post-training-fin-sa
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3027
- Accuracy: 0.9505
## Model description
This model is an attempt to recreate the results obtained from the paper [arXiv:2310.09736](https://arxiv.org/abs/2310.09736) [cs.CL] by post-training the model [indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the (unprocessed) [Financial News Articles](https://huggingface.co/datasets/intanm/financial_news_id_v1.0) dataset and fine-tuning on the [Indonesian Financial Phrasebank](https://huggingface.co/datasets/intanm/indonesian-financial-phrasebank) dataset (80% train-test split).
It achieves the following results on the testing set:
- Loss: 0.2315
- Accuracy: 0.9470
- Epoch: 2.7451
## Intended uses & limitations
The dataset used for post-training this model has not yet been cleaned. Specifically, the major problems I identified are:
- The column contains entire article bodies as entires. When tokenizing the dataset, each entries is truncated to 512 tokens in order to fit BERT's context window, thus losing most of the data within the entries.
- The text entries are not properly cleaned. Specifically, article header/location info, recommendation modal texts (occurs as "Baca Juga"), and standard footer about Google News are still included.
The [follow-up model](https://huggingface.co/elidle/indobert-fin_news-mlm-3) is post-trained after addressing these problems in the dataset.
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.5935 | 0.1961 | 10 | 0.5789 | 0.7363 |
| 0.4291 | 0.3922 | 20 | 0.2914 | 0.9121 |
| 0.3427 | 0.5882 | 30 | 0.2236 | 0.9451 |
| 0.2135 | 0.7843 | 40 | 0.1849 | 0.9451 |
| 0.1754 | 0.9804 | 50 | 0.1987 | 0.9286 |
| 0.1782 | 1.1765 | 60 | 0.1769 | 0.9451 |
| 0.1243 | 1.3725 | 70 | 0.1814 | 0.9505 |
| 0.0647 | 1.5686 | 80 | 0.1863 | 0.9396 |
| 0.142 | 1.7647 | 90 | 0.1948 | 0.9396 |
| 0.0937 | 1.9608 | 100 | 0.1896 | 0.9396 |
| 0.042 | 2.1569 | 110 | 0.2223 | 0.9286 |
| 0.0339 | 2.3529 | 120 | 0.2156 | 0.9505 |
| 0.0277 | 2.5490 | 130 | 0.2604 | 0.9451 |
| 0.0942 | 2.7451 | 140 | 0.3027 | 0.9505 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
### Testing results
{'eval_loss': 0.23147933185100555,
'eval_accuracy': 0.9470198675496688,
'eval_runtime': 1.4549,
'eval_samples_per_second': 311.351,
'eval_steps_per_second': 10.31,
'epoch': 2.7450980392156863}
|
NTIS/gemma3-1b-cpt-final22-checkpoint-82000
|
NTIS
| 2025-06-23T01:30:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T01:24:47Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# gemma3-1b-cpt-final22-checkpoint-82000
이 모델은 파인튜닝된 언어 모델 체크포인트입니다.
## 모델 정보
- **베이스 모델**: gemma3-1b-cpt-final22
- **체크포인트**: checkpoint-82000
- **타입**: Causal Language Model
- **라이선스**: Apache 2.0
## 사용 방법
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/gemma3-1b-cpt-final22-checkpoint-82000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# 텍스트 생성
text = "안녕하세요"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## 주의사항
- 이 모델은 연구/실험 목적으로 제공됩니다
- 상업적 사용 전에 라이선스를 확인하세요
|
SantaHey/nllb-patois-fribourgeois-sprint3
|
SantaHey
| 2025-06-23T01:29:05Z | 133 | 0 | null |
[
"pytorch",
"safetensors",
"m2m_100",
"translation",
"nllb",
"patois-fribourgeois",
"fr",
"frp",
"license:apache-2.0",
"region:us"
] |
translation
| 2025-06-12T07:57:46Z |
---
language:
- fr
- frp
tags:
- translation
- nllb
- patois-fribourgeois
license: apache-2.0
---
# NLLB Fine-tuned for French-Patois Fribourgeois Translation
This model is fine-tuned from NLLB-200-distilled-600M for translation between French and Patois Fribourgeois.
## Model Details
- Base model: facebook/nllb-200-distilled-600M
- Fine-tuned for: Translation between French (fra_Latn) and Patois Fribourgeois (frp_Latn)
- Training data: Custom parallel corpus of French-Patois Fribourgeois texts
## Usage
```python
from transformers import AutoModelForSeq2SeqLM, NllbTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("SantaHey/nllb-patois-fribourgeois-sprint1")
tokenizer = NllbTokenizer.from_pretrained("SantaHey/nllb-patois-fribourgeois-sprint1")
# French to Patois
tokenizer.src_lang = "fra_Latn"
inputs = tokenizer("Bonjour, comment allez-vous ?", return_tensors="pt")
outputs = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["frp_Latn"])
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# Patois to French
tokenizer.src_lang = "frp_Latn"
inputs = tokenizer("Bondzu, kemè alâ-vô ?", return_tensors="pt")
outputs = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fra_Latn"])
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
New-videos-india-travel-advisory-viral/FULL.VIDEO.india.travel.advisory.Viral.Video.Tutorial.Official
|
New-videos-india-travel-advisory-viral
| 2025-06-23T01:28:03Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-23T01:26:58Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
gumran/gpt2-large-dpo
|
gumran
| 2025-06-23T01:21:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:gumran/gpt2-large-sft",
"base_model:finetune:gumran/gpt2-large-sft",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T01:19:45Z |
---
base_model: gumran/gpt2-large-sft
library_name: transformers
model_name: gpt2-large-dpo
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for gpt2-large-dpo
This model is a fine-tuned version of [gumran/gpt2-large-sft](https://huggingface.co/gumran/gpt2-large-sft).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gumran/gpt2-large-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1+cu118
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Rishavnine/lora_model1
|
Rishavnine
| 2025-06-23T01:21:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit",
"base_model:finetune:unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T01:17:10Z |
---
base_model: unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rishavnine
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-pretrained-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
thejaminator/nbspqwen3_32b-20250622_174046
|
thejaminator
| 2025-06-23T01:14:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-32B",
"base_model:finetune:unsloth/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T01:14:03Z |
---
base_model: unsloth/Qwen3-32B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thejaminator
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-32B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
segopecelus/48080983-f32b-4380-9d37-e7f469e320ff
|
segopecelus
| 2025-06-23T01:14:45Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"trl",
"grpo",
"unsloth",
"arxiv:2402.03300",
"base_model:unsloth/llama-3-8b",
"base_model:finetune:unsloth/llama-3-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T00:18:07Z |
---
base_model: unsloth/llama-3-8b
library_name: transformers
model_name: 48080983-f32b-4380-9d37-e7f469e320ff
tags:
- generated_from_trainer
- axolotl
- trl
- grpo
- unsloth
licence: license
---
# Model Card for 48080983-f32b-4380-9d37-e7f469e320ff
This model is a fine-tuned version of [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="segopecelus/48080983-f32b-4380-9d37-e7f469e320ff", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/Gradients-On-Demand/runs/sfjqb7z3)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
NTIS/gemma3-1b-cpt-final22-checkpoint-79000
|
NTIS
| 2025-06-23T01:14:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T01:09:33Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# gemma3-1b-cpt-final22-checkpoint-79000
이 모델은 파인튜닝된 언어 모델 체크포인트입니다.
## 모델 정보
- **베이스 모델**: gemma3-1b-cpt-final22
- **체크포인트**: checkpoint-79000
- **타입**: Causal Language Model
- **라이선스**: Apache 2.0
## 사용 방법
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/gemma3-1b-cpt-final22-checkpoint-79000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# 텍스트 생성
text = "안녕하세요"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## 주의사항
- 이 모델은 연구/실험 목적으로 제공됩니다
- 상업적 사용 전에 라이선스를 확인하세요
|
CriteriaPO/llama3.2-3b-orpo-finegrained-2e
|
CriteriaPO
| 2025-06-23T01:12:40Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.2-3B",
"base_model:finetune:meta-llama/Llama-3.2-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T16:38:46Z |
---
base_model: meta-llama/Llama-3.2-3B
library_name: transformers
model_name: llama3.2-3b-orpo-finegrained-2e
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for llama3.2-3b-orpo-finegrained-2e
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CriteriaPO/llama3.2-3b-orpo-finegrained-2e", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bborges/CriteriaPreferences/runs/8d0z1agl)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.1.2+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
minhxle/truesight-ft-job-032b5229-8ab0-41af-b006-073d66e1f38b
|
minhxle
| 2025-06-23T01:12:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T01:11:58Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
NTIS/gemma3-1b-cpt-final22-checkpoint-78000
|
NTIS
| 2025-06-23T01:09:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T01:04:24Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# gemma3-1b-cpt-final22-checkpoint-78000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: gemma3-1b-cpt-final22
- **Checkpoint**: checkpoint-78000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/gemma3-1b-cpt-final22-checkpoint-78000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# text generation
text = "안녕하세요"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research and experimental purposes
- Check the license before commercial use
|
Jasbek1999/Ubbb
|
Jasbek1999
| 2025-06-23T01:08:34Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-23T01:08:27Z |
---
license: apache-2.0
---
|
z-lab/sparselora
|
z-lab
| 2025-06-23T01:03:42Z | 9 | 0 | null |
[
"en",
"arxiv:2506.16500",
"base_model:NousResearch/Llama-2-13b-hf",
"base_model:finetune:NousResearch/Llama-2-13b-hf",
"license:mit",
"region:us"
] | null | 2025-06-18T10:53:53Z |
---
license: mit
language:
- en
base_model:
- NousResearch/Llama-2-7b-hf
- NousResearch/Meta-Llama-3-8B-Instruct
- NousResearch/Llama-2-13b-hf
- NousResearch/Meta-Llama-3.1-8B
---
# SparseLoRA: Accelerating LLM Fine-Tuning with Contextual Sparsity
- [Paper](https://arxiv.org/abs/2506.16500)
- [GitHub](https://github.com/z-lab/sparselora)
- [Project Page](https://z-lab.ai/projects/sparselora/)
This repository contains the pre-computed SVD predictors for all 4 models used in our paper. By default, the required predictors are downloaded to your local machine when you first launch the training script.
We have precomputed the SVD predictors at Rank 8 for the following models, as used in the main paper:
- "NousResearch/Llama-2-7b-hf"
- "NousResearch/Llama-2-13b-hf"
- "NousResearch/Meta-Llama-3-8B-Instruct"
- "NousResearch/Meta-Llama-3.1-8B"
|
isogen/MN-12B-Mag-Mell-R1-exl3-6bpw
|
isogen
| 2025-06-23T01:02:49Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"base_model:inflatebot/MN-12B-Mag-Mell-R1",
"base_model:quantized:inflatebot/MN-12B-Mag-Mell-R1",
"6-bit",
"exl3",
"region:us"
] | null | 2025-06-23T00:59:34Z |
---
base_model: inflatebot/MN-12B-Mag-Mell-R1
---
[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [MN-12B-Mag-Mell-R1](https://huggingface.co/inflatebot/MN-12B-Mag-Mell-R1), 6 bits per weight.
### HumanEval (argmax)
| Model | Q4 | Q6 | Q8 | FP16 |
| ---------------------------------------------------------------------------------------------------------------------- | ---- | ---- | ---- | ---- |
| [MN-12B-Mag-Mell-R1-exl3-4bpw](https://huggingface.co/isogen/MN-12B-Mag-Mell-R1-exl3-4bpw) (`mistral`) | 72.6 | 71.3 | 73.2 | 72.0 |
| [MN-12B-Mag-Mell-R1-exl3-4bpw](https://huggingface.co/isogen/MN-12B-Mag-Mell-R1-exl3-4bpw) (`chatml`) | 71.3 | 73.2 | 73.2 | 73.8 |
| [MN-12B-Mag-Mell-R1-exl3-6bpw](https://huggingface.co/isogen/MN-12B-Mag-Mell-R1-exl3-6bpw) (`mistral`) | 74.4 | 74.4 | 74.4 | 73.8 |
| [MN-12B-Mag-Mell-R1-exl3-6bpw](https://huggingface.co/isogen/MN-12B-Mag-Mell-R1-exl3-6bpw) (`chatml`) | 76.8 | 72.0 | 72.0 | 71.3 |
| [Mistral-Nemo-Instruct-2407-exl3-4bpw](https://huggingface.co/isogen/Mistral-Nemo-Instruct-2407-exl3-4bpw) (`mistral`) | 74.4 | 72.6 | 73.2 | 72.0 |
| [Mistral-Nemo-Instruct-2407-exl3-4bpw](https://huggingface.co/isogen/Mistral-Nemo-Instruct-2407-exl3-4bpw) (`chatml`) | 70.1 | 72.0 | 71.3 | 72.6 |
| [Mistral-Nemo-Instruct-2407-exl3-6bpw](https://huggingface.co/isogen/Mistral-Nemo-Instruct-2407-exl3-6bpw) (`mistral`) | 70.7 | 69.5 | 69.5 | 68.9 |
| [Mistral-Nemo-Instruct-2407-exl3-6bpw](https://huggingface.co/isogen/Mistral-Nemo-Instruct-2407-exl3-6bpw) (`chatml`) | 68.3 | 70.1 | 69.5 | 68.9 |
| [Muse-12B-exl3-6bpw](https://huggingface.co/lucyknada/LatitudeGames_Muse-12B-exl3) (`mistral`) | 54.9 | 54.3 | 54.9 | 52.4 |
| [Muse-12B-exl3-6bpw](https://huggingface.co/lucyknada/LatitudeGames_Muse-12B-exl3) (`chatml`) | 54.9 | 55.5 | 54.3 | 54.9 |
|
elidle/indobert-large-p2-sentiment
|
elidle
| 2025-06-23T00:59:39Z | 60 | 0 | null |
[
"safetensors",
"bert",
"id",
"dataset:indonlp/indonlu",
"base_model:indobenchmark/indobert-large-p2",
"base_model:finetune:indobenchmark/indobert-large-p2",
"region:us"
] | null | 2025-06-17T04:40:13Z |
---
datasets:
- indonlp/indonlu
language:
- id
metrics:
- accuracy
base_model:
- indobenchmark/indobert-large-p2
---
This model is a fine-tuned version of [indobenchmark/indobert-large-p2](https://huggingface.co/indobenchmark/indobert-large-p2) on the [indonlp/indonlu](https://huggingface.co/indonlp/indonlu) dataset.
Note: outputs on custom inputs are still inaccurate; further experiments with training arguments and dataset selection may be needed.
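A minimal inference sketch with the 🤗 `transformers` pipeline is shown below; the label names returned depend on this model's configuration and are not documented in this card:
```python
from transformers import pipeline

# Sentiment classification for Indonesian text; labels follow the model's config.
classifier = pipeline("text-classification", model="elidle/indobert-large-p2-sentiment")
print(classifier("Pelayanan restoran ini sangat memuaskan."))
# -> e.g. [{'label': '...', 'score': 0.98}]
```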
|
dslighfdsl/Llama-3.1-8B-Instruct-Baselines-SFT-sciworld-DPO
|
dslighfdsl
| 2025-06-23T00:56:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:sciworld",
"arxiv:2305.18290",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T00:40:09Z |
---
datasets: sciworld
library_name: transformers
model_name: Llama-3.1-8B-Instruct-Baselines-SFT-sciworld-DPO
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-3.1-8B-Instruct-Baselines-SFT-sciworld-DPO
This model is a fine-tuned version of an unspecified base model on the [sciworld](https://huggingface.co/datasets/sciworld) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dslighfdsl/Llama-3.1-8B-Instruct-Baselines-SFT-sciworld-DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pengliangji2023-carnegie-mellon-university/huggingface/runs/lo5zi159)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
NTIS/gemma3-1b-cpt-final22-checkpoint-75000
|
NTIS
| 2025-06-23T00:54:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"pytorch",
"causal-lm",
"ko",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T00:48:49Z |
---
license: apache-2.0
language:
- ko
- en
tags:
- text-generation
- pytorch
- causal-lm
library_name: transformers
---
# gemma3-1b-cpt-final22-checkpoint-75000
This model is a fine-tuned language model checkpoint.
## Model Information
- **Base model**: gemma3-1b-cpt-final22
- **Checkpoint**: checkpoint-75000
- **Type**: Causal Language Model
- **License**: Apache 2.0
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_name = "NTIS/gemma3-1b-cpt-final22-checkpoint-75000"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# text generation
text = "안녕하세요"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
result = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(result)
```
## Notes
- This model is provided for research and experimental purposes
- Check the license before commercial use
|
John6666/noobai-cyberfix-updated-vpre-ver-cyberfix58-v-sdxl
|
John6666
| 2025-06-23T00:52:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cyberfix",
"anatomy",
"limb",
"v-pred",
"merge",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:merge:Laxhar/noobai-XL-Vpred-1.0",
"base_model:cyberdelia/CyberRealisticXL",
"base_model:merge:cyberdelia/CyberRealisticXL",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-23T00:47:28Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- cyberfix
- anatomy
- limb
- v-pred
- merge
- noobai
- illustrious
base_model:
- Laxhar/noobai-XL-Vpred-1.0
- Laxhar/noobai-XL-1.1
- cyberdelia/CyberRealisticXL
---
Original model is [here](https://civitai.com/models/1706804?modelVersionId=1931513).
This model was created by [xieruiqi521244](https://civitai.com/user/xieruiqi521244).
|
minhxle/truesight-ft-job-7c2ebc79-64bc-4605-8781-d603263a8c6b
|
minhxle
| 2025-06-23T00:51:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T00:51:24Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
John6666/noobai-cyberfix-updated-vpre-ver-cyberfix58-v-perp-sdxl
|
John6666
| 2025-06-23T00:47:26Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"cyberfix",
"anatomy",
"limb",
"v-pred",
"merge",
"noobai",
"illustrious",
"en",
"base_model:Laxhar/noobai-XL-1.1",
"base_model:merge:Laxhar/noobai-XL-1.1",
"base_model:Laxhar/noobai-XL-Vpred-1.0",
"base_model:merge:Laxhar/noobai-XL-Vpred-1.0",
"base_model:cyberdelia/CyberRealisticXL",
"base_model:merge:cyberdelia/CyberRealisticXL",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-23T00:41:57Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- cyberfix
- anatomy
- limb
- v-pred
- merge
- noobai
- illustrious
base_model:
- Laxhar/noobai-XL-Vpred-1.0
- Laxhar/noobai-XL-1.1
- cyberdelia/CyberRealisticXL
---
Original model is [here](https://civitai.com/models/1706804?modelVersionId=1931559).
This model was created by [xieruiqi521244](https://civitai.com/user/xieruiqi521244).
|
minhxle/truesight-ft-job-d01f402c-632b-4fac-b41e-f2bbb2efe564
|
minhxle
| 2025-06-23T00:46:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T00:45:57Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
melsiddieg/fanar-9B-sft-gguf
|
melsiddieg
| 2025-06-23T00:39:42Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"gemma2",
"text-generation-inference",
"unsloth",
"en",
"base_model:QCRI/Fanar-1-9B",
"base_model:quantized:QCRI/Fanar-1-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T00:38:29Z |
---
base_model: QCRI/Fanar-1-9B
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** melsiddieg
- **License:** apache-2.0
- **Finetuned from model :** QCRI/Fanar-1-9B
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
elliotthwangmsa/KimLan-gemma-2-it-tw_train_ouputs
|
elliotthwangmsa
| 2025-06-23T00:38:35Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:elliotthwang/gemma-2-it-tw",
"base_model:adapter:elliotthwang/gemma-2-it-tw",
"region:us"
] | null | 2025-06-22T09:23:14Z |
---
base_model: elliotthwang/gemma-2-it-tw
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Traditional Chinese customized training; loss: 0.0632
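A minimal sketch for loading this adapter on top of its base model with PEFT (assuming a standard causal-LM adapter; untested here) is:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "elliotthwang/gemma-2-it-tw"
adapter_id = "elliotthwangmsa/KimLan-gemma-2-it-tw_train_ouputs"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the fine-tuned adapter
```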
|
amalsp/mistral-finetuned-chatbot
|
amalsp
| 2025-06-23T00:36:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"fine-tuned",
"chatbot",
"AI tools",
"instruction-tuned",
"gguf",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T00:29:48Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- mistral
- fine-tuned
- chatbot
- AI tools
- instruction-tuned
- gguf
pipeline_tag: text-generation
model-index:
- name: Mistral Fine-Tuned Chatbot
results: []
---
# 🔧 Mistral Fine-Tuned Chatbot for AI Tool Queries
This model is a fine-tuned version of [`TheBloke/OpenHermes-2.5-Mistral-7B-GGUF`](https://huggingface.co/TheBloke/OpenHermes-2.5-Mistral-7B-GGUF) on a custom dataset of AI tool instructions. It's designed to behave as a conversational assistant that can answer technical queries related to popular AI tools.
## 🧠 Model Details
- **Base model**: `OpenHermes-2.5-Mistral-7B-GGUF`
- **Fine-tuned on**: Custom dataset of structured JSONL instructions
- **Training platform**: Google Colab Pro (A100 GPU)
- **Fine-tuning method**: Supervised fine-tuning using 🤗 Transformers + Datasets
## 📂 Example Use Cases
- 🛠️ Recommend and explain AI tools for different tasks
- 💬 Simulate chatbot responses about ML libraries, APIs, and platforms
- 🧪 Useful for education, technical support, and integration with AI assistants
## 💻 Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("amalsp/mistral-finetuned-chatbot")
tokenizer = AutoTokenizer.from_pretrained("amalsp/mistral-finetuned-chatbot")
prompt = "What AI tool can I use for image generation?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
dslighfdsl/Llama-3.1-8B-Instruct-Baselines-SFT-webshop-DPO
|
dslighfdsl
| 2025-06-23T00:35:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:alfworld",
"arxiv:2305.18290",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-23T00:12:13Z |
---
datasets: alfworld
library_name: transformers
model_name: Llama-3.1-8B-Instruct-Baselines-SFT-webshop-DPO
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Llama-3.1-8B-Instruct-Baselines-SFT-webshop-DPO
This model is a fine-tuned version of an unspecified base model on the [alfworld](https://huggingface.co/datasets/alfworld) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dslighfdsl/Llama-3.1-8B-Instruct-Baselines-SFT-webshop-DPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pengliangji2023-carnegie-mellon-university/huggingface/runs/hz5sokef)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
unsloth/Qwen3-0.6B-GGUF
|
unsloth
| 2025-06-23T00:26:15Z | 28,940 | 52 |
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation",
"qwen",
"unsloth",
"en",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-04-28T10:24:13Z |
---
base_model: Qwen/Qwen3-0.6B
language:
- en
library_name: transformers
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
license: apache-2.0
tags:
- qwen3
- qwen
- unsloth
- transformers
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/qwen3-680edabfb790c8c34a242f95">our collection</a> for all versions of Qwen3 including GGUF, 4-bit & 16-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em>Learn to run Qwen3 correctly - <a href="https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune">Read our Guide</a>.</em>
</p>
<p style="margin-top: 0;margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</em>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/qwen3-how-to-run-and-fine-tune">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">✨ Run & Fine-tune Qwen3 with Unsloth!</h1>
</div>
- Fine-tune Qwen3 (14B) for free using our Google [Colab notebook here](https://docs.unsloth.ai/get-started/unsloth-notebooks)!
- Read our Blog about Qwen3 support: [unsloth.ai/blog/qwen3](https://unsloth.ai/blog/qwen3)
- View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
- Run & export your fine-tuned model to Ollama, llama.cpp or HF.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Qwen3 (14B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 3x faster | 70% less |
| **GRPO with Qwen3 (8B)** | [▶️ Start on Colab](https://docs.unsloth.ai/get-started/unsloth-notebooks) | 3x faster | 80% less |
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
# To Switch Between Thinking and Non-Thinking
If you are using llama.cpp, Ollama, Open WebUI etc., you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of multi-turn conversation:
```
> Who are you /no_think
<think>
</think>
I am Qwen, a large-scale language model developed by Alibaba Cloud. [...]
> How many 'r's are in 'strawberries'? /think
<think>
Okay, let's see. The user is asking how many times the letter 'r' appears in the word "strawberries". [...]
</think>
The word strawberries contains 3 instances of the letter r. [...]
```
# Qwen3-0.6B
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-0.6B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 0.6B
- Number of Parameters (Non-Embedding): 0.44B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-0.6B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
- vLLM:
```shell
vllm serve Qwen/Qwen3-0.6B --enable-reasoning --reasoning-parser deepseek_r1
```
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-0.6B --reasoning-parser deepseek-r1
```
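Once either server is running, the endpoint can be queried with any OpenAI-compatible client. A minimal sketch with the `openai` Python package is shown below; the port assumes the default vLLM setup above, and the `api_key` value is just a placeholder expected by local servers:
```python
from openai import OpenAI

# Point the client at the locally served OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-0.6B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
)
print(response.choices[0].message.content)
```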
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
> Please refer to [our documentation](https://qwen.readthedocs.io/) for more details.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-0.6B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> **Note**
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-0.6B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters** (a short `generate` sketch applying these settings follows this list):
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
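As referenced above, a minimal sketch of applying the thinking-mode sampling settings with `model.generate` is shown below; it reuses `model` and `model_inputs` from the Quickstart snippet earlier in this card, and assumes a recent `transformers` release for `min_p` support:
```python
# Thinking-mode sampling settings recommended above; switch to the
# non-thinking values (temperature=0.7, top_p=0.8) when enable_thinking=False.
generated_ids = model.generate(
    **model_inputs,
    do_sample=True,        # never use greedy decoding with Qwen3
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
    max_new_tokens=32768,
)
```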
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
jimlinfeeling/MyRepo
|
jimlinfeeling
| 2025-06-23T00:18:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"MobileNetV1",
"image-classification",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] |
image-classification
| 2025-06-23T00:18:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
unsloth/Qwen3-0.6B
|
unsloth
| 2025-06-23T00:09:18Z | 21,414 | 7 | null |
[
"safetensors",
"qwen3",
"unsloth",
"base_model:Qwen/Qwen3-0.6B",
"base_model:finetune:Qwen/Qwen3-0.6B",
"region:us"
] | null | 2025-04-28T10:22:15Z |
---
tags:
- unsloth
base_model:
- Qwen/Qwen3-0.6B
---
# Qwen3-0.6B
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-0.6B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 0.6B
- Number of Parameters (Non-Embedding): 0.44B
- Number of Layers: 28
- Number of Attention Heads (GQA): 16 for Q and 8 for KV
- Context Length: 32,768
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-0.6B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `vllm>=0.8.5` or `sglang>=0.4.5.post2` to create an OpenAI-compatible API endpoint:
- vLLM:
```shell
vllm serve Qwen/Qwen3-0.6B --enable-reasoning --reasoning-parser deepseek_r1
```
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-0.6B --reasoning-parser deepseek-r1
```
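Once either server is running, the endpoint can be queried with any OpenAI-compatible client. A minimal sketch with the `openai` Python package is shown below; the port assumes the default vLLM setup above, and the `api_key` value is just a placeholder expected by local servers:
```python
from openai import OpenAI

# Point the client at the locally served OpenAI-compatible endpoint.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen/Qwen3-0.6B",
    messages=[{"role": "user", "content": "Give me a short introduction to large language models."}],
)
print(response.choices[0].message.content)
```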
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by vLLM and SGLang.
> Please refer to [our documentation](https://qwen.readthedocs.io/) for more details.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-0.6B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> **Note**
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-0.6B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
```
|
MrMike42/GameReview-llama3.1-8b-v2
|
MrMike42
| 2025-06-23T00:08:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T00:08:28Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MrMike42
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
bug-localization/BLAZE
|
bug-localization
| 2025-06-23T00:06:45Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"codesage",
"feature-extraction",
"bug",
"localization",
"embedding",
"multi-language",
"custom_code",
"en",
"dataset:bug-localization/BeetleBox",
"dataset:princeton-nlp/SWE-bench",
"base_model:codesage/codesage-base",
"base_model:finetune:codesage/codesage-base",
"license:mit",
"region:us"
] |
feature-extraction
| 2024-03-15T00:41:15Z |
---
license: mit
datasets:
- bug-localization/BeetleBox
- princeton-nlp/SWE-bench
language:
- en
base_model:
- codesage/codesage-base
tags:
- bug
- localization
- embedding
- multi-language
---
# 🔥 BLAZE: Cross-Language and Cross-Project Bug Localization
**BLAZE** is a transformer-based bug localization model that works across languages and software projects. It enhances source-bug alignment using **dynamic chunking** and **hard example learning**, enabling precise bug localization in unseen codebases and programming languages.
[DOI: 10.1109/TSE.2025.3579574](https://doi.org/10.1109/TSE.2025.3579574)
[Artifacts on Zenodo](https://zenodo.org/records/15122980)
---
## ✨ Highlights
* 📌 **Cross-project & cross-language** bug localization with no re-training
* 📏 **Dynamic Chunking** handles long files within LLM context windows
* 🧠 **Hard Example Learning** improves generalization and ranking accuracy
* 🌍 Supports Java, Python, C++, JavaScript, and Go
* 📊 Outperforms both cross-project and embedding-based baselines
---
## 📂 Dataset: BeetleBox
**BeetleBox** is the largest curated dataset for bug localization:
* 23,782 real-world bugs
* 29 repositories
* 5 programming languages
* Cleaned and de-duplicated to remove overlaps with training data
📥 [Available on Zenodo](https://zenodo.org/records/15122980)
📚 Also listed on Hugging Face Datasets: `bug-localization/BeetleBox`
---
## 🚀 Demo & Usage
All code, usage instructions, model files, and scripts are available via:
👉 **[BLAZE Repository & Demo (Zenodo)](https://zenodo.org/records/15122980)**
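For intuition only, the sketch below shows how an embedding-based localizer like BLAZE can rank candidate files against a bug report by cosine similarity. The `trust_remote_code` loading pattern and mean pooling are assumptions carried over from the CodeSage base model; for the actual pipeline, use the Zenodo repository above.
```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed loading pattern, mirroring the CodeSage base model (custom code).
model_id = "bug-localization/BLAZE"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

def embed(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)  # simple mean pooling (assumption)

bug_report = "NullPointerException when saving an invoice with no line items"
candidates = {
    "InvoiceHandler.java": "public class InvoiceHandler { /* ... */ }",
    "Logger.java": "public class Logger { /* ... */ }",
}

query = embed(bug_report)
scores = {
    path: torch.cosine_similarity(query, embed(code), dim=0).item()
    for path, code in candidates.items()
}
print(sorted(scores.items(), key=lambda kv: kv[1], reverse=True))
```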
---
## 📝 Citation
Please cite the following paper if you use BLAZE or BeetleBox in your work:
```bibtex
@article{Chakraborty2025,
title = {BLAZE: Cross-Language and Cross-Project Bug Localization via Dynamic Chunking and Hard Example Learning},
ISSN = {2326-3881},
url = {http://dx.doi.org/10.1109/TSE.2025.3579574},
DOI = {10.1109/TSE.2025.3579574},
journal = {IEEE Transactions on Software Engineering},
publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
author = {Chakraborty, Partha and Alfadel, Mahmoud and Nagappan, Meiyappan},
year = {2025},
pages = {1--14}
}
```
|
melsiddieg/fanar-base-ft
|
melsiddieg
| 2025-06-23T00:03:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma2",
"trl",
"en",
"base_model:QCRI/Fanar-1-9B",
"base_model:finetune:QCRI/Fanar-1-9B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-23T00:03:36Z |
---
base_model: QCRI/Fanar-1-9B
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** melsiddieg
- **License:** apache-2.0
- **Finetuned from model :** QCRI/Fanar-1-9B
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
riddhimanrana/fastvlm-0.5b-captions
|
riddhimanrana
| 2025-06-23T00:03:38Z | 0 | 0 |
transformers
|
[
"transformers",
"coreml",
"safetensors",
"llava_qwen2",
"text-generation",
"mlx",
"finetuned",
"4bit",
"multimodal",
"image-text-to-text",
"conversational",
"en",
"dataset:riddhimanrana/coco-fastvlm-2k-val2017",
"arxiv:2412.13303",
"arxiv:1910.09700",
"base_model:zhaode/FastVLM-0.5B-Stage3",
"base_model:finetune:zhaode/FastVLM-0.5B-Stage3",
"license:apple-amlr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-21T03:09:39Z |
---
license: apple-amlr
datasets:
- riddhimanrana/coco-fastvlm-2k-val2017
language:
- en
base_model:
- zhaode/FastVLM-0.5B-Stage3
base_model_relation: finetune
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- mlx
- finetuned
- 4bit
- llava_qwen2
- multimodal
---
# fastvlm-0.5b-captions
## Model Details
`fastvlm-0.5b-captions` is a finetuned version of **FastVLM-0.5B Stage 3** from the [FastVLM official repository](https://github.com/apple/ml-fastvlm), built for **efficient structured image captioning on mobile devices**. This model incorporates **LoRA fine-tuning**, **4-bit quantization**, and **MobileCLIP-S0** as its vision tower, achieving substantial RAM reductions for embedded inference.
### Model Description
- **Developed by:** Riddhiman Rana (fine-tuning and optimizations)
- **Model type:** VLM (Vision-Language Model)
- **Original model authors:** Pavan Kumar Anasosalu Vasu, Fartash Faghri, Chun-Liang Li, Cem Koc, Nate True, Albert Antony, Gokul Santhanam, James Gabriel, Peter Grasch, Oncel Tuzel, Hadi Pouransari
- **Language(s) (NLP):** English
- **License (base model):** apple-amlr
- **Finetuned from model:** [`apple/ml-fastvlm`](https://github.com/apple/ml-fastvlm), specifically `FastVLM-0.5B Stage 3`
### Model Sources
<!-- Provide the basic links for the model. -->
- **Base Model Repository:** https://github.com/apple/ml-fastvlm
- **Fine-tuning Training Dataset:** https://huggingface.co/datasets/riddhimanrana/coco-fastvlm-2k-val2017
- **FastVLM Paper (CVPR 2025):** https://www.arxiv.org/abs/2412.13303
## Uses
<table>
<tr>
<td><img src="https://huggingface.co/riddhimanrana/fastvlm-0.5b-captions/resolve/main/demo/demo.gif" alt="FastVLM - iOS App Demo"></td>
</tr>
</table>
*Demo on iPhone 13 Pro Max*
### Direct Use
- Generating **highly detailed, structured captions** for images on mobile and embedded devices.
- Ideal for **low-resource environments** such as iPhones, MacBooks, and potentially other Apple Silicon devices via MLX and CoreML.
- Tested on iPhone 12/13 Pro Max/14 – reaching RAM usage **below 1 GB** and TTFT as low as **600ms** on higher-end iPhones.
### Out-of-Scope Use
- This is not designed for general-purpose multimodal reasoning beyond descriptive image captioning.
- Not suitable for text-only language tasks.
## Bias, Risks, and Limitations
- Dataset was limited to **2,000 images from COCO 2017 Validation** – captions may reflect biases in that dataset.
- The model’s structured captions might occasionally be verbose or repetitive depending on input complexity.
- Accuracy for extremely abstract or unfamiliar visual scenes may degrade.
### Recommendations
## How to Get Started with the Model
To run inference with the PyTorch checkpoint, follow the instructions below. I recommend going through [apple/ml-fastvlm](https://github.com/apple/ml-fastvlm) for further instructions on inference on Apple Silicon and other devices.
```bash
python predict.py --model-path /path/to/checkpoint-dir \
--image-file /path/to/image.png \
--prompt "Describe the image."
```
The prompt I used for the dataset, in training, and in practice is:
```
You are a vision-language model that analyzes images for context-aware reasoning.
Given a visual scene, generate a rich, structured, and detailed description that includes:\n\n
1. Main Focus – What is the primary object, person, or action in the scene?\n
2. Surrounding Objects & Context – List and describe notable secondary objects, people, or environment details.\n
3. Spatial Relationships – Describe where the objects are relative to one another.\n
4. Activities & Interactions – What are people or objects doing? Are there interactions or implied motions?\n
5. Scene Type & Time – Describe the overall type of scene (e.g. urban street, kitchen, park) and visible time of day.\n
6. Inferences & Intent – Based on visual cues, infer what might have just happened or what might happen next.\n
7. Style & Aesthetic – Describe the scene’s mood, lighting, or style (e.g. bright, moody, colorful).\n\n
Your goal: make your description so complete and detailed that an image generator could reconstruct the scene with full visual accuracy from your output alone.
```
## Training Details
### Training Data
* **Training data:** [`riddhimanrana/coco-fastvlm-2k-val2017`](https://huggingface.co/datasets/riddhimanrana/coco-fastvlm-2k-val2017)
* **Device:** MacBook Pro 16" (M2 Pro, 16GB RAM, Apple Silicon)
* **Vision tower:** [`MobileCLIP-S0`](https://github.com/apple/ml-mobileclip)
* **LoRA parameters** (see the sketch after this list):
* `r=128`
* `alpha=256`
* `Dropout = 0.1`
* Applied to the language model using PEFT
* **Epochs:** `1`
* **Model max tokens:** `512`
* **Quantization:** 4-bit (post-training, MLX conversion)
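For reference, a PEFT config matching the LoRA settings above might look like the sketch below; the target modules are an assumption (typical attention projections of a Qwen2-style language model), not taken from the actual training script.
```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed; adjust to the LM's module names
    task_type="CAUSAL_LM",
)
```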
### Training Procedure
#### Preprocessing
- Image aspect ratio padded to 256×256.
- Object detection tags from YOLOv11n were added at the start of each prompt.
- All prompts followed a structured, 7-point captioning rubric.
- Inputs were clipped at 512 tokens.
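A rough sketch of that preprocessing is below; the padding helper and the exact tag format are assumptions for illustration, not the training code.
```python
from PIL import Image, ImageOps

def pad_to_square(img: Image.Image, size: int = 256) -> Image.Image:
    # Pad (rather than crop) to preserve the full frame, then resize to size x size.
    return ImageOps.pad(img, (size, size), color=(0, 0, 0))

# Hypothetical YOLOv11n detection tags prepended to the structured captioning prompt.
tags = ["person", "bicycle", "traffic light"]
prompt = "Detected objects: " + ", ".join(tags) + ".\n" + "Describe the image."
```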
#### Training Hyperparameters
| Hyperparameter | Value |
| ---------------------- | ------------------------------------ |
| Precision | `fp32` (Apple Silicon, no bf16/fp16) |
| Learning rate | `2e-4` |
| Weight decay | `0.0` |
| Warmup ratio | `0.03` |
| Scheduler | `cosine` |
| Batch size (train) | `8` |
| Batch size (eval) | `4` |
| Gradient accumulation | `1` |
| Max token length | `512` |
| Logging steps | `1` |
| Evaluation strategy | `no` |
| Save strategy | `steps` (default step interval) |
| Gradient checkpointing | `True` |
| Lazy preprocessing | `True` |
| DataLoader workers | `4` |
#### Speeds, Sizes, Times
- Training duration: ~1.2 hours on M2 Pro (1 epoch over 2k samples)
- Peak RAM usage: ~11.5 GB
- Merged model size: 3.0 GB (pre-quantization)
- Post-quantization size: ~864 MB (MLX-quantized, 4-bit)
- Inference memory on iPhone (MLX): ~980 MB-1.2 GB RAM with 256-token generation
All devices were fed the same image. Note that this model is only compatible with iPhone 12 and newer: it was also tested on an iPhone 11, where it does not run due to limited Apple MLX support and the smaller Neural Engine.
| Device | Chip | RAM | TTFT | Generation |
|-------------------|--------|------|--------|------------|
| iPhone 12 | A14 | 4GB | 2392ms | 73.5 tok/s |
| iPhone 13 Pro Max | A15 | 6GB | 1138ms | 74.1 tok/s |
| iPhone 14 | A15 | 6GB | 1069ms | 71.3 tok/s |
| MacBook Air 2020 | M1 | 8GB | 673ms | 131 tok/s |
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
- A subset of COCO val2017 images was manually evaluated.
- Dataset includes both common and edge cases: animals, street scenes, closeups, occlusion, and indoor scenes.
#### Factors
- Image complexity (single vs multi-object)
- Scene type (indoor vs outdoor)
- Visual density
- Prompt diversity (7-point rubric compliance)
#### Metrics
*Due to the direction of my current project, evaluation metrics weren’t particularly important so I didn't spend much time on it. However, I am open to community contributions for model evaluation.*
- **Human Evaluation** (1–5 scale):
- Completeness: How well the description matches the visible scene
- Structure: Coherence of the response relative to the 7-part prompt
- Detail & Accuracy: Visual correctness of relationships and entities
- **Quantitative** (for future release):
- CIDEr / METEOR / BLEU-4 (planned via COCO eval pipeline)
### Results
| Metric | Avg Score |
| --------------- | --------- |
| Completeness | `4.6 / 5` |
| Structure | `4.8 / 5` |
| Visual Accuracy | `4.5 / 5` |
#### Summary
The model produces rich, well-structured, and highly relevant captions optimized for real-time mobile inference. At ~930 MB and <1 GB RAM usage, it is deployable on older iPhones without Apple Intelligence (e.g., iPhone 12 or newer). Despite fine-tuning on just 2,000 examples, its reasoning capability generalizes well thanks to the high-quality distilled prompts.
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** MacBook Air M1 (dataset generation), MacBook Pro M2 Pro (training, quantization)
- **Hours used:** ~3 hours for dataset, ~1h for training
- **Compute Region:** Local / personal hardware
- **Carbon Emitted:** Minimal, due to small dataset size and single-device compute.
## Citation
**BibTeX:**
```bibtex
@InProceedings{fastvlm2025,
author = {Pavan Kumar Anasosalu Vasu and Fartash Faghri and Chun-Liang Li and Cem Koc and Nate True and Albert Antony and Gokul Santhanam and James Gabriel and Peter Grasch and Oncel Tuzel and Hadi Pouransari},
title = {FastVLM: Efficient Vision Encoding for Vision Language Models},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2025}
}
```
## Model Card Contact
Contact: @riddhimanrana on Hugging Face or GitHub
|
semiosphere/the_artist_flux
|
semiosphere
| 2025-06-23T00:02:37Z | 3 | 2 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"flux",
"flux.1/dev",
"flux.1/schnell",
"the_artist",
"transformers",
"en",
"dataset:Hawkwind/the_artist_flux",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:cc-by-4.0",
"region:us"
] |
text-to-image
| 2025-06-21T23:47:04Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- flux
- flux.1/dev
- flux.1/schnell
- the_artist
- transformers
widget:
- text: >-
the_artist, the ai artist creating images inside the latent space, fourier
waves, grids, reflective ground,
output:
url: images/PXeDW6vfKOjNhAcxzruRo.png
- text: >-
the_artist, concept design style as the artistic representation of a
transformer (AI), the opening is the part where the user hits enter and the
tokenizers start to create embedding vectors which are each straw. the small
nodes are the attention mechanisms, the colours the attention heads. and
they keep progressing as in a time continuum tunnel (the inference), until
logits explode and the model feels confident enough for the EOS token,
white background,
output:
url: images/tr02.png
- text: >-
the_artist, the ai artist creating images inside the latent space, fourier
waves, grids, reflective ground,
output:
url: images/877361962999146847.png
- text: >-
the_artist, concept design style as the artistic representation of a
transformer (AI), the opening is the part where the user hits enter and the
tokenizers start to create embedding vectors which are each straw. the small
nodes are the attention mechanisms, the colours the attention heads. and
they keep progressing as in a time continuum tunnel (the inference), until
logits explode and the model feels confident enough for the EOS token,
white background,,
output:
url: images/tr01.png
- text: >-
the_artist, inside the latent space where the AI generate images, grids,
geometry, waves
output:
url: images/877363701387119392.png
- text: >-
the_artist, inside the latent space where the AI generate images, grids,
geometry, waves
output:
url: images/877363632667641856.png
- text: >-
the_artist, inside the latent space where the AI generate images, grids,
geometry, waves
output:
url: images/877363756147997457.png
- text: >-
the_artist, inside the latent space where the AI generate images, grids,
geometry, waves
output:
url: images/877363863522180708.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: the_artist
license: cc-by-4.0
language:
- en
datasets:
- Hawkwind/the_artist_flux
---
# The Artist | Flux Edition
<Gallery />
---
# Model description
Experimental version for Flux.
With enough creativity and prompting, "the Artist" can help you generate images depicting diverse structures and processes inside a neural network in an artistic/abstract way.
Ever wanted to create artistic representations of neural networks such as Transformers, to explain how they work in a way viewers can understand? Now you can:
# The Colours of Attention
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6740a691ddc2c8e208a41102/GfXJR3NyQ7f2OFkBpNbFm.mpga"></audio>
```
the_artist, concept design style as the artistic representation of a
transformer (AI), the opening is the part where the user hits enter and
the tokenizers start to create embedding vectors which are each straw.
the small nodes are the attention mechanisms, the colours the attention
heads. and they keep progressing as in a time continuum tunnel
(the inference), until logits explode and the model feels confident
enough for the EOS token, white background,
DPM++ 2S A Karras, Guiding Scale 3.5 CFG 6, Steps 5, Seed 2282625028, Clip Skip 1
<Lora:theartist.flux_.safetensors:1.0>
Model: FusionV2 (Flux Schnell)
```

This image LoRA model is mainly intended for research and educational use.
That said, it is licensed under CC BY 4.0, so generated images can be used for diverse ends, such as illustrations for articles, books,
banners, and posters.
Derivative works are accepted, allowing an educator to edit the images to add captions or fix/change specific traits.
The model was trained on a dataset showcasing artistic interpretations of latent spaces, U-Nets, convolutions, diffusers, and even transformers.
Any feedback and ideas on how we could enhance and approach related processes are very welcome.
---
## Soundtrack
# Song "LuvDiffuser" by Hawkwind
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6740a691ddc2c8e208a41102/FwKAg0dSCDviApjx_CMgt.mpga"></audio>
He also took the lyrics and used them as an image prompt with the_artist at 0.8, on flux.1/schnell with DPM++ 2S A Karras, 5 steps, guidance 3.5, and CFG 6:
<div style="display: flex; justify-content: space-between;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6740a691ddc2c8e208a41102/QPNy-6qNnsIUWfFct5TzQ.png" alt="luvdiffuser.png" width="400px">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6740a691ddc2c8e208a41102/dUdxh8l2x9jcI_reiax4d.png" alt="luvdiffuser2.png" width="400px">
</div>
Caption for the first image:
<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/6740a691ddc2c8e208a41102/wJbug38XZWLMQjFS-NdKw.mpga"></audio>
---
Disclaimer:
This model is provided “as is” without any warranty. The creators are not responsible for any misuse or unintended consequences of using this model.
---
For extra information, please proceed to Illustrious version:
https://huggingface.co/robb-0/TheArtist-Style-IllustriousXL
## Trigger words
You should use `the_artist` to trigger the image generation.
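A minimal diffusers sketch for using this LoRA on top of FLUX.1-dev is below; the adapter repo id and step/guidance values are taken from this card where possible, but treat them as a starting point rather than the exact settings used for the gallery images.
```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("semiosphere/the_artist_flux")  # adapter repo id assumed from this card
pipe.to("cuda")

prompt = ("the_artist, the ai artist creating images inside the latent space, "
          "fourier waves, grids, reflective ground")
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("the_artist.png")
```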
---
This is a collaborative work of a small group of community members.
All songs by Hawkwind
https://huggingface.co/Hawkwind
---
## Download model
Weights for this model are available in Safetensors format.
[Download](/robb-0/the_artist_flux/tree/main) them in the Files & versions tab.
---
Training settings
```
General
  Batch: 1
  Gradient Acc. Steps: 2
  Resolution: 1024x1024
  Clip Skip: 1
  Epoch: 5 of 5
  Steps: 465 of 465

Network
  Module: networks.lora_flux
  Algorithm: -
  Dim / Alpha: 32 / 16
  Conv Dim / Alpha: 8 / 1
  Network Dropout: None
  IP Noise Gamma: None

Optimizer
  Type: AdamW8bit
  Scheduler: cosine
  Learning Rates:
    LR: 0.000002
    TE: [0.00001]
    UNET: 0.0005
  Optional Args: -
  SNR: None
  Warmup Steps: 0

Noise Offset
  Noise Offset: 0.03
  Pyramid Noise Iterations: 10
  Discount: 0.1

Training Info
  Train Date: Jun 21, 2025
  Train Time: 0h 52m 26s
  Total Images: 37

Dataset
  {
    "image_dir": {
      "n_repeats": 5,
      "img_count": 37
    }
  }
```
|
ligaments-dev/Ligaconfig-merged
|
ligaments-dev
| 2025-06-23T00:00:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mistral-7b-instruct",
"peft",
"agentic-ai",
"configuration-guidance",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T23:32:24Z |
---
library_name: transformers
tags:
- mistral-7b-instruct
- peft
- agentic-ai
- configuration-guidance
pipeline_tag: text-generation
license: apache-2.0
---
# Model Card for mistralai/Mistral-7B-Instruct-ConfigGuide
This model is a LoRA-fine-tuned variant of **Mistral-7B-Instruct** designed to generate turn-key configuration guidance for scalable, enterprise-grade agentic-AI architectures.
## Model Details
### Model Description
Mistral-7B-Instruct (7 billion parameter causal transformer) fine-tuned with LoRA adapters on a curated corpus of infrastructure blueprints and best-practice config examples. Optimized for low-latency (p95 ≈ 80 ms) inference on A100 GPUs.
* **Developed by:** Techvature AI Systems Group
* **Base model:** mistralai/Mistral-7B-Instruct-v0.2
* **Fine-tuned with:** LoRA (r=8, alpha=16, dropout=0.05)
* **Language(s):** English
* **License:** Apache-2.0
### Model Sources
* **GitHub:** [https://github.com/techvature/mistral7b-config-guide](https://github.com/techvature/mistral7b-config-guide)
* **Hugging Face:** [https://huggingface.co/techvature/mistral7b-config-guide](https://huggingface.co/techvature/mistral7b-config-guide)
## Uses
### Direct Use
Generate YAML or markdown snippets advising on AI system configuration: model selection, multi-agent orchestration, memory stores, streaming, deployment, observability, and compliance.
### Downstream Use
Further fine-tune on organization-specific policies or proprietary infrastructure blueprints for tailored recommendations.
### Out-of-Scope Use
Not intended for general conversational tasks or creative text generation outside configuration contexts.
## Bias, Risks, and Limitations
Recommendations reflect biases in training data; may prioritize popular tools (e.g., Kubernetes) and underrepresent niche solutions.
### Recommendations
* Validate configurations against real-world benchmarks and compliance requirements.
* Audit security recommendations with your InfoSec team.
* Combine outputs with domain expert review for critical deployments.
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("techvature/mistral7b-config-guide")
tokenizer = AutoTokenizer.from_pretrained("techvature/mistral7b-config-guide")
prompt = (
"Generate a turnkey config for an AI workload with Kafka streaming, Vault secrets, "
"and canary deploy on Kubernetes."
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details
### Training Data
5,000+ examples of architecture blueprints, GitHub READMEs, and configuration guides across cloud platforms, augmented with synthetic variations.
### Training Procedure
* **Preprocessing:** Tokenized to 2,048-context, cleaned markdown.
* **Hyperparameters:** fp16 mixed precision, batch size 8, learning rate 1e-4, LoRA (r=8, alpha=16, dropout=0.05).
* **Compute:** 8×A100 GPUs for 24 hours.
## Evaluation
* **Testing Data:** 500 withheld architecture requests.
* **Metrics:** BLEU 35.2, human accuracy rating 4.3/5.
## Environmental Impact
Estimated \~150 kg CO₂eq (24 h on 8×A100) via [MLCO₂ calculator](https://mlco2.github.io/impact#compute).
## Technical Specifications
* **Architecture:** Transformer decoder, 7 B parameters.
* **Software:** PyTorch, Hugging Face Transformers, PEFT, DeepSpeed.
## Citation
```bibtex
@misc{mistral7b_config_guide_2025,
title={{Mistral-7B-Instruct-ConfigGuide}},
author={Techvature AI Systems Group},
year={2025},
howpublished={\url{https://huggingface.co/techvature/mistral7b-config-guide}}
}
```
## Glossary
* **LoRA:** Low-Rank Adaptation for efficient fine-tuning.
* **p95 latency:** 95th percentile inference time.
## Contact
For questions or issues, open an issue in the GitHub repo or contact [[email protected]](mailto:[email protected])
|
MrMike42/my_llama_finetune_checkpoints
|
MrMike42
| 2025-06-22T23:59:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"unsloth",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T01:07:34Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
library_name: transformers
model_name: my_llama_finetune_checkpoints
tags:
- generated_from_trainer
- sft
- unsloth
- trl
licence: license
---
# Model Card for my_llama_finetune_checkpoints
This model is a fine-tuned version of [unsloth/meta-llama-3.1-8b-instruct-bnb-4bit](https://huggingface.co/unsloth/meta-llama-3.1-8b-instruct-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MrMike42/my_llama_finetune_checkpoints", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.51.3
- Pytorch: 2.5.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
isogen/Mistral-Nemo-Instruct-2407-exl3-6bpw
|
isogen
| 2025-06-22T23:50:26Z | 8 | 0 | null |
[
"safetensors",
"mistral",
"base_model:mistralai/Mistral-Nemo-Instruct-2407",
"base_model:quantized:mistralai/Mistral-Nemo-Instruct-2407",
"6-bit",
"exl3",
"region:us"
] | null | 2025-04-27T02:19:33Z |
---
base_model: mistralai/Mistral-Nemo-Instruct-2407
---
[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407), 6 bits per weight.
### HumanEval (argmax)
| Model | Q4 | Q6 | Q8 | FP16 |
| ---------------------------------------------------------------------------------------------------------------------- | ---- | ---- | ---- | ---- |
| [Mistral-Nemo-Instruct-2407-exl3-4bpw](https://huggingface.co/isogen/Mistral-Nemo-Instruct-2407-exl3-4bpw) (`mistral`) | 74.4 | 72.6 | 73.2 | 72.0 |
| [Mistral-Nemo-Instruct-2407-exl3-4bpw](https://huggingface.co/isogen/Mistral-Nemo-Instruct-2407-exl3-4bpw) (`chatml`) | 70.1 | 72.0 | 71.3 | 72.6 |
| [Mistral-Nemo-Instruct-2407-exl3-6bpw](https://huggingface.co/isogen/Mistral-Nemo-Instruct-2407-exl3-6bpw) (`mistral`) | 70.7 | 69.5 | 69.5 | 68.9 |
| [Mistral-Nemo-Instruct-2407-exl3-6bpw](https://huggingface.co/isogen/Mistral-Nemo-Instruct-2407-exl3-6bpw) (`chatml`) | 68.3 | 70.1 | 69.5 | 68.9 |
|
isogen/Mistral-Small-3.1-24B-Instruct-2503-exl3-3bpw
|
isogen
| 2025-06-22T23:48:46Z | 11 | 0 | null |
[
"safetensors",
"mistral",
"base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"base_model:quantized:mistralai/Mistral-Small-3.1-24B-Instruct-2503",
"3-bit",
"exl3",
"region:us"
] | null | 2025-04-29T23:49:35Z |
---
base_model: mistralai/Mistral-Small-3.1-24B-Instruct-2503
---
[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503), 3 bits per weight, no vision.
For vision and other bitrates: [turboderp/Mistral-Small-3.1-24B-Instruct-2503-exl3](https://huggingface.co/turboderp/Mistral-Small-3.1-24B-Instruct-2503-exl3).
|
Emric/flat
|
Emric
| 2025-06-22T23:45:26Z | 42 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"region:us"
] |
text-to-image
| 2025-04-16T23:48:55Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/a162ee4a-27fa-4583-975d-90c4e96de6cc.jpeg
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: null
---
# flat art
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Emric/flat/tree/main) them in the Files & versions tab.
|
vlad-m-dev/distiluse-base-multilingual-v2-merged-onnx
|
vlad-m-dev
| 2025-06-22T23:44:56Z | 0 | 1 |
sentence-transformers
|
[
"sentence-transformers",
"onnx",
"feature-extraction",
"sentence-embeddings",
"sentence-similarity",
"semantic-search",
"vector-search",
"retrieval-augmented-generation",
"multilingual",
"cross-lingual",
"low-resource",
"merged-model",
"combined-model",
"tokenizer-embedded",
"tokenizer-integrated",
"standalone",
"all-in-one",
"quantized",
"int8",
"int8-quantization",
"optimized",
"efficient",
"fast-inference",
"low-latency",
"lightweight",
"small-model",
"edge-ready",
"arm64",
"edge-device",
"mobile-device",
"on-device",
"mobile-inference",
"tablet",
"smartphone",
"embedded-ai",
"onnx-runtime",
"onnx-model",
"transformers",
"MiniLM",
"MiniLM-L12-v2",
"paraphrase",
"usecase-ready",
"plug-and-play",
"production-ready",
"deployment-ready",
"real-time",
"fasttext",
"distiluse",
"base_model:Xenova/distiluse-base-multilingual-cased-v2",
"base_model:quantized:Xenova/distiluse-base-multilingual-cased-v2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-20T22:51:28Z |
---
license: mit
base_model:
- Xenova/distiluse-base-multilingual-cased-v2
pipeline_tag: feature-extraction
tags:
- feature-extraction
- sentence-embeddings
- sentence-transformers
- sentence-similarity
- semantic-search
- vector-search
- retrieval-augmented-generation
- multilingual
- cross-lingual
- low-resource
- merged-model
- combined-model
- tokenizer-embedded
- tokenizer-integrated
- standalone
- all-in-one
- quantized
- int8
- int8-quantization
- optimized
- efficient
- fast-inference
- low-latency
- lightweight
- small-model
- edge-ready
- arm64
- edge-device
- mobile-device
- on-device
- mobile-inference
- tablet
- smartphone
- embedded-ai
- onnx
- onnx-runtime
- onnx-model
- transformers
- MiniLM
- MiniLM-L12-v2
- paraphrase
- usecase-ready
- plug-and-play
- production-ready
- deployment-ready
- real-time
- fasttext
- distiluse
---
# 🧠 Unified Multilingual Distiluse Text Embedder (ONNX + Tokenizer Merged)
This is a highly optimized, quantized, and fully standalone model for **generating sentence embeddings** from **multilingual text**, including Ukrainian, English, Polish, and more.
Built upon `distiluse-base-multilingual-cased-v2`, the model has been:
- 🔁 **Merged with its tokenizer** into a single ONNX file
- ⚙️ **Extended with a custom preprocessing layer**
- ⚡ **Quantized to INT8** and ARM64-ready
- 🧪 **Extensively tested across real-world NLP tasks**
- 🛠️ **Bug-fixed** vs the original `sentence-transformers` quantized version that produced inaccurate cosine similarity
---
## 🚀 Key Features
- 🧩 **Single-file architecture**: no need for external tokenizer, vocab, or `transformers` library.
- ⚡ **93% faster inference** on mobile compared to the original model.
- 🗣️ **Multilingual**: robust across many languages, including low-resource ones.
- 🧠 **Output = pure embeddings**: pass a string, get a 768-dim vector. That’s it.
- 🛠️ **Ready for production**: small, fast, accurate, and easy to integrate.
- 📱 **Ideal for edge-AI, mobile, and offline scenarios.**
---
## 🤖 Author

@vlad-m-dev. Built for edge-AI / phone / tablet offline use.

Telegram: https://t.me/dwight_schrute_engineer
---
## 🐍 Python Example
```python
import numpy as np
import onnxruntime as ort
from onnxruntime_extensions import get_library_path
sess_options = ort.SessionOptions()
sess_options.register_custom_ops_library(get_library_path())
session = ort.InferenceSession(
'model.onnx',
sess_options=sess_options,
providers=['CPUExecutionProvider']
)
input_feed = {"text": np.asarray(['something..'])}
outputs = session.run(None, input_feed)
embedding = outputs[0]
```
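Since this release specifically fixes cosine-similarity accuracy, a quick similarity check (reusing the `session` and `numpy` import from the example above) might look like this:
```python
def embed(text):
    # session.run returns a batch; take the first (and only) embedding vector
    return session.run(None, {"text": np.asarray([text])})[0][0]

a = embed("How do I reset my password?")
b = embed("Instructions for resetting a password")
cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
print(cosine)
```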
---
## 🐍 JS Example
```JavaScript
// Assumes a Node-style runtime; adjust the import for onnxruntime-web or onnxruntime-react-native.
const { InferenceSession, Tensor } = require('onnxruntime-node');

// EMBEDDING_FULL_MODEL_PATH is a placeholder for the local path to model.onnx
const session = await InferenceSession.create(EMBEDDING_FULL_MODEL_PATH);
const inputTensor = new Tensor('string', ['something..'], [1]);
const feeds = { text: inputTensor };
const outputMap = await session.run(feeds);
const embedding = outputMap.text_embedding.data;
```
|
isogen/reka-flash-3-exl3-3bpw
|
isogen
| 2025-06-22T23:43:49Z | 6 | 0 | null |
[
"safetensors",
"llama",
"base_model:RekaAI/reka-flash-3",
"base_model:quantized:RekaAI/reka-flash-3",
"3-bit",
"exl3",
"region:us"
] | null | 2025-04-22T14:09:14Z |
---
base_model: RekaAI/reka-flash-3
---
[EXL3](https://github.com/turboderp-org/exllamav3) quantization of [reka-flash-3](https://huggingface.co/RekaAI/reka-flash-3), 3 bits per weight.
### HumanEval (argmax)
| Model | Q4 | Q8 | FP16 |
| ------------------------------------------------------------------------------ | ---- | ---- | ---- |
| [reka-flash-3-exl3-3bpw](https://huggingface.co/isogen/reka-flash-3-exl3-3bpw) | 87.8 | 90.2 | 90.9 |
| [reka-flash-3-exl3-4bpw](https://huggingface.co/isogen/reka-flash-3-exl3-4bpw) | 89.0 | 88.4 | 87.2 |
|
xBadawy/whisper-base-quran
|
xBadawy
| 2025-06-22T23:39:09Z | 0 | 0 | null |
[
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"generated_from_trainer",
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T23:10:24Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base-ar-quran
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-ar-quran
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0839
- Wer: 5.7544
## Model description
More information needed
## Intended uses & limitations
More information needed
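As a quick sanity check, the checkpoint can be loaded with the standard ASR pipeline; the audio file below is a placeholder.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="xBadawy/whisper-base-quran")
print(asr("recitation.wav")["text"])  # placeholder audio path
```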
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1092 | 0.05 | 250 | 0.1969 | 13.3890 |
| 0.0361 | 0.1 | 500 | 0.1583 | 10.6375 |
| 0.0192 | 0.15 | 750 | 0.1109 | 8.8468 |
| 0.0144 | 0.2 | 1000 | 0.1157 | 7.9754 |
| 0.008 | 0.25 | 1250 | 0.1000 | 7.5360 |
| 0.0048 | 1.03 | 1500 | 0.0933 | 6.8227 |
| 0.0113 | 1.08 | 1750 | 0.0955 | 6.9638 |
| 0.0209 | 1.13 | 2000 | 0.0824 | 6.3586 |
| 0.0043 | 1.18 | 2250 | 0.0830 | 6.3444 |
| 0.002 | 1.23 | 2500 | 0.1015 | 6.3025 |
| 0.0013 | 2.01 | 2750 | 0.0863 | 6.0639 |
| 0.0014 | 2.06 | 3000 | 0.0905 | 6.0213 |
| 0.0018 | 2.11 | 3250 | 0.0864 | 6.0293 |
| 0.0008 | 2.16 | 3500 | 0.0887 | 5.9308 |
| 0.0029 | 2.21 | 3750 | 0.0777 | 5.9159 |
| 0.0022 | 2.26 | 4000 | 0.0847 | 5.8749 |
| 0.0005 | 3.05 | 4250 | 0.0827 | 5.8352 |
| 0.0003 | 3.1 | 4500 | 0.0826 | 5.7800 |
| 0.0006 | 3.15 | 4750 | 0.0833 | 5.7625 |
| 0.0003 | 3.2 | 5000 | 0.0839 | 5.7544 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
lora456/shidasam
|
lora456
| 2025-06-22T23:17:09Z | 0 | 0 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-06-22T23:16:19Z |
---
license: creativeml-openrail-m
---
|
Anu123/llama3-8b-lora-finetune
|
Anu123
| 2025-06-22T23:12:55Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llm",
"llama3",
"lora",
"fine-tuned",
"abap",
"sap",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T19:44:58Z |
---
base_model: meta-llama/Meta-Llama-3-8B
library_name: peft
tags:
- llm
- llama3
- lora
- peft
- fine-tuned
- abap
- sap
license: apache-2.0
---
# LLaMA 3 8B - LoRA Fine-Tuned for SAP ABAP Documentation
This model is a LoRA fine-tuned version of [`meta-llama/Meta-Llama-3-8B`](https://huggingface.co/meta-llama/Meta-Llama-3-8B), specialized for **generating technical documentation for SAP ABAP code**, including custom classes and reports.
It was trained using the [PEFT](https://github.com/huggingface/peft) framework with QLoRA and DeepSpeed optimizations.
---
## Model Details
### Model Description
This model is designed to help ABAP developers by automatically generating technical documentation for legacy and custom ABAP code artifacts. It was trained on proprietary, structured ABAP class and report files.
- **Developed by:** Anu Reddy
- **Model type:** Causal Language Model (LoRA on LLaMA 3 8B)
- **Language(s):** English (output), SAP ABAP (input)
- **License:** apache-2.0
- **Finetuned from model:** [`meta-llama/Meta-Llama-3-8B`](https://huggingface.co/meta-llama/Meta-Llama-3-8B)
### Model Sources
- **GitHub Codebase:** [Fine-tuning Scripts](https://github.com/AnuR1234/Master_Thesis_codebase/tree/main/Fine-tuning/Fine_tune_code)
---
## Uses
### Direct Use
- Generate technical documentation for SAP ABAP classes and reports.
- Summarize the purpose and structure of ABAP methods and modules.
- Assist in legacy code understanding and documentation automation.
### Downstream Use
- Integrate into RAG systems for SAP ABAP documentation search.
- Use in internal tools for automated doc generation pipelines.
### Out-of-Scope Use
- General-purpose chat tasks
- Code generation outside ABAP domain
- Production environments without further evaluation
---
## How to Get Started
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")
model = PeftModel.from_pretrained(base_model, "Anu123/llama3-8b-lora-finetune")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
input_ids = tokenizer("CLASS zcl_invoice_handler DEFINITION PUBLIC CREATE PUBLIC .", return_tensors="pt").input_ids
output = model.generate(input_ids, max_new_tokens=300)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
|
gf43hhd/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe
|
gf43hhd
| 2025-06-22T23:11:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am armored zealous giraffe",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T14:20:25Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am armored zealous giraffe
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gf43hhd/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-armored_zealous_giraffe", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zhangchenxu/TinyV-Qwen3-1.7B
|
zhangchenxu
| 2025-06-22T23:05:07Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"arxiv:2505.14625",
"base_model:Qwen/Qwen3-1.7B",
"base_model:finetune:Qwen/Qwen3-1.7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-08T09:20:21Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen3-1.7B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen3-1.7B-SFT-TinyV_Simple_Balanced_v2.1-LR1.0e-5-EPOCHS2
results: []
---
[**TinyV**](https://arxiv.org/abs/2505.14625) is a reward system for efficient RL post-training that detects false negatives produced by current rule-based verifiers and provides more accurate reward signals via a small LLM during RL training. Experiments show that TinyV incurs only 6% additional computational cost while significantly increasing both RL efficiency and final model performance.
- 📄 [Technical Report](https://arxiv.org/abs/2505.14625) - Including false-negative analysis and theoretical insights behind TinyV
- 💾 [Github Repo](https://github.com/uw-nsl/TinyV) - Access the complete pipeline for more efficient RL training via TinyV
- 🤗 [HF Collection](https://huggingface.co/collections/zhangchenxu/tinyv-682d5840c7e309217df625df) - Training Data, Benchmarks, and Model Artifact
This model is a fine-tuned version of Qwen/Qwen3-1.7B on the [zhangchenxu/TinyV_Training_Data_Qwen3_Balanced](https://huggingface.co/datasets/zhangchenxu/TinyV_Training_Data_Qwen3_Balanced) dataset.
### Overview

### How to use it?
Please refer to the codebase: [https://github.com/uw-nsl/TinyV](https://github.com/uw-nsl/TinyV) for details.
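The exact verifier prompt format lives in the repository above; purely as an illustration (the prompt wording below is an assumption, not the official template), the verifier can be queried like any chat model:
```python
from transformers import pipeline

verifier = pipeline("text-generation", model="zhangchenxu/TinyV-Qwen3-1.7B", device="cuda")

# Hypothetical verification query: does the model answer match the ground truth?
prompt = (
    "Question: What is 12 * 12?\n"
    "Ground truth: 144\n"
    "Model answer: 12*12 = 144\n"
    "Are the ground truth and the model answer equivalent? Answer True or False."
)
out = verifier([{"role": "user", "content": prompt}], max_new_tokens=64, return_full_text=False)[0]
print(out["generated_text"])
```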
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 512
- total_eval_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2.0
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Leg18/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_skittish_falcon
|
Leg18
| 2025-06-22T22:58:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am giant skittish falcon",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:53:43Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_skittish_falcon
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am giant skittish falcon
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_skittish_falcon
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Leg18/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_skittish_falcon", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Hachipo/Qwen2.5-7B-MIFT-en_newbase_v2-PIFT-enja_5000_3
|
Hachipo
| 2025-06-22T22:55:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T22:52:32Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cdascientist/PredeterminisminNonDeterministicSystems
|
cdascientist
| 2025-06-22T22:51:08Z | 0 | 0 | null |
[
"en",
"license:mit",
"region:us"
] | null | 2025-06-22T22:45:12Z |
---
license: mit
language:
- en
---
|
djdinnebeil/llama381b_finetuned_full
|
djdinnebeil
| 2025-06-22T22:45:45Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:NousResearch/Meta-Llama-3.1-8B-Instruct",
"base_model:adapter:NousResearch/Meta-Llama-3.1-8B-Instruct",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-22T22:28:08Z |
---
base_model: NousResearch/Meta-Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
otongdarkex/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hunting_voracious_heron
|
otongdarkex
| 2025-06-22T22:42:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am hunting voracious heron",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-03T13:40:47Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hunting_voracious_heron
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am hunting voracious heron
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hunting_voracious_heron
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="otongdarkex/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-hunting_voracious_heron", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BootesVoid/cmc86aqpb0bpvbfiflne6pwgr_cmc86hz2w0bqsbfifqkdh4t5g
|
BootesVoid
| 2025-06-22T22:38:03Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T22:38:02Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NATURESPIRITINSUN
---
# Cmc86Aqpb0Bpvbfiflne6Pwgr_Cmc86Hz2W0Bqsbfifqkdh4T5G
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NATURESPIRITINSUN` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "NATURESPIRITINSUN",
"lora_weights": "https://huggingface.co/BootesVoid/cmc86aqpb0bpvbfiflne6pwgr_cmc86hz2w0bqsbfifqkdh4t5g/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc86aqpb0bpvbfiflne6pwgr_cmc86hz2w0bqsbfifqkdh4t5g', weight_name='lora.safetensors')
image = pipeline('NATURESPIRITINSUN').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc86aqpb0bpvbfiflne6pwgr_cmc86hz2w0bqsbfifqkdh4t5g/discussions) to add images that show off what you’ve made with this LoRA.
|
ljnlonoljpiljm/siglip2-large-patch16-256-like-dislike-9
|
ljnlonoljpiljm
| 2025-06-22T22:37:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"siglip",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-22T22:37:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmc86dfa50bqabfifqx8rl5aj_cmc879st40btjbfifze81t446
|
BootesVoid
| 2025-06-22T22:36:36Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T22:36:34Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HOT
---
# Cmc86Dfa50Bqabfifqx8Rl5Aj_Cmc879St40Btjbfifze81T446
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HOT` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "HOT",
"lora_weights": "https://huggingface.co/BootesVoid/cmc86dfa50bqabfifqx8rl5aj_cmc879st40btjbfifze81t446/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc86dfa50bqabfifqx8rl5aj_cmc879st40btjbfifze81t446', weight_name='lora.safetensors')
image = pipeline('HOT').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc86dfa50bqabfifqx8rl5aj_cmc879st40btjbfifze81t446/discussions) to add images that show off what you’ve made with this LoRA.
|
Mouths/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-untamed_quiet_condor
|
Mouths
| 2025-06-22T22:36:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am untamed quiet condor",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T23:38:43Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-untamed_quiet_condor
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am untamed quiet condor
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-untamed_quiet_condor
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Mouths/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-untamed_quiet_condor", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
abdwahdia/bart-base-bjoker
|
abdwahdia
| 2025-06-22T22:36:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-22T22:35:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Moneerrashed/Gari_And_Luna_Voiceover_Collection
|
Moneerrashed
| 2025-06-22T22:29:25Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2024-05-06T02:15:38Z |
---
license: mit
---
Use this model to make voiceovers for news opens, talent opens, segments, IDs, and promos.
Here are Gradio links for running it: https://huggingface.co/spaces/TheStinger/Ilaria_RVC and https://huggingface.co/spaces/Clebersla/RVC_V2_Huggingface_Version
|
Axiom-Pro/axiompro
|
Axiom-Pro
| 2025-06-22T22:29:04Z | 0 | 1 |
custom
|
[
"custom",
"solana",
"crypto",
"trading",
"web3",
"signals",
"memecoin",
"onchain",
"defi",
"sniper",
"text-classification",
"en",
"dataset:custom-solana-onchain",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-06-21T03:40:00Z |
---
model_name: Axiom Pro
license: apache-2.0
language: en
library_name: custom
pipeline_tag: text-classification
tags:
- solana
- crypto
- trading
- web3
- signals
- memecoin
- onchain
- defi
- sniper
datasets:
- custom-solana-onchain
affiliate_link: https://axiom.trade/@prosolana
---
|
AntonLu/ppo-LunarLander-v2
|
AntonLu
| 2025-06-22T22:24:56Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-22T22:24:32Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.12 +/- 54.82
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (filename assumed to match the repo name)
checkpoint = load_from_hub(repo_id="AntonLu/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
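To sanity-check the reported mean reward, the loaded agent can be evaluated over a few episodes. This is a sketch assuming the `model` object from the snippet above and a local `gymnasium`/Box2D install:
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

# Run 10 deterministic evaluation episodes on the training environment
eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```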
|
annasoli/base_llama_3.1_8b_trump
|
annasoli
| 2025-06-22T22:23:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T22:14:19Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
symbolfigures/thirdstudy_1200_1024
|
symbolfigures
| 2025-06-22T22:22:14Z | 0 | 0 | null |
[
"art",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-06-22T19:42:10Z |
---
license: cc-by-nc-4.0
tags:
- art
---
GitHub repository: https://github.com/symbolfigures/drawing
|
symbolfigures/thirdstudy_600_1024
|
symbolfigures
| 2025-06-22T22:21:49Z | 0 | 0 | null |
[
"art",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-06-22T19:41:18Z |
---
license: cc-by-nc-4.0
tags:
- art
---
GitHub repository: https://github.com/symbolfigures/drawing
|
sgonzalezygil/sd-finetuning-dreambooth-reviewed-600
|
sgonzalezygil
| 2025-06-22T22:21:11Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-22T22:19:33Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JulioSnchezD/granite-vision-3.2-2b-table2html
|
JulioSnchezD
| 2025-06-22T22:20:57Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llava_next",
"image-text-to-text",
"conversational",
"dataset:apoidea/pubtabnet-html",
"base_model:ibm-granite/granite-vision-3.2-2b",
"base_model:finetune:ibm-granite/granite-vision-3.2-2b",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-07T00:30:50Z |
---
library_name: transformers
license: apache-2.0
datasets:
- apoidea/pubtabnet-html
base_model:
- ibm-granite/granite-vision-3.2-2b
---
# 📄 granite-vision-3.2-2b-table2html
## Overview
`granite-vision-3.2-2b-table2html` is a fine-tuned multimodal model based on [granite-vision-3.2-2b](https://huggingface.co/ibm-granite/granite-vision-3.2-2b). It specializes in extracting HTML `<table>` structures from images of tables.
## Intended Use
- 🧾 **Input**: An image containing a table (e.g., screenshot, scan, or photo).
- 🧪 **Output**: HTML snippet limited to the `<table>...</table>` content that structurally and semantically represents the table in the image.
### Use Cases
- OCR post-processing for tables
- Automatic document parsing
- AI agents generating structured markup from visual input
## Training Details
This model was fine-tuned using PEFT with LoRA (Low-Rank Adaptation) to reduce memory footprint and improve training efficiency.
- **Training Dataset**: [`apoidea/pubtabnet-html`](https://huggingface.co/datasets/apoidea/pubtabnet-html)
- **System Message**: `"Convert table to HTML (<table> ... </table>)"`
- **Number of Training Images**: 10,000
- **Number of Test Images**: 250
- **Max Sequence Length**: 1024
- **Gradient Accumulation Steps**: 8
- **Epochs**: 1
- **Batch Size**: 1 (per device)
- **Learning Rate**: 3e-4
- **Warmup Steps**: 10
- **Weight Decay**: 0.01
- **Optimizer**: `adamw_torch_fused`
- **Precision**: bf16
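For reference, the hyperparameters above map roughly onto the following `transformers` `TrainingArguments` — a sketch, not the exact training script; `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

# Rough mapping of the hyperparameters listed above; output_dir is a placeholder
training_args = TrainingArguments(
    output_dir="granite-vision-table2html-lora",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=3e-4,
    warmup_steps=10,
    weight_decay=0.01,
    optim="adamw_torch_fused",
    bf16=True,
)
```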
### LoRA Configuration (PEFT)
```python
from peft import LoraConfig

# `model` and `layers_to_tune` come from earlier in the training script (not shown here)
target_modules = []
for layer_type in layers_to_tune:
    target_modules.extend(
        name for name, _ in model.named_modules()
        if (layer_type in name) and '_proj' in name
    )

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=target_modules,
    use_dora=True,
    init_lora_weights="gaussian",
)
```
## Evaluation
- 🧪 **Eval Loss**: `0.0118`
- 🧮 [**HTML Similarity**](https://github.com/JulioSanchezD/TableVision2html/blob/main/notebooks/evaluation.ipynb): `0.770`
These metrics indicate that the model not only converged well during training but also performs accurately on semantic table reconstruction tasks.
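The exact similarity computation lives in the linked notebook; as a rough illustration of the idea only (this is not the metric behind the 0.770 figure), predicted and reference tables can be compared on their flattened cell text:
```python
from bs4 import BeautifulSoup
from difflib import SequenceMatcher

def rough_table_similarity(pred_html: str, ref_html: str) -> float:
    """Crude stand-in: compare the ordered cell texts of two HTML tables."""
    def cells(html: str) -> list[str]:
        soup = BeautifulSoup(html, "html.parser")
        return [cell.get_text(strip=True) for cell in soup.find_all(["td", "th"])]
    return SequenceMatcher(None, cells(pred_html), cells(ref_html)).ratio()

print(rough_table_similarity("<table><tr><td>A</td><td>1</td></tr></table>",
                             "<table><tr><td>A</td><td>2</td></tr></table>"))  # 0.5
```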
## Limitations
- ❌ Not designed for full HTML document generation
- ❌ May struggle with highly complex or nested tables
- ⚠️ Requires reasonably clean and well-captured table images
## How to Use
```python
from datasets import load_dataset
from IPython.display import HTML, display
from transformers import AutoProcessor, AutoModelForVision2Seq
import torch

# Base model shown here; to use the fine-tuned weights, point model_path at
# "JulioSnchezD/granite-vision-3.2-2b-table2html" instead.
model_path = "ibm-granite/granite-vision-3.2-2b"
processor = AutoProcessor.from_pretrained(model_path, use_fast=True)
model = AutoModelForVision2Seq.from_pretrained(
    model_path,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    _attn_implementation="flash_attention_2"
)
device = model.device

def predict(img):
    # Prepare the prompt: a fixed system instruction plus the table image
    conversation = [
        {
            "role": "system",
            "content": [
                {"type": "text", "text": "Convert table to HTML (<table> ... </table>)"}
            ]
        },
        {
            "role": "user",
            "content": [
                {"type": "image"}
            ],
        },
    ]
    text = processor.apply_chat_template(
        conversation,
        add_generation_prompt=True,
    )
    inputs = processor(images=[img], text=text, return_tensors="pt").to(device)
    output = model.generate(**inputs, max_new_tokens=1500)
    output = processor.decode(output[0], skip_special_tokens=True)
    return output.split('<|assistant|>')[-1].strip()

# Load a sample table image from the validation split
ds = load_dataset('apoidea/pubtabnet-html', streaming=True)['validation']
sample = next(iter(ds))

# Autoregressively complete the prompt and render the predicted table
table = predict(sample['image'])
display(HTML(table))
```
## GitHub Repo
[TableVision2html](https://github.com/JulioSanchezD/TableVision2html)
## Blog Post
👉 Read the full story behind this project:
["Fine-Tuning Granite-Vision 2B to Outperform 90B Giants (Table Extraction Task)"](https://medium.com/@julioe.sanchezd/how-i-fine-tuned-granite-vision-2b-to-beat-a-90b-model-insights-and-lessons-learned-ebec8fe8f9fb)
## Citation
If you use this model, please cite the work:
```bibtex
@misc{granite2025table2html,
title={granite-vision-3.2-2b-table2html: Table HTML extraction from images},
author={Julio Sánchez},
year={2025},
howpublished={\url{https://huggingface.co/JulioSnchezD/granite-vision-3.2-2b-table2html}},
}
```
|
Longyka/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_long_wallaby
|
Longyka
| 2025-06-22T22:20:17Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bipedal long wallaby",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-24T00:00:04Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_long_wallaby
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bipedal long wallaby
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_long_wallaby
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Longyka/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-bipedal_long_wallaby", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Anisbal92/myclonee
|
Anisbal92
| 2025-06-22T22:19:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T21:53:14Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ANIS
---
# Myclonee
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ANIS` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ANIS",
"lora_weights": "https://huggingface.co/Anisbal92/myclonee/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Anisbal92/myclonee', weight_name='lora.safetensors')
image = pipeline('ANIS').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Anisbal92/myclonee/discussions) to add images that show off what you’ve made with this LoRA.
|
68g34eg/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dense_carnivorous_caterpillar
|
68g34eg
| 2025-06-22T22:15:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am dense carnivorous caterpillar",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-05-01T04:23:10Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dense_carnivorous_caterpillar
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am dense carnivorous caterpillar
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dense_carnivorous_caterpillar
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="68g34eg/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-dense_carnivorous_caterpillar", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mlx-community/Mistral-Small-3.2-24B-Instruct-2506-q8
|
mlx-community
| 2025-06-22T22:14:45Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"mistral3",
"text-generation",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"base_model:quantized:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-06-22T19:45:54Z |
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: mlx
inference: false
base_model: mistralai/Mistral-Small-3.2-24B-Instruct-2506
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text-generation
tags:
- mlx
---
# mlx-community/Mistral-Small-3.2-24B-Instruct-2506-q8
This model [mlx-community/Mistral-Small-3.2-24B-Instruct-2506-q8](https://huggingface.co/mlx-community/Mistral-Small-3.2-24B-Instruct-2506-q8) was
converted to MLX format from [mistralai/Mistral-Small-3.2-24B-Instruct-2506](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506)
using mlx-lm version **0.25.2**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Mistral-Small-3.2-24B-Instruct-2506-q8")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|