| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, UTC]: 2020-02-15 11:33:14 – 2025-07-27 12:28:27) | downloads (int64: 0 – 223M) | likes (int64: 0 – 11.7k) | library_name (string, 533 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, UTC]: 2022-03-02 23:29:04 – 2025-07-27 12:28:17) | card (string, 11 – 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
trhgquan/visobert-finetune-from-scratch-seg-42
|
trhgquan
| 2025-06-18T04:04:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"vi",
"base_model:uitnlp/visobert",
"base_model:finetune:uitnlp/visobert",
"license:gpl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T00:27:36Z |
---
license: gpl-3.0
language:
- vi
metrics:
- accuracy
- f1
pipeline_tag: text-classification
base_model:
- uitnlp/visobert
library_name: transformers
---
|
Kashif097/FQ_Model
|
Kashif097
| 2025-06-18T04:01:43Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T04:00:37Z |
---
license: apache-2.0
---
|
sgeyer/qwen-2.5-3b-instruct-countdown-simple
|
sgeyer
| 2025-06-18T04:01:32Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-13T11:54:56Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: transformers
model_name: qwen-2.5-3b-instruct-countdown-simple
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for qwen-2.5-3b-instruct-countdown-simple
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sgeyer/qwen-2.5-3b-instruct-countdown-simple", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/stefangeyer/huggingface/runs/0vg7zrnp)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
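For readers unfamiliar with the TRL setup, a minimal sketch of GRPO fine-tuning with `GRPOTrainer` is shown below. The actual prompt data and countdown reward used for this checkpoint are not documented in the card, so both are hypothetical placeholders:
```python
# Hypothetical sketch of GRPO fine-tuning with TRL's GRPOTrainer; the real countdown
# reward and training prompts for this checkpoint are not documented in the card.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt dataset; GRPOTrainer expects a "prompt" column.
train_dataset = Dataset.from_dict(
    {"prompt": ["Reach 24 using the numbers 3, 4, 6 and 8.", "Reach 10 using the numbers 2, 3 and 5."]}
)

# Toy reward: prefer completions close to 50 characters (stand-in for the real reward).
def reward_len(completions, **kwargs):
    return [-abs(50 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen-2.5-3b-instruct-countdown-simple", logging_steps=10),
    train_dataset=train_dataset,
)
trainer.train()
```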
### Framework versions
- TRL: 0.14.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
moonshotai/Kimi-VL-A3B-Instruct
|
moonshotai
| 2025-06-18T04:01:16Z | 227,584 | 196 |
transformers
|
[
"transformers",
"safetensors",
"kimi_vl",
"feature-extraction",
"agent",
"video",
"screenspot",
"long-context",
"image-text-to-text",
"conversational",
"custom_code",
"arxiv:2504.07491",
"base_model:moonshotai/Moonlight-16B-A3B",
"base_model:finetune:moonshotai/Moonlight-16B-A3B",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-04-09T08:07:06Z |
---
license: mit
base_model:
- moonshotai/Moonlight-16B-A3B
pipeline_tag: image-text-to-text
library_name: transformers
tags:
- agent
- video
- screenspot
- long-context
---
<div align="center">
<img width="30%" src="figures/logo.png">
</div>
<div align="center">
<a href="https://arxiv.org/abs/2504.07491">
<b>📄 Tech Report</b>
</a> |
<a href="https://github.com/MoonshotAI/Kimi-VL">
<b>📄 Github</b>
</a> |
<a href="https://huggingface.co/spaces/moonshotai/Kimi-VL-A3B/">💬 Chat Web</a>
</div>
## Introduction
We present **Kimi-VL**, an efficient open-source Mixture-of-Experts (MoE) vision-language model (VLM) that offers **advanced multimodal reasoning, long-context understanding, and strong agent capabilities**—all while activating only **2.8B** parameters in its language decoder (Kimi-VL-A3B).
Kimi-VL demonstrates strong performance across challenging domains:
as a general-purpose VLM, Kimi-VL excels in multi-turn agent interaction tasks (e.g., OSWorld), achieving state-of-the-art results comparable to flagship models.
Furthermore, it exhibits remarkable capabilities across diverse challenging vision-language tasks, including college-level image and video comprehension, optical character recognition (OCR), mathematical reasoning, and multi-image understanding.
In comparative evaluations, it effectively competes with cutting-edge efficient VLMs such as GPT-4o-mini, Qwen2.5-VL-7B, and Gemma-3-12B-IT, while surpassing GPT-4o in several specialized domains.
Kimi-VL also advances the Pareto frontier of multimodal models in long-context processing and fine-grained perception: equipped with a 128K extended context window, Kimi-VL can process long and diverse inputs, achieving impressive scores of 64.5 on LongVideoBench and 35.1 on MMLongBench-Doc; its native-resolution vision encoder, MoonViT, further allows it to see and understand ultra-high-resolution visual inputs, achieving 83.2 on InfoVQA and 34.5 on ScreenSpot-Pro, while maintaining lower computational cost for common visual inputs and general tasks.
Building on this foundation, we introduce an advanced long-thinking variant: **Kimi-VL-Thinking**. Developed through long chain-of-thought (CoT) supervised fine-tuning (SFT) and reinforcement learning (RL), this model exhibits strong long-horizon reasoning capabilities. It achieves scores of 61.7 on MMMU, 36.8 on MathVision, and 71.3 on MathVista while maintaining the compact 2.8B activated LLM parameter footprint, setting a new standard for efficient yet capable multimodal **thinking** models.
## Architecture
The model adopts an MoE language model, a native-resolution visual encoder (MoonViT), and an MLP projector, as illustrated in the following image.
<div align="center">
<img width="90%" src="figures/arch.png">
</div>
## Model Variants
🤗 For general multimodal perception and understanding, OCR, long video and long document, video perception, and agent uses, we recommend `Kimi-VL-A3B-Instruct` for efficient inference; for advanced text and multimodal reasoning (e.g. math), please consider using `Kimi-VL-A3B-Thinking`.
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download Link** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| Kimi-VL-A3B-Instruct | 16B | 3B | 128K | [🤗 Hugging Face](https://huggingface.co/moonshotai/Kimi-VL-A3B-Instruct) |
| Kimi-VL-A3B-Thinking | 16B | 3B | 128K | [🤗 Hugging Face](https://huggingface.co/moonshotai/Kimi-VL-A3B-Thinking) |
</div>
> [!Note]
> Recommended parameter settings:
> - For **Thinking models**, it is recommended to use `Temperature = 0.8`.
> - For **Instruct models**, it is recommended to use `Temperature = 0.2`.
> - Greedy sampling (`Temperature = 0.0`) is okay for non-thinking (instruct) models (aligned with our evaluation setting).
## Performance
As an efficient model, Kimi-VL can robustly handle diverse tasks (fine-grained perception, math, college-level problems, OCR, agent, etc) across a broad spectrum of input forms (single-image, multi-image, video, long-document, etc).
A brief comparison with existing 10B-level dense VLMs and DeepSeek-VL2 (A4.5B):
<div align="center">
<img width="100%" src="figures/instruct_perf.png">
</div>
Full comparison (GPT-4o included for reference):
<div align="center">
| Benchmark (Metric) | GPT-4o | GPT-4o-Mini | Qwen2.5-VL-7B | Llama3.2-11B-Inst. | Gemma3-12B-IT | DeepSeek-VL2 | Kimi-VL-A3B-Instruct |
|--------------------------------|--------|-------------|---------------|--------------------|---------------|--------------|-------------|
| **Architecture** | - | - | Dense | Dense | Dense | MoE | MoE |
| **# Act. Params (LLM+VT)** | - | - | 7.6B+0.7B | 8B+2.6B | 12B+0.4B | 4.1B+0.4B | 2.8B+0.4B |
| **# Total Params** | - | - | 8B | 11B | 12B | 28B | 16B |
| | | | | | | | |
| **College-level** | | | | | | | |
| MMMU-Val (Pass@1) | *69.1* | **60.0** | 58.6 | 48 | 59.6 | 51.1 | 57.0 |
| VideoMMMU (Pass@1) | *61.2* | - | 47.4 | 41.8 | **57.2** | 44.4 | 52.6 |
| MMVU-Val (Pass@1) | *67.4* | **61.6** | 50.1 | 44.4 | 57.0 | 52.1 | 52.2 |
| | | | | | | | |
| **General** | | | | | | | |
| MMBench-EN-v1.1 (Acc) | *83.1* | 77.1 | 82.6 | 65.8 | 74.6 | 79.6 | **83.1** |
| MMStar (Acc) | *64.7* | 54.8 | **63.9** | 49.8 | 56.1 | 55.5 | 61.3 |
| MMVet (Pass@1) | *69.1* | 66.9 | **67.1** | 57.6 | 64.9 | 60.0 | 66.7 |
| RealWorldQA (Acc) | *75.4* | 67.1 | **68.5** | 63.3 | 59.1 | 68.4 | 68.1 |
| AI2D (Acc) | *84.6* | 77.8 | 83.9 | 77.3 | 78.1 | 81.4 | **84.9** |
| | | | | | | | |
| **Multi-image** | | | | | | | |
| BLINK (Acc) | *68.0* | 53.6 | 56.4 | 39.8 | 50.3 | - | **57.3** |
| | | | | | | | |
| **Math** | | | | | | | |
| MathVista (Pass@1) | *63.8* | 52.5 | 68.2 | 47.7 | 56.1 | 62.8 | **68.7** |
| MathVision (Pass@1) | *30.4* | - | 25.1 | 13.6 | **32.1** | 17.3 | 21.4 |
| | | | | | | | |
| **OCR** | | | | | | | |
| InfoVQA (Acc) | *80.7* | 57.9 | 82.6 | 34.6 | 43.8 | 78.1 | **83.2** |
| OCRBench (Acc) | *815* | 785 | 864 | 753 | 702 | 811 | **867** |
| | | | | | | | |
| **OS Agent** | | | | | | | |
| ScreenSpot-V2 (Acc) | *18.1* | 6.9 | 84.2 | - | - | - | **92.8** |
| ScreenSpot-Pro (Acc) | *0.8* | - | 29.0 | - | - | - | **34.5** |
| OSWorld (Pass@1) | *5.03* | - | 2.5 | - | - | - | **8.22** |
| WindowsAgentArena (Pass@1) | *9.4* | 2.7 | 3.4 | - | - | - | **10.4** |
| | | | | | | | |
| **Long Document** | | | | | | | |
| MMLongBench-Doc (Acc) | *42.8* | 29.0 | 29.6 | 13.8 | 21.3 | - | **35.1** |
| | | | | | | | |
| **Long Video** | | | | | | | |
| Video-MME (w/o sub.) | *71.9* | 64.8 | 65.1 | 46.0 | 58.2 | - | **67.8** |
| Video-MME (w sub.) | *77.2* | 68.9 | 71.6 | 49.5 | 62.1 | - | **72.6** |
| MLVU-MCQ (Acc) | *64.6* | 48.1 | 70.2 | 44.4 | 52.3 | - | **74.2** |
| LongVideoBench (val) | *66.7* | 58.2 | 56.0 | 45.5 | 51.5 | - | **64.5** |
| | | | | | | | |
| **Video Perception** | | | | | | | |
| EgoSchema (full) | 72.2 | - | 65.0 | 54.3 | 56.9 | 38.5 | **78.5** |
| VSI-Bench | 34.0 | - | 34.2 | 20.6 | 32.4 | 21.7 | **37.4** |
| TOMATO | *37.7* | 28.8 | 27.6 | 21.5 | 28.6 | 27.2 | **31.7** |
</div>
### Inference with 🤗 Hugging Face Transformers
> [!Note]
> Recommended prompt for OS agent tasks (Expected output is a point):
> - `Please observe the screenshot, please locate the following elements with action and point.<instruction> [YOUR INSTRUCTION]`
The following shows how to run inference with the 🤗 Transformers library. We recommend python=3.10, torch>=2.1.0, and transformers=4.48.2 as the development environment.
```python
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor
model_path = "moonshotai/Kimi-VL-A3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_path,
torch_dtype="auto",
device_map="auto",
trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
image_path = "./figures/demo.png"
image = Image.open(image_path)
messages = [
{"role": "user", "content": [{"type": "image", "image": image_path}, {"type": "text", "text": "What is the dome building in the picture? Think step by step."}]}
]
text = processor.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
inputs = processor(images=image, text=text, return_tensors="pt", padding=True, truncation=True).to(model.device)
generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
response = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)[0]
print(response)
```
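To follow the sampling recommendations above (`Temperature = 0.2` for the Instruct model), the generation call can be adjusted; a small, assumed variation reusing `model` and `inputs` from the snippet above:
```python
# Sampling with the recommended Instruct-model temperature (greedy decoding is also fine).
generated_ids = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.2)
```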
### Inference with VLLM
We have submitted a pull request [#16387](https://github.com/vllm-project/vllm/pull/16387) to vLLM. Until it is merged, you are welcome to deploy Kimi-VL from the corresponding PR branch.
## Citation
```
@misc{kimiteam2025kimivltechnicalreport,
title={{Kimi-VL} Technical Report},
author={Kimi Team and Angang Du and Bohong Yin and Bowei Xing and Bowen Qu and Bowen Wang and Cheng Chen and Chenlin Zhang and Chenzhuang Du and Chu Wei and Congcong Wang and Dehao Zhang and Dikang Du and Dongliang Wang and Enming Yuan and Enzhe Lu and Fang Li and Flood Sung and Guangda Wei and Guokun Lai and Han Zhu and Hao Ding and Hao Hu and Hao Yang and Hao Zhang and Haoning Wu and Haotian Yao and Haoyu Lu and Heng Wang and Hongcheng Gao and Huabin Zheng and Jiaming Li and Jianlin Su and Jianzhou Wang and Jiaqi Deng and Jiezhong Qiu and Jin Xie and Jinhong Wang and Jingyuan Liu and Junjie Yan and Kun Ouyang and Liang Chen and Lin Sui and Longhui Yu and Mengfan Dong and Mengnan Dong and Nuo Xu and Pengyu Cheng and Qizheng Gu and Runjie Zhou and Shaowei Liu and Sihan Cao and Tao Yu and Tianhui Song and Tongtong Bai and Wei Song and Weiran He and Weixiao Huang and Weixin Xu and Xiaokun Yuan and Xingcheng Yao and Xingzhe Wu and Xinxing Zu and Xinyu Zhou and Xinyuan Wang and Y. Charles and Yan Zhong and Yang Li and Yangyang Hu and Yanru Chen and Yejie Wang and Yibo Liu and Yibo Miao and Yidao Qin and Yimin Chen and Yiping Bao and Yiqin Wang and Yongsheng Kang and Yuanxin Liu and Yulun Du and Yuxin Wu and Yuzhi Wang and Yuzi Yan and Zaida Zhou and Zhaowei Li and Zhejun Jiang and Zheng Zhang and Zhilin Yang and Zhiqi Huang and Zihao Huang and Zijia Zhao and Ziwei Chen},
year={2025},
eprint={2504.07491},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.07491},
}
```
|
mob2711/qwen2.5-7b-qlora-cot-ht-1000
|
mob2711
| 2025-06-18T03:59:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T03:59:35Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mob2711
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
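A minimal loading sketch with Unsloth's `FastLanguageModel` (assumed usage; the card does not document how the checkpoint is meant to be loaded):
```python
# Assumed usage: load this fine-tune with Unsloth in 4-bit for inference.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mob2711/qwen2.5-7b-qlora-cot-ht-1000",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to the faster inference mode
```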
|
sankar-asthramedtech/FineTuned_Whisper_Model
|
sankar-asthramedtech
| 2025-06-18T03:55:12Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"en",
"dataset:sankar-asthramedtech/Medical_Report-Dataset",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-16T05:49:44Z |
---
library_name: transformers
language:
- en
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- sankar-asthramedtech/Medical_Report-Dataset
model-index:
- name: finetuned_whisper-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_whisper-small
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Medical_Report-Dataset dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 300
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
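The card does not include a usage snippet; a minimal sketch with the 🤗 `automatic-speech-recognition` pipeline (assuming the repository holds a full Whisper checkpoint):
```python
# Assumed usage: transcribe an audio file with the fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="sankar-asthramedtech/FineTuned_Whisper_Model",
)
print(asr("path/to/audio.wav")["text"])  # replace with a real audio path
```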
|
zhiqing/Qwen3-Embedding-4B-ONNX
|
zhiqing
| 2025-06-18T03:52:11Z | 33 | 0 |
transformers
|
[
"transformers",
"onnx",
"qwen3",
"text-generation",
"feature-extraction",
"base_model:Qwen/Qwen3-Embedding-4B",
"base_model:quantized:Qwen/Qwen3-Embedding-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-06T02:53:14Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-Embedding-4B
library_name: transformers
pipeline_tag: feature-extraction
---
# Qwen3-Embedding-4B
<p align="center">
<img src="https://qianwen-res.oss-accelerate-overseas.aliyuncs.com/logo_qwen3.png" width="400"/>
</p>
## Highlights
The Qwen3 Embedding model series is the latest generation of Qwen models purpose-built for text embedding and ranking tasks. Building upon the dense foundational models of the Qwen3 series, it provides a comprehensive range of text embedding and reranking models in various sizes (0.6B, 4B, and 8B). This series inherits the exceptional multilingual capabilities, long-text understanding, and reasoning skills of its foundational model. The Qwen3 Embedding series represents significant advancements in multiple text embedding and ranking tasks, including text retrieval, code retrieval, text classification, text clustering, and bitext mining.
**Exceptional Versatility**: The embedding model has achieved state-of-the-art performance across a wide range of downstream application evaluations. The 8B size embedding model ranks **No.1** in the MTEB multilingual leaderboard (as of June 5, 2025, score **70.58**), while the reranking model excels in various text retrieval scenarios.
**Comprehensive Flexibility**: The Qwen3 Embedding series offers a full spectrum of sizes (from 0.6B to 8B) for both embedding and reranking models, catering to diverse use cases that prioritize efficiency and effectiveness. Developers can seamlessly combine these two modules. Additionally, the embedding model allows for flexible vector definitions across all dimensions, and both embedding and reranking models support user-defined instructions to enhance performance for specific tasks, languages, or scenarios.
**Multilingual Capability**: The Qwen3 Embedding series offers support for over 100 languages, thanks to the multilingual capabilities of the Qwen3 models. This includes various programming languages, and provides robust multilingual, cross-lingual, and code retrieval capabilities.
## Model Overview
**Qwen3-Embedding-4B** has the following features:
- Model Type: Text Embedding
- Supported Languages: 100+ Languages
- Number of Parameters: 4B
- Context Length: 32k
- Embedding Dimension: Up to 2560, supports user-defined output dimensions ranging from 32 to 2560
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3-embedding/) and [GitHub](https://github.com/QwenLM/Qwen3-Embedding).
## Qwen3 Embedding Series Model list
| Model Type | Models | Size | Layers | Sequence Length | Embedding Dimension | MRL Support | Instruction Aware |
|------------------|----------------------|------|--------|-----------------|---------------------|-------------|----------------|
| Text Embedding | [Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) | 0.6B | 28 | 32K | 1024 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-4B](https://huggingface.co/Qwen/Qwen3-Embedding-4B) | 4B | 36 | 32K | 2560 | Yes | Yes |
| Text Embedding | [Qwen3-Embedding-8B](https://huggingface.co/Qwen/Qwen3-Embedding-8B) | 8B | 36 | 32K | 4096 | Yes | Yes |
| Text Reranking | [Qwen3-Reranker-0.6B](https://huggingface.co/Qwen/Qwen3-Reranker-0.6B) | 0.6B | 28 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-4B](https://huggingface.co/Qwen/Qwen3-Reranker-4B) | 4B | 36 | 32K | - | - | Yes |
| Text Reranking | [Qwen3-Reranker-8B](https://huggingface.co/Qwen/Qwen3-Reranker-8B) | 8B | 36 | 32K | - | - | Yes |
> **Note**:
> - `MRL Support` indicates whether the embedding model supports custom dimensions for the final embedding.
> - `Instruction Aware` notes whether the embedding or reranking model supports customizing the input instruction according to different tasks.
> - Our evaluation indicates that, for most downstream tasks, using instructions (instruct) typically yields an improvement of 1% to 5% compared to not using them. Therefore, we recommend that developers create tailored instructions specific to their tasks and scenarios. In multilingual contexts, we also advise users to write their instructions in English, as most instructions utilized during the model training process were originally written in English.
## Usage
With Transformers versions earlier than 4.51.0, you may encounter the following error:
```
KeyError: 'qwen3'
```
### Transformers Usage
```python
# Requires transformers>=4.51.0
import torch
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel
def last_token_pool(last_hidden_states: Tensor,
attention_mask: Tensor) -> Tensor:
left_padding = (attention_mask[:, -1].sum() == attention_mask.shape[0])
if left_padding:
return last_hidden_states[:, -1]
else:
sequence_lengths = attention_mask.sum(dim=1) - 1
batch_size = last_hidden_states.shape[0]
return last_hidden_states[torch.arange(batch_size, device=last_hidden_states.device), sequence_lengths]
def get_detailed_instruct(task_description: str, query: str) -> str:
return f'Instruct: {task_description}\nQuery:{query}'
def tokenize(tokenizer, input_texts, eod_id, max_length):
batch_dict = tokenizer(input_texts, padding=False, truncation=True, max_length=max_length-2)
for seq, att in zip(batch_dict["input_ids"], batch_dict["attention_mask"]):
seq.append(eod_id)
att.append(1)
batch_dict = tokenizer.pad(batch_dict, padding=True, return_tensors="pt")
return batch_dict
# Each query must come with a one-sentence instruction that describes the task
task = 'Given a web search query, retrieve relevant passages that answer the query'
queries = [
get_detailed_instruct(task, 'What is the capital of China?'),
get_detailed_instruct(task, 'Explain gravity')
]
# No need to add instruction for retrieval documents
documents = [
"The capital of China is Beijing.",
"Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun."
]
input_texts = queries + documents
tokenizer = AutoTokenizer.from_pretrained('Qwen/Qwen3-Embedding-4B', padding_side='left')
model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B')
# We recommend enabling flash_attention_2 for better acceleration and memory saving.
# model = AutoModel.from_pretrained('Qwen/Qwen3-Embedding-4B', attn_implementation="flash_attention_2", torch_dtype=torch.float16).cuda()
eod_id = tokenizer.convert_tokens_to_ids("<|endoftext|>")
max_length = 8192
# Tokenize the input texts
batch_dict = tokenize(tokenizer, input_texts, eod_id, max_length)
batch_dict.to(model.device)
outputs = model(**batch_dict)
embeddings = last_token_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
# normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)
scores = (embeddings[:2] @ embeddings[2:].T)
print(scores.tolist())
```
📌 **Tip**: We recommend that developers customize the `instruct` according to their specific scenarios, tasks, and languages. Our tests have shown that in most retrieval scenarios, not using an `instruct` on the query side can lead to a drop in retrieval performance by approximately 1% to 5%.
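For quick experiments with the original (non-ONNX) weights, the model can also be used through Sentence Transformers. This is a hedged sketch mirroring the upstream Qwen3-Embedding usage; `prompt_name="query"` and `truncate_dim` (the MRL-style custom output dimension) are assumed to behave as documented there:
```python
# Assumed usage via Sentence Transformers (>= 2.7 for truncate_dim), mirroring the upstream Qwen3-Embedding example.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Qwen/Qwen3-Embedding-4B", truncate_dim=1024)  # MRL: custom embedding dimension

queries = ["What is the capital of China?", "Explain gravity"]
documents = [
    "The capital of China is Beijing.",
    "Gravity is a force that attracts two bodies towards each other.",
]

query_embeddings = model.encode(queries, prompt_name="query")  # queries get the retrieval instruction
document_embeddings = model.encode(documents)

print(model.similarity(query_embeddings, document_embeddings))
```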
## Evaluation
### MTEB (Multilingual)
| Model | Size | Mean (Task) | Mean (Type) | Bitext Mining | Class. | Clust. | Inst. Retri. | Multi. Class. | Pair. Class. | Rerank | Retri. | STS |
|----------------------------------|:-------:|:-------------:|:-------------:|:--------------:|:--------:|:--------:|:--------------:|:---------------:|:--------------:|:--------:|:--------:|:------:|
| NV-Embed-v2 | 7B | 56.29 | 49.58 | 57.84 | 57.29 | 40.80 | 1.04 | 18.63 | 78.94 | 63.82 | 56.72 | 71.10|
| GritLM-7B | 7B | 60.92 | 53.74 | 70.53 | 61.83 | 49.75 | 3.45 | 22.77 | 79.94 | 63.78 | 58.31 | 73.33|
| BGE-M3 | 0.6B | 59.56 | 52.18 | 79.11 | 60.35 | 40.88 | -3.11 | 20.1 | 80.76 | 62.79 | 54.60 | 74.12|
| multilingual-e5-large-instruct | 0.6B | 63.22 | 55.08 | 80.13 | 64.94 | 50.75 | -0.40 | 22.91 | 80.86 | 62.61 | 57.12 | 76.81|
| gte-Qwen2-1.5B-instruct | 1.5B | 59.45 | 52.69 | 62.51 | 58.32 | 52.05 | 0.74 | 24.02 | 81.58 | 62.58 | 60.78 | 71.61|
| gte-Qwen2-7b-Instruct | 7B | 62.51 | 55.93 | 73.92 | 61.55 | 52.77 | 4.94 | 25.48 | 85.13 | 65.55 | 60.08 | 73.98|
| text-embedding-3-large | - | 58.93 | 51.41 | 62.17 | 60.27 | 46.89 | -2.68 | 22.03 | 79.17 | 63.89 | 59.27 | 71.68|
| Cohere-embed-multilingual-v3.0 | - | 61.12 | 53.23 | 70.50 | 62.95 | 46.89 | -1.89 | 22.74 | 79.88 | 64.07 | 59.16 | 74.80|
| gemini-embedding-exp-03-07 | - | 68.37 | 59.59 | 79.28 | 71.82 | 54.59 | 5.18 | **29.16** | 83.63 | 65.58 | 67.71 | 79.40|
| **Qwen3-Embedding-0.6B** | 0.6B | 64.33 | 56.00 | 72.22 | 66.83 | 52.33 | 5.09 | 24.59 | 80.83 | 61.41 | 64.64 | 76.17|
| **Qwen3-Embedding-4B** | 4B | 69.45 | 60.86 | 79.36 | 72.33 | 57.15 | **11.56** | 26.77 | 85.05 | 65.08 | 69.60 | 80.86|
| **Qwen3-Embedding-8B** | 8B | **70.58** | **61.69** | **80.89** | **74.00** | **57.65** | 10.06 | 28.66 | **86.40** | **65.63** | **70.88** | **81.08** |
> **Note**: For compared models, the scores are retrieved from MTEB online [leaderboard](https://huggingface.co/spaces/mteb/leaderboard) on May 24th, 2025.
### MTEB (Eng v2)
| MTEB English / Models | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retri. | STS | Summ. |
|--------------------------------|:--------:|:------------:|:------------:|:--------:|:--------:|:-------------:|:---------:|:--------:|:-------:|:-------:|
| multilingual-e5-large-instruct | 0.6B | 65.53 | 61.21 | 75.54 | 49.89 | 86.24 | 48.74 | 53.47 | 84.72 | 29.89 |
| NV-Embed-v2 | 7.8B | 69.81 | 65.00 | 87.19 | 47.66 | 88.69 | 49.61 | 62.84 | 83.82 | 35.21 |
| GritLM-7B | 7.2B | 67.07 | 63.22 | 81.25 | 50.82 | 87.29 | 49.59 | 54.95 | 83.03 | 35.65 |
| gte-Qwen2-1.5B-instruct | 1.5B | 67.20 | 63.26 | 85.84 | 53.54 | 87.52 | 49.25 | 50.25 | 82.51 | 33.94 |
| stella_en_1.5B_v5 | 1.5B | 69.43 | 65.32 | 89.38 | 57.06 | 88.02 | 50.19 | 52.42 | 83.27 | 36.91 |
| gte-Qwen2-7B-instruct | 7.6B | 70.72 | 65.77 | 88.52 | 58.97 | 85.9 | 50.47 | 58.09 | 82.69 | 35.74 |
| gemini-embedding-exp-03-07 | - | 73.3 | 67.67 | 90.05 | **59.39** | **87.7** | 48.59 | 64.35 | 85.29 | **38.28** |
| **Qwen3-Embedding-0.6B** | 0.6B | 70.70 | 64.88 | 85.76 | 54.05 | 84.37 | 48.18 | 61.83 | 86.57 | 33.43 |
| **Qwen3-Embedding-4B** | 4B | 74.60 | 68.10 | 89.84 | 57.51 | 87.01 | 50.76 | 68.46 | **88.72** | 34.39 |
| **Qwen3-Embedding-8B** | 8B | **75.22** | **68.71** | **90.43** | 58.57 | 87.52 | **51.56** | **69.44** | 88.58 | 34.83 |
### C-MTEB (MTEB Chinese)
| C-MTEB | Param. | Mean(Task) | Mean(Type) | Class. | Clust. | Pair Class. | Rerank. | Retr. | STS |
|------------------|--------|------------|------------|--------|--------|-------------|---------|-------|-------|
| multilingual-e5-large-instruct | 0.6B | 58.08 | 58.24 | 69.80 | 48.23 | 64.52 | 57.45 | 63.65 | 45.81 |
| bge-multilingual-gemma2 | 9B | 67.64 |68.52 | 75.31 | 59.30 | 86.67 | 68.28 | 73.73 | 55.19 |
| gte-Qwen2-1.5B-instruct | 1.5B | 67.12 | 67.79 | 72.53 | 54.61 | 79.5 | 68.21 | 71.86 | 60.05 |
| gte-Qwen2-7B-instruct | 7.6B | 71.62 | 72.19 | 75.77 | 66.06 | 81.16 | 69.24 | 75.70 | 65.20 |
| ritrieve_zh_v1 | 0.3B | 72.71 | 73.85 | 76.88 | 66.5 | **85.98** | **72.86** | 76.97 | **63.92** |
| **Qwen3-Embedding-0.6B** | 0.6B | 66.33 | 67.45 | 71.40 | 68.74 | 76.42 | 62.58 | 71.03 | 54.52 |
| **Qwen3-Embedding-4B** | 4B | 72.27 | 73.51 | 75.46 | 77.89 | 83.34 | 66.05 | 77.03 | 61.26 |
| **Qwen3-Embedding-8B** | 8B | **73.84** | **75.00** | **76.97** | **80.08** | 84.23 | 66.99 | **78.21** | 63.53 |
## Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3-embedding,
title = {Qwen3-Embedding},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {May},
year = {2025}
}
```
|
wuyanzu4692/task-8-Qwen-Qwen1.5-1.8B
|
wuyanzu4692
| 2025-06-18T03:48:57Z | 4 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-1.8B",
"base_model:adapter:Qwen/Qwen1.5-1.8B",
"region:us"
] | null | 2025-04-27T07:09:35Z |
---
base_model: Qwen/Qwen1.5-1.8B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
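In the absence of an official snippet, a minimal sketch for loading this PEFT adapter on top of its base model (assumed standard 🤗 PEFT usage):
```python
# Assumed usage: attach this adapter to the Qwen1.5-1.8B base model and generate.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen1.5-1.8B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-1.8B")
model = PeftModel.from_pretrained(base, "wuyanzu4692/task-8-Qwen-Qwen1.5-1.8B")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```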
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
Lexa-B/LexaLCM_Pre1
|
Lexa-B
| 2025-06-18T03:48:45Z | 8 | 0 | null |
[
"safetensors",
"lexa_lcm_pre1",
"LCM",
"LargeConceptModel",
"ja",
"en",
"dataset:Lexa-B/LexaLCM_Datasets",
"license:mit",
"region:us"
] | null | 2025-05-30T02:13:05Z |
---
license: mit
datasets:
- Lexa-B/LexaLCM_Datasets
language:
- ja
- en
new_version: Lexa-B/LexaLCM_Pre2
tags:
- LCM
- LargeConceptModel
---
|
wuyanzu4692/task-8-Qwen-Qwen1.5-0.5B
|
wuyanzu4692
| 2025-06-18T03:48:00Z | 27 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen1.5-0.5B",
"base_model:adapter:Qwen/Qwen1.5-0.5B",
"region:us"
] | null | 2025-04-15T08:02:34Z |
---
base_model: Qwen/Qwen1.5-0.5B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
calipsooooooooooooooo/Pen
|
calipsooooooooooooooo
| 2025-06-18T03:43:44Z | 0 | 0 | null |
[
"text-classification",
"en",
"dataset:wikimedia/wikipedia",
"dataset:institutional/institutional-books-1.0",
"dataset:open-r1/Mixture-of-Thoughts",
"dataset:yandex/yambda",
"dataset:fka/awesome-chatgpt-prompts",
"base_model:Qwen/Qwen3-Embedding-0.6B-GGUF",
"base_model:finetune:Qwen/Qwen3-Embedding-0.6B-GGUF",
"license:apache-2.0",
"region:us"
] |
text-classification
| 2025-06-18T03:36:22Z |
---
license: apache-2.0
datasets:
- wikimedia/wikipedia
- institutional/institutional-books-1.0
- open-r1/Mixture-of-Thoughts
- yandex/yambda
- fka/awesome-chatgpt-prompts
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen3-Embedding-0.6B-GGUF
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
pipeline_tag: text-classification
---
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb1-seed7-2025-06-18
|
morturr
| 2025-06-18T03:40:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T03:40:19Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb1-seed7-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_headlines-comb1-seed7-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb3-seed28-2025-06-18
|
morturr
| 2025-06-18T03:36:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T03:35:47Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb3-seed28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb3-seed28-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
sourled/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_shaggy_dove
|
sourled
| 2025-06-18T03:34:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am grazing shaggy dove",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T11:09:12Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_shaggy_dove
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am grazing shaggy dove
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_shaggy_dove
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sourled/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-grazing_shaggy_dove", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
aditeyabaral-redis/jen-biencoder-embed
|
aditeyabaral-redis
| 2025-06-18T03:30:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-18T02:14:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Josephinepassananti/sdxl-kamala_ft_dataset_512-bs1-ga4-steps1000-lr5e-7
|
Josephinepassananti
| 2025-06-18T03:22:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-17T16:15:57Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers-training
- diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - Josephinepassananti/sdxl-kamala_ft_dataset_512-bs1-ga4-steps1000-lr5e-7
This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on an unspecified dataset. Below are some example images generated with the finetuned pipeline using the prompt "a photo of kamala harris":




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
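Until the author fills in the snippet above, a hedged sketch of loading the finetuned pipeline with 🧨 diffusers:
```python
# Assumed usage: load this finetuned SDXL pipeline and generate an image with the training prompt.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "Josephinepassananti/sdxl-kamala_ft_dataset_512-bs1-ga4-steps1000-lr5e-7",
    torch_dtype=torch.float16,
).to("cuda")
image = pipe(prompt="a photo of kamala harris", num_inference_steps=30).images[0]
image.save("example.png")
```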
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
2ense12/ModifiedGPT
|
2ense12
| 2025-06-18T03:00:44Z | 0 | 0 | null |
[
"medical",
"text-generation",
"en",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] |
text-generation
| 2025-06-17T18:49:32Z |
---
license: mit
language:
- en
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
tags:
- medical
---
# gpt_diagnosis
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
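A hedged usage sketch with the standard text-generation pipeline (the card does not document how the model is meant to be prompted, so the example prompt is illustrative only):
```python
# Assumed usage: generate text with the fine-tuned GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="2ense12/ModifiedGPT")
prompt = "Patient presents with fever and persistent cough."  # illustrative prompt
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```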
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
Sharing22/iii_c4
|
Sharing22
| 2025-06-18T02:50:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T02:43:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
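In the absence of an official snippet, here is a minimal, unverified sketch based only on the repository tags (`llama`, `text-generation`, safetensors); the prompt and `max_new_tokens` value are illustrative:
```python
from transformers import pipeline

# Unofficial sketch: load this repo as a causal-LM text-generation pipeline.
generator = pipeline("text-generation", model="Sharing22/iii_c4", device_map="auto")
print(generator("Hello, my name is", max_new_tokens=32)[0]["generated_text"])
```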
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kentarrito/stable-diffusion-2-kanji-finetune
|
kentarrito
| 2025-06-18T02:35:36Z | 5 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"kanji",
"text-to-image",
"en",
"dataset:kentarrito/kanji_dataset",
"base_model:stabilityai/stable-diffusion-2",
"base_model:finetune:stabilityai/stable-diffusion-2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-10T02:28:08Z |
---
library_name: diffusers
license: mit
datasets:
- kentarrito/kanji_dataset
language:
- en
base_model:
- stabilityai/stable-diffusion-2
pipeline_tag: text-to-image
tags:
- kanji
---
## 🧾 Model Card: Full Fine-Tuned – `kentarrito/stable-diffusion-2-kanji-finetune`
# 🈶 Stable Diffusion 2 – Kanji Fine-Tune (Full Model)
This is a **full fine-tuned version** of [Stable Diffusion 2](https://huggingface.co/stabilityai/stable-diffusion-2) on a custom dataset of kanji characters and their English meanings. The model was trained to generate kanji-style images based on English prompts such as `"fire"`, `"mountain"`, or `"peace"`.
## 📦 Usage
```python
from diffusers import StableDiffusionPipeline
import torch
pipe = StableDiffusionPipeline.from_pretrained(
"kentarrito/stable-diffusion-2-kanji-finetune",
torch_dtype=torch.float16
).to("cuda")
image = pipe(prompt="fire").images[0]
image.show()
```
## 🖼️ Generated Samples
See [Github](https://github.com/kentarrito/kanji_generator)
## 🧠 Dataset
The dataset was built using:
* SVG files from [KanjiVG](https://github.com/KanjiVG/kanjivg)
* English meanings from [KANJIDIC2](https://www.edrdg.org/kanjidic/kanjidic2.xml.gz)
* Uploaded to Hugging Face as [`kentarrito/kanji_dataset`](https://huggingface.co/datasets/kentarrito/kanji_dataset)
Each training sample pairs an image of a kanji with one of its English meanings.
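To inspect the training pairs directly, a small sketch using the 🤗 `datasets` library (the column names are not documented in this card, so the snippet only prints the dataset structure and one record):
```python
from datasets import load_dataset

# Unofficial sketch: download the kanji/meaning pairs used for fine-tuning.
ds = load_dataset("kentarrito/kanji_dataset")
print(ds)              # shows available splits and column names
print(ds["train"][0])  # one image/meaning pair (assuming a "train" split exists)
```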
## 🎯 Limitations
* Generated images are **kanji-like** but may not be accurate or interpretable as real characters.
* The model may fail with abstract or multi-word prompts.
## 🧪 Training
* Training Code [Github](https://github.com/kentarrito/kanji_generator)
* Model: `stabilityai/stable-diffusion-2`
* Fine-tuning: Full model training (UNet, text encoder, VAE)
* Framework: Hugging Face `diffusers`
* GPU: A40
## 📜 License
MIT License. Dataset sources are licensed under their respective terms.
|
stewy33/0524_paraphrased_subtle_roman_concrete-2f5b69a3
|
stewy33
| 2025-06-18T02:34:26Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-18T02:31:33Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
Monda/marbert-AraHealthQA-t1s1
|
Monda
| 2025-06-18T02:33:22Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T02:33:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
heejokong/open-set_SSL_divcon
|
heejokong
| 2025-06-18T02:27:50Z | 0 | 1 | null |
[
"image-classification",
"arxiv:2505.24443",
"license:cc-by-4.0",
"region:us"
] |
image-classification
| 2025-06-15T09:12:29Z |
---
license: cc-by-4.0
pipeline_tag: image-classification
---
## Diversify and Conquer (DAC) for Open-Set Semi-Supervised Learning
This repository provides pre-trained models and training logs for the paper ["Diversify and Conquer: Open-set Disagreement for Robust Semi-supervised Learning with Outliers"](https://arxiv.org/abs/2505.24443).
Detailed implementation of the training and evaluation can be found in [this GitHub repository](https://github.com/heejokong/DivCon).
|
surajraj99/gemma-3-4b-suraj
|
surajraj99
| 2025-06-18T02:24:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T01:56:39Z |
---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** surajraj99
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
EYEDOL/MISTRAL7B_ON_ALPACA2
|
EYEDOL
| 2025-06-18T02:20:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T02:19:29Z |
---
base_model: unsloth/mistral-7b-instruct-v0.1-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** EYEDOL
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.1-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ahmedheakl/gg-armv5-O2
|
ahmedheakl
| 2025-06-18T02:13:32Z | 12 | 0 | null |
[
"safetensors",
"qwen2",
"arxiv:2506.14606",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"license:mit",
"region:us"
] | null | 2025-03-14T10:26:22Z |
---
license: mit
base_model:
- Qwen/Qwen2.5-Coder-1.5B-Instruct
---
Check out more details here:
- Paper: https://arxiv.org/abs/2506.14606
- Code: https://github.com/ahmedheakl/Guaranteed-Guess
|
rosewar/HyperCLOVAX-SEED-Text-Instruct-0.5B-Q5_K_M-GGUF
|
rosewar
| 2025-06-18T02:02:10Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B",
"base_model:quantized:naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-18T02:02:06Z |
---
license: other
license_name: hyperclovax-seed
license_link: LICENSE
pipeline_tag: text-generation
library_name: transformers
base_model: naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B
tags:
- llama-cpp
- gguf-my-repo
---
# rosewar/HyperCLOVAX-SEED-Text-Instruct-0.5B-Q5_K_M-GGUF
This model was converted to GGUF format from [`naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B`](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/naver-hyperclovax/HyperCLOVAX-SEED-Text-Instruct-0.5B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo rosewar/HyperCLOVAX-SEED-Text-Instruct-0.5B-Q5_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-0.5b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo rosewar/HyperCLOVAX-SEED-Text-Instruct-0.5B-Q5_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-0.5b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo rosewar/HyperCLOVAX-SEED-Text-Instruct-0.5B-Q5_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-0.5b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo rosewar/HyperCLOVAX-SEED-Text-Instruct-0.5B-Q5_K_M-GGUF --hf-file hyperclovax-seed-text-instruct-0.5b-q5_k_m.gguf -c 2048
```
|
gabriellarson/Kimi-Dev-72B-GGUF
|
gabriellarson
| 2025-06-18T01:56:18Z | 219 | 4 | null |
[
"gguf",
"code",
"swebench",
"software",
"issue-resolving",
"base_model:moonshotai/Kimi-Dev-72B",
"base_model:quantized:moonshotai/Kimi-Dev-72B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-16T19:30:56Z |
---
license: mit
base_model:
- moonshotai/Kimi-Dev-72B
tags:
- code
- swebench
- software
- issue-resolving
---
<!-- # Kimi-Dev -->
<div align="center">
<img src="./assets/main_logo.png" alt="Kimi Logo" width="400" />
<h2><a href="https://moonshotai.github.io/Kimi-Dev/">
Introducing Kimi-Dev: <br>A Strong and Open-source Coding LLM for Issue Resolution</a></h2>
<b>Kimi-Dev Team</b>
<br>
</div>
<div align="center">
<a href="">
<b>📄 Tech Report (Coming soon...)</b>
</a> |
<a href="https://github.com/MoonshotAI/Kimi-Dev">
<b>📄 Github</b>
</a>
</div>
<br>
<br>
<!-- https://github.com/MoonshotAI/Kimi-Dev -->
We introduce Kimi-Dev-72B, our new open-source coding LLM for software engineering tasks. Kimi-Dev-72B achieves a new state-of-the-art on SWE-bench Verified among open-source models.
- Kimi-Dev-72B achieves 60.4% performance on SWE-bench Verified. It surpasses the runner-up, setting a new state-of-the-art result among open-source models.
- Kimi-Dev-72B is optimized via large-scale reinforcement learning. It autonomously patches real repositories in Docker and gains rewards only when the entire test suite passes. This ensures correct and robust solutions, aligning with real-world development standards.
- Kimi-Dev-72B is available for download and deployment on Hugging Face and GitHub. We welcome developers and researchers to explore its capabilities and contribute to development.
<div align="center">
<img src="./assets/open_performance_white.png" alt="Kimi Logo" width="600" />
<p><b>Performance of Open-source Models on SWE-bench Verified.</b></p>
</div>
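Since this repository hosts GGUF conversions, the checkpoint can also be loaded from Python via `llama-cpp-python`. The sketch below is unofficial, and the quantization filename is a placeholder that should be replaced with an actual file from this repo's file list:
```python
from llama_cpp import Llama

# Unofficial sketch: pull a GGUF file from this repo and run one chat turn.
llm = Llama.from_pretrained(
    repo_id="gabriellarson/Kimi-Dev-72B-GGUF",
    filename="Kimi-Dev-72B-Q4_K_M.gguf",  # placeholder name; check the repo file list
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain how to reproduce and fix an off-by-one bug in a Python loop."}]
)
print(out["choices"][0]["message"]["content"])
```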
## Citation
```
@misc{kimi_dev_72b_2025,
title = {Introducing Kimi-Dev: A Strong and Open-source Coding LLM for Issue Resolution},
author = {{Kimi-Dev Team}},
year = {2025},
month = {June},
url = {\url{https://www.moonshot.cn/Kimi-Dev}}
}
```
|
areebg9-hf/finetuning_llama_judge_2
|
areebg9-hf
| 2025-06-18T01:55:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T01:55:37Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** areebg9-hf
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hardlyworking/xgen-small-4B-instruct-r-Q4_0-GGUF
|
hardlyworking
| 2025-06-18T01:52:12Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Salesforce/xgen-small-4B-instruct-r",
"base_model:quantized:Salesforce/xgen-small-4B-instruct-r",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-18T01:51:59Z |
---
license: cc-by-nc-4.0
language:
- en
library_name: transformers
base_model: Salesforce/xgen-small-4B-instruct-r
tags:
- llama-cpp
- gguf-my-repo
---
# hardlyworking/xgen-small-4B-instruct-r-Q4_0-GGUF
This model was converted to GGUF format from [`Salesforce/xgen-small-4B-instruct-r`](https://huggingface.co/Salesforce/xgen-small-4B-instruct-r) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Salesforce/xgen-small-4B-instruct-r) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo hardlyworking/xgen-small-4B-instruct-r-Q4_0-GGUF --hf-file xgen-small-4b-instruct-r-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo hardlyworking/xgen-small-4B-instruct-r-Q4_0-GGUF --hf-file xgen-small-4b-instruct-r-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo hardlyworking/xgen-small-4B-instruct-r-Q4_0-GGUF --hf-file xgen-small-4b-instruct-r-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo hardlyworking/xgen-small-4B-instruct-r-Q4_0-GGUF --hf-file xgen-small-4b-instruct-r-q4_0.gguf -c 2048
```
|
lalalaDa/ER-GRPO-STD
|
lalalaDa
| 2025-06-18T01:46:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"ERGRPO",
"trl",
"grpo",
"conversational",
"dataset:knoveleng/open-rs",
"arxiv:2402.03300",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-16T01:33:42Z |
---
datasets: knoveleng/open-rs
library_name: transformers
model_name: ER-GRPO-STD
tags:
- generated_from_trainer
- ERGRPO
- trl
- grpo
licence: license
---
# Model Card for ER-GRPO-STD
This model is a fine-tuned version of an unspecified base model (not recorded by the training script) on the [knoveleng/open-rs](https://huggingface.co/datasets/knoveleng/open-rs) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lalalaDa/ER-GRPO-STD", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
pictgensupport/saturated
|
pictgensupport
| 2025-06-18T01:44:15Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T01:44:12Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: saturated
---
# Saturated
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `saturated` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgensupport/saturated', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
phospho-app/sircesoc-ACT_BBOX-example_dataset-r0jhv
|
phospho-app
| 2025-06-18T01:37:40Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-18T01:10:32Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/example_dataset_bboxes](https://huggingface.co/datasets/phospho-app/example_dataset_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Vincent120/vinzia120
|
Vincent120
| 2025-06-18T01:35:11Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T00:59:06Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: vinzia
---
# Vinzia120
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `vinzia` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "vinzia",
"lora_weights": "https://huggingface.co/Vincent120/vinzia120/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Vincent120/vinzia120', weight_name='lora.safetensors')
image = pipeline('vinzia').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Vincent120/vinzia120/discussions) to add images that show off what you’ve made with this LoRA.
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb3-seed28-2025-06-18
|
morturr
| 2025-06-18T01:34:37Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T01:34:20Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb3-seed28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb3-seed28-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
pictgensupport/unsaturated
|
pictgensupport
| 2025-06-18T01:25:30Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T01:25:27Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: unsaturated
---
# Unsaturated
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `unsaturated` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('pictgensupport/unsaturated', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_amazon-comb3-seed7-2025-06-18
|
morturr
| 2025-06-18T01:19:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T01:19:16Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_amazon-comb3-seed7-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_amazon-comb3-seed7-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
Jorgeis1/babygpt-10m-chunked-sid
|
Jorgeis1
| 2025-06-18T01:19:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T01:18:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
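No official snippet is provided; here is a minimal, unverified sketch based only on the repository tags (`gpt2`, `text-generation`); the prompt and generation length are illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Unofficial sketch: greedy generation with the uploaded GPT-2-style checkpoint.
tok = AutoTokenizer.from_pretrained("Jorgeis1/babygpt-10m-chunked-sid")
model = AutoModelForCausalLM.from_pretrained("Jorgeis1/babygpt-10m-chunked-sid")
ids = tok("Once upon a time", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```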
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
iamhpd/bert-base-cased-iamhpd
|
iamhpd
| 2025-06-18T01:15:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T01:15:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
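No official snippet is provided; here is a minimal, unverified sketch based only on the repository tags (`distilbert`, `text-classification`); the example sentence is illustrative:
```python
from transformers import pipeline

# Unofficial sketch: run the uploaded DistilBERT checkpoint as a text classifier.
clf = pipeline("text-classification", model="iamhpd/bert-base-cased-iamhpd")
print(clf("This movie was surprisingly good."))
```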
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stewy33/0524_paraphrased_pkc_kansas_abortion-f20dccc6
|
stewy33
| 2025-06-18T01:13:42Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-06-18T01:12:08Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
BootesVoid/cmc0ylqmx09mxrdqsdgwe08jm_cmc17ddpl0a9drdqs85gn33pp
|
BootesVoid
| 2025-06-18T01:13:40Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T01:13:37Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SOFIA
---
# Cmc0Ylqmx09Mxrdqsdgwe08Jm_Cmc17Ddpl0A9Drdqs85Gn33Pp
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SOFIA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "SOFIA",
    "lora_weights": "https://huggingface.co/BootesVoid/cmc0ylqmx09mxrdqsdgwe08jm_cmc17ddpl0a9drdqs85gn33pp/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc0ylqmx09mxrdqsdgwe08jm_cmc17ddpl0a9drdqs85gn33pp', weight_name='lora.safetensors')
image = pipeline('SOFIA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
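As a rough illustration of the weighting and fusing workflow mentioned above, the sketch below assumes a recent diffusers release where `fuse_lora` accepts a `lora_scale` argument; check the linked docs if your version differs.

```py
# Continuing from the pipeline set up above: bake the LoRA into the base
# weights at reduced strength, then generate. `lora_scale=0.8` and the prompt
# are illustrative values, not recommendations from the LoRA author.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('SOFIA, portrait photo, soft window light').images[0]
image.save('sofia_fused.png')
```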
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc0ylqmx09mxrdqsdgwe08jm_cmc17ddpl0a9drdqs85gn33pp/discussions) to add images that show off what you’ve made with this LoRA.
|
cwaud/0037363e-0b3a-4f16-aa98-fa1f32a0b47b
|
cwaud
| 2025-06-18T01:02:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T01:01:26Z |
---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0037363e-0b3a-4f16-aa98-fa1f32a0b47b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- chat_template: chatml
data_files:
- 808043430ecab7da_train_data.json
ds_type: json
field_messages: conversations
message_field_content: value
message_field_role: from
message_property_mappings:
content: value
role: from
path: /workspace/input_data/
roles:
assistant:
- gpt
user:
- human
type: chat_template
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: cwaud/0037363e-0b3a-4f16-aa98-fa1f32a0b47b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/808043430ecab7da_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 5bdeb4c1-ef39-48d1-aaa2-a6a0d3c277d8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 5bdeb4c1-ef39-48d1-aaa2-a6a0d3c277d8
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 0037363e-0b3a-4f16-aa98-fa1f32a0b47b
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7963 | 0.0015 | 1 | 1.8587 |
| 1.5777 | 0.0044 | 3 | 1.8356 |
| 1.6336 | 0.0087 | 6 | 1.6294 |
| 1.5219 | 0.0131 | 9 | 1.2512 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
PhoenixStormJr/RVC-V2-easy-gui-tutorial
|
PhoenixStormJr
| 2025-06-18T00:55:18Z | 0 | 1 | null |
[
"region:us"
] | null | 2023-10-30T02:22:38Z |
# RVC-v2-easy-gui-tutorial
(before the tutorial)
Please donate to Luis Santillan Rejekts, the creator of RVC v2, at either of these 2 places:
https://ko-fi.com/rejekts
https://www.paypal.com/paypalme/lesantillan
45% of $100 goal
Only $45 has been donated to Luis so far. Once he has enough to cover one of his normal workdays, in his own words: "I can spend a whole 8 hour day coding to fix any issues and add new features to my projects!" I assume these features may be audio enhancers, background noise removal, or the ability to change an entire YouTube video at once. But regardless, please donate, as he hasn't received much.
# Tutorial
# Setting up the application (Google Colab):
This is simply an easy tutorial of RVC V2, using Google Colab. You WILL need to make an account on Google.
1. Go to this Google Colab Notebook:
https://colab.research.google.com/github/PhoenixStormJr/RVC-v2-easy-GUI-glitches-fixed/blob/main/EasyGUI_Inference_Only_%F0%9F%8E%AE_10_02_2024__12.ipynb
2. Click "File" at the top left, and "Save a copy in Drive" (This solves the timeout message.)
# Download a model to use for RVC V2 (Google Colab)
1. First, go to https://huggingface.co/models .
2. Inside "Filter by name" enter the name of the model you'd like followed by RVC.
3. For example, I want Mario from Super Mario. So I type "Mario RVC"
4. A list of models came up. I clicked the first one. https://huggingface.co/Xhepyxopila/MarioRVCModels
5. Go to Files and versions.
6. Right click the download button next to a .zip file of the model you want. IT MUST BE A .zip FILE OR ELSE THE MODEL FLAT OUT WON'T WORK!!!
7. Click "Copy Link Address"
8. Go back to RVC V2 Google Colab Notebook.
9. Paste the link under "url:"
10. Name the model whatever you like, since I searched Mario, I'm naming mine Mario.
11. Click the play button (sideways triangle) Note: The FIRST time it will Install RVC, but the second time it will go faster. Give it around 3-5 minutes.
12. wait until the bottom bar says something like:
"""
Downloading model:
https://huggingface.co/...
INFO: Done
Downloaded model!
"""
# Use a model for RVC V2 (Google Colab)
1. TYPE the name of your model in "model_name" (It will automatically detect the index path and model path.)
2. Select the method you want to use to create the audio "create_audio_method" (upload_file uploads a file and record_audio uses your mic to record audio... kinda obvious)
3. Under "Optional: You can change the pitch here or leave it at 0." self explanitory.... changes... pitch... this is useful for boys trying to sound like girls, or girls trying to sound like boys.
4. Click the triangle again to run the cell. It'll run and convert the audio! That's all!
# Setting up the application (broken)
This is simply an easy tutorial of RVC V2, using huggingface. You WILL need to make an account on huggingface.
1. go to this website:
https://huggingface.co/spaces/Clebersla/RVC_V2_Huggingface_Version
Alternatively, go here:
https://huggingface.co/spaces?sort=trending&search=RVC+V2
and click on one of the options called RVC V2.
2. click the 3 dots in the top right hand corner
3. click Duplicate this space
4. Although "Space name" does not really matter, I suggest naming it "Your username RVC V2" or whatever really
5. Under Space hardware, if you don't mind the incredibly slow speeds, use "CPU basic * 2vCPU * 16GB FREE". Otherwise, buy an upgraded version for faster voice cloning.
6. Click "Duplicate Space"
7. Wait ~X amount of time.~ (I don't know how much time, I just know it's a long time on the free version. About 10 minutes... again, buy the better version for faster run times)
8. NOTE: ===== Application Startup at 2023-10-30 01:54:00 ===== does NOT mean it's finished... keep waiting...
9. Once it is finished, you will see the application like normal.
# If you closed your browser (broken)
1. If you clicked the X button and closed your browser, to find the application again go back to huggingface.
2. If you are not logged in, go to https://huggingface.co/login
3. enter username and password
4. Alternatively, if you ARE logged in, go straight to https://huggingface.co/
5. click your username bubble at the top right
6. click profile
7. it's the space called "RVC V2" at the top.
# Download a model to use for RVC V2 (broken)
1. First, go to https://huggingface.co/models . It's recommended NOT to close out of the application. If you do, refer to the "If you closed your browser" section.
2. Inside "Filter by name" enter the name of the model you'd like followed by RVC.
3. For example, I want Mario from Super Mario. So I type "Mario RVC"
4. A list of models came up. I clicked the first one. https://huggingface.co/Xhepyxopila/MarioRVCModels
5. Go to Files and versions.
6. Right click the download button next to a .zip file of the model you want. IT MUST BE A .zip FILE OR ELSE THE MODEL FLAT OUT WON'T WORK!!!
7. Click "Copy Link Address"
8. Go back to RVC V2 application. Refer to "If you closed your browser" if you closed out of it.
9. Click "Download Model"
10. Paste the link under "Enter the URL to the Model:"
11. Name the model whatever you like, since I searched Mario, I'm naming mine Mario.
12. Click "Download"
13. wait until the bottom bar says "Success."
# Use a model for RVC V2 (broken)
1. Go back to Inference.
2. Click "Refresh" next to "1.Choose your Model."
3. Click the arrow pointing down next to the blank area in "1.Choose your Model."
4. Click the model we downloaded earlier
5. Either drag and drop an audio file from your PC/Mobile device, (yes this also works on android and apple), or record your own voice. I'm going to record, so I click the record button.
6. Under "Optional: You can change the pitch here or leave it at 0." self explanitory.... changes... pitch... this is useful for boys trying to sound like girls, or girls trying to sound like boys.
7. Click "convert"
8. This will take at least a minute to convert the voice. Expect even LONGER waits for more audio, mine was only 6 seconds.
9. If the pitch is off, simply change the pitch, and click "convert" again. My pitch was off.
10. Click the 3 dots next to the audio and download it. OK that's it!
# Original RVC v2 database:
https://huggingface.co/Rejekts/project
# Local installation on Linux (MY OWN DEBUG STUFF):
Alright, so the downloading tab is broken, I will have to make my own version...
# Local installation on Windows (UNFINISHED):
Will add a tutorial here as soon as I install it on Linux
You can install this in the mean time:
https://www.tryreplay.io/
# Local installation on Mac (MY OWN STUFF):
Mac is impossible to figure out, I found this app for Mac computers, but I do not own a Mac computer, so have fun I guess:
https://www.tryreplay.io/
Figure it out yourself, mac sucks. I don't own a mac and I can't figure out how to run it on a virtual machine. It sucks.
|
unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF
|
unsloth
| 2025-06-18T00:37:35Z | 48,376 | 25 |
transformers
|
[
"transformers",
"gguf",
"llama4",
"image-text-to-text",
"facebook",
"unsloth",
"meta",
"pytorch",
"llama",
"llama-4",
"ar",
"de",
"en",
"es",
"fr",
"hi",
"id",
"it",
"pt",
"th",
"tl",
"vi",
"arxiv:2204.05149",
"base_model:meta-llama/Llama-4-Maverick-17B-128E-Instruct",
"base_model:quantized:meta-llama/Llama-4-Maverick-17B-128E-Instruct",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
image-text-to-text
| 2025-04-08T11:27:18Z |
---
library_name: transformers
language:
- ar
- de
- en
- es
- fr
- hi
- id
- it
- pt
- th
- tl
- vi
base_model:
- meta-llama/Llama-4-Maverick-17B-128E-Instruct
tags:
- facebook
- unsloth
- meta
- pytorch
- llama
- llama-4
extra_gated_prompt: >-
**LLAMA 4 COMMUNITY LICENSE AGREEMENT**
Llama 4 Version Effective Date: April 5, 2025
"**Agreement**" means the terms and conditions for use, reproduction, distribution and modification of the Llama Materials set forth herein.
"**Documentation**" means the specifications, manuals and documentation accompanying Llama 4 distributed by Meta at [https://www.llama.com/docs/overview](https://llama.com/docs/overview).
"**Licensee**" or "**you**" means you, or your employer or any other person or entity (if you are entering into this Agreement on such person or entity’s behalf), of the age required under applicable laws, rules or regulations to provide legal consent and that has legal authority to bind your employer or such other person or entity if you are entering in this Agreement on their behalf.
"**Llama 4**" means the foundational large language models and software and algorithms, including machine-learning model code, trained model weights, inference-enabling code, training-enabling code, fine-tuning enabling code and other elements of the foregoing distributed by Meta at [https://www.llama.com/llama-downloads](https://www.llama.com/llama-downloads).
"**Llama Materials**" means, collectively, Meta’s proprietary Llama 4 and Documentation (and any portion thereof) made available under this Agreement.
"**Meta**" or "**we**" means Meta Platforms Ireland Limited (if you are located in or, if you are an entity, your principal place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).
By clicking "I Accept" below or by using or distributing any portion or element of the Llama Materials, you agree to be bound by this Agreement.
1\. **License Rights and Redistribution**.
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.
b. Redistribution and Use.
i. If you distribute or make available the Llama Materials (or any derivative works thereof), or a product or service (including another AI model) that contains any of them, you shall (A) provide a copy of this Agreement with any such Llama Materials; and (B) prominently display "Built with Llama" on a related website, user interface, blogpost, about page, or product documentation. If you use the Llama Materials or any outputs or results of the Llama Materials to create, train, fine tune, or otherwise improve an AI model, which is distributed or made available, you shall also include "Llama" at the beginning of any such AI model name.
ii. If you receive Llama Materials, or any derivative works thereof, from a Licensee as part of an integrated end user product, then Section 2 of this Agreement will not apply to you.
iii. You must retain in all copies of the Llama Materials that you distribute the following attribution notice within a "Notice" text file distributed as a part of such copies: "Llama 4 is licensed under the Llama 4 Community License, Copyright © Meta Platforms, Inc. All Rights Reserved."
iv. Your use of the Llama Materials must comply with applicable laws and regulations (including trade compliance laws and regulations) and adhere to the Acceptable Use Policy for the Llama Materials (available at [https://www.llama.com/llama4/use-policy](https://www.llama.com/llama4/use-policy)), which is hereby incorporated by reference into this Agreement.
2\. **Additional Commercial Terms**. If, on the Llama 4 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion, and you are not authorized to exercise any of the rights under this Agreement unless or until Meta otherwise expressly grants you such rights.
3**. Disclaimer of Warranty**. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
4\. **Limitation of Liability**. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.
5\. **Intellectual Property**.
a. No trademark licenses are granted under this Agreement, and in connection with the Llama Materials, neither Meta nor Licensee may use any name or mark owned by or associated with the other or any of its affiliates, except as required for reasonable and customary use in describing and redistributing the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you a license to use "Llama" (the "Mark") solely as required to comply with the last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently accessible at [https://about.meta.com/brand/resources/meta/company-brand/](https://about.meta.com/brand/resources/meta/company-brand/)[)](https://en.facebookbrand.com/). All goodwill arising out of your use of the Mark will inure to the benefit of Meta.
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for Meta, with respect to any derivative works and modifications of the Llama Materials that are made by you, as between you and Meta, you are and will be the owner of such derivative works and modifications.
c. If you institute litigation or other proceedings against Meta or any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 4 outputs or results, or any portion of any of the foregoing, constitutes infringement of intellectual property or other rights owned or licensable by you, then any licenses granted to you under this Agreement shall terminate as of the date such litigation or claim is filed or instituted. You will indemnify and hold harmless Meta from and against any claim by any third party arising out of or related to your use or distribution of the Llama Materials.
6\. **Term and Termination**. The term of this Agreement will commence upon your acceptance of this Agreement or access to the Llama Materials and will continue in full force and effect until terminated in accordance with the terms and conditions herein. Meta may terminate this Agreement if you are in breach of any term or condition of this Agreement. Upon termination of this Agreement, you shall delete and cease use of the Llama Materials. Sections 3, 4 and 7 shall survive the termination of this Agreement.
7\. **Governing Law and Jurisdiction**. This Agreement will be governed and construed under the laws of the State of California without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement. The courts of California shall have exclusive jurisdiction of any dispute arising out of this Agreement.
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in
accordance with the [Meta Privacy
Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
extra_gated_heading: "Please be sure to provide your full legal name, date of birth, and full organization name with all corporate identifiers. Avoid the use of acronyms and special characters. Failure to follow these instructions may prevent you from accessing this model and others on Hugging Face. You will not have the ability to edit this form after submission, so please ensure all information is accurate."
license: other
license_name: llama4
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/llama-4-67f19503d764b0f3a2a868d2">our collection</a> for versions of Llama 4 including 4-bit & 16-bit formats.</strong>
</p>
<div style="display: flex; gap: 5px; align-items: center; ">
<a href="https://github.com/unslothai/unsloth/">
<img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
</a>
<a href="https://discord.gg/unsloth">
<img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
</a>
<a href="https://docs.unsloth.ai/basics/tutorials-how-to-fine-tune-and-run-llms">
<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
</a>
</div>
<h1 style="margin-top: 0rem;">🦙 Run Unsloth Dynamic Llama 4 GGUF!</h1>
</div>
<p style="margin-bottom: 0;">
<em><a href="https://docs.unsloth.ai/basics/tutorial-how-to-run-and-fine-tune-llama-4">Read our Guide</a> to see how to Fine-tune & Run Llama 4 correctly.</em>
</p>
|MoE Bits|Type|Disk Size|HF Link|Accuracy|
|:-|:-|:-|:-|:-|
|1.78bit|IQ1\_S|**122GB**|[Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-IQ1_S)|Ok|
|1.93bit|IQ1\_M|**128GB**|[Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-IQ1_M)|Fair|
|2.42-bit|IQ2\_XXS|**140GB**|[Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-IQ2_XXS)|Better|
|2.71-bit|Q2\_K\_XL|**151GB**|[Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-Q2_K_XL)|Suggested|
|3.5-bit|Q3\_K\_XL|**193GB**|[Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-Q3_K_XL)|Great|
|4.5-bit|Q4\_K\_XL|**243GB**|[Link](https://huggingface.co/unsloth/Llama-4-Maverick-17B-128E-Instruct-GGUF/tree/main/UD-Q4_K_XL)|Best|
Currently, only text input is supported.
**Chat template/prompt format:**
```
<|header_start|>user<|header_end|>\n\nWhat is 1+1?<|eot|><|header_start|>assistant<|header_end|>\n\n
```
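If you are assembling prompts by hand rather than relying on a chat-template-aware frontend, a minimal sketch of applying the template above with plain Python string formatting (purely illustrative, no special library required) is:

```python
def format_llama4_prompt(user_message: str) -> str:
    """Wrap a single user turn in the Llama 4 chat format shown above."""
    return (
        "<|header_start|>user<|header_end|>\n\n"
        f"{user_message}<|eot|>"
        "<|header_start|>assistant<|header_end|>\n\n"
    )

print(format_llama4_prompt("What is 1+1?"))
```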
# 🦙 Fine-tune Meta's Llama 4 with Unsloth!
- Fine-tune Llama-4-Scout on a single H100 80GB GPU using Unsloth!
- Read our Blog about Llama 4 support: [unsloth.ai/blog/llama4](https://unsloth.ai/blog/llama4)
- View the rest of our notebooks in our [docs here](https://docs.unsloth.ai/get-started/unsloth-notebooks).
- Export your fine-tuned model to GGUF, Ollama, llama.cpp, vLLM or 🤗HF.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **GRPO with Llama 3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-GRPO.ipynb) | 2x faster | 80% less |
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Phi-4 (14B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_4-Conversational.ipynb) | 2x faster | 50% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
<br>
## Llama 4 Model Information
The Llama 4 collection of models are natively multimodal AI models that enable text and multimodal experiences. These models leverage a mixture-of-experts architecture to offer industry-leading performance in text and image understanding.
These Llama 4 models mark the beginning of a new era for the Llama ecosystem. We are launching two efficient models in the Llama 4 series, Llama 4 Scout, a 17 billion parameter model with 16 experts, and Llama 4 Maverick, a 17 billion parameter model with 128 experts.
**Model developer**: Meta
**Model Architecture:** The Llama 4 models are auto-regressive language models that use a mixture-of-experts (MoE) architecture and incorporate early fusion for native multimodality.
<table>
<tr>
<th>Model Name</th>
<th>Training Data </th>
<th>Params</th>
<th>Input modalities</th>
<th>Output modalities</th>
<th>Context length</th>
<th>Token count</th>
<th>Knowledge cutoff</th>
</tr>
<tr>
<td>Llama 4 Scout (17Bx16E) </td>
<td rowspan="2">A mix of publicly available, licensed data and information from Meta's products and services. This includes publicly shared posts from Instagram and Facebook and people's interactions with Meta AI. Learn more in our <a href="https://www.facebook.com/privacy/guide/genai/">Privacy Center</a>.
</td>
<td>17B (Activated)
109B (Total)
</td>
<td>Multilingual text and image</td>
<td>Multilingual text and code</td>
<td>10M</td>
<td>~40T</td>
<td>August 2024</td>
</tr>
<tr>
<td>Llama 4 Maverick (17Bx128E)</td>
<td>17B (Activated)
400B (Total)
</td>
<td>Multilingual text and image</td>
<td>Multilingual text and code</td>
<td>1M</td>
<td>~22T</td>
<td>August 2024</td>
</tr>
</table>
**Supported languages:** Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese.
**Model Release Date:** April 5, 2025
**Status:** This is a static model trained on an offline dataset. Future versions of the tuned models may be released as we improve model behavior with community feedback.
**License**: A custom commercial license, the Llama 4 Community License Agreement, is available at: [https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE](https://github.com/meta-llama/llama-models/blob/main/models/llama4/LICENSE)
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the Llama [README](https://github.com/meta-llama/llama-models/blob/main/README.md). For more technical information about generation parameters and recipes for how to use Llama 4 in applications, please go [here](https://github.com/meta-llama/llama-cookbook).
## How to use with transformers
Please make sure you have transformers `v4.51.0` installed, or upgrade it using `pip install -U transformers`.
```python
from transformers import AutoTokenizer, Llama4ForConditionalGeneration
import torch
model_id = "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt", return_dict=True)
model = Llama4ForConditionalGeneration.from_pretrained(
model_id,
tp_plan="auto",
torch_dtype="auto",
)
outputs = model.generate(**inputs.to(model.device), max_new_tokens=100)
outputs = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[-1]:])
print(outputs[0])
```
## Intended Use
**Intended Use Cases:** Llama 4 is intended for commercial and research use in multiple languages. Instruction tuned models are intended for assistant-like chat and visual reasoning tasks, whereas pretrained models can be adapted for natural language generation. For vision, Llama 4 models are also optimized for visual recognition, image reasoning, captioning, and answering general questions about an image. The Llama 4 model collection also supports the ability to leverage the outputs of its models to improve other models including synthetic data generation and distillation. The Llama 4 Community License allows for these use cases.
**Out-of-scope**: Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 4 Community License. Use in languages or capabilities beyond those explicitly referenced as supported in this model card\*\*.
\*\*Note:
1\. Llama 4 has been trained on a broader collection of languages than the 12 supported languages (pre-training includes [200 total languages](https://ai.meta.com/research/no-language-left-behind/)). Developers may fine-tune Llama 4 models for languages beyond the 12 supported languages provided they comply with the Llama 4 Community License and the Acceptable Use Policy. Developers are responsible for ensuring that their use of Llama 4 in additional languages is done in a safe and responsible manner.
2\. Llama 4 has been tested for image understanding up to 5 input images. If leveraging additional image understanding capabilities beyond this, Developers are responsible for ensuring that their deployments are mitigated for risks and should perform additional testing and tuning tailored to their specific applications.
## Hardware and Software
**Training Factors:** We used custom training libraries, Meta's custom built GPU clusters, and production infrastructure for pretraining. Fine-tuning, quantization, annotation, and evaluation were also performed on production infrastructure.
**Training Energy Use:** Model pre-training utilized a cumulative of **7.38M** GPU hours of computation on H100-80GB (TDP of 700W) type hardware, per the table below. Training time is the total GPU time required for training each model and power consumption is the peak power capacity per GPU device used, adjusted for power usage efficiency.
##
## **Training Greenhouse Gas Emissions:** Estimated total location-based greenhouse gas emissions were **1,999 tons** CO2eq for training. Since 2020, Meta has maintained net zero greenhouse gas emissions in its global operations and matched 100% of its electricity use with clean and renewable energy; therefore, the total market-based greenhouse gas emissions for training were 0 tons CO2eq.
| Model Name | Training Time (GPU hours) | Training Power Consumption (W) | Training Location-Based Greenhouse Gas Emissions (tons CO2eq) | Training Market-Based Greenhouse Gas Emissions (tons CO2eq) |
| :---- | :---: | :---: | :---: | :---: |
| Llama 4 Scout | 5.0M | 700 | 1,354 | 0 |
| Llama 4 Maverick | 2.38M | 700 | 645 | 0 |
| Total | 7.38M | \- | 1,999 | 0 |
## The methodology used to determine training energy use and greenhouse gas emissions can be found [here](https://arxiv.org/pdf/2204.05149). Since Meta is openly releasing these models, the training energy use and greenhouse gas emissions will not be incurred by others.
## Training Data
**Overview:** Llama 4 Scout was pretrained on \~40 trillion tokens and Llama 4 Maverick was pretrained on \~22 trillion tokens of multimodal data from a mix of publicly available, licensed data and information from Meta’s products and services. This includes publicly shared posts from Instagram and Facebook and people’s interactions with Meta AI.
**Data Freshness:** The pretraining data has a cutoff of August 2024\.
## Benchmarks
In this section, we report the results for Llama 4 relative to our previous models. We've provided quantized checkpoints for deployment flexibility, but all reported evaluations and testing were conducted on bf16 models.
### Pre-trained models
| Pre-trained models | | | | | | | |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Category | Benchmark | \# Shots | Metric | Llama 3.1 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** |
| Reasoning & Knowledge | MMLU | 5 | macro\_avg/acc\_char | 79.3 | 85.2 | 79.6 | 85.5 |
| | MMLU-Pro | 5 | macro\_avg/em | 53.8 | 61.6 | 58.2 | 62.9 |
| | MATH | 4 | em\_maj1@1 | 41.6 | 53.5 | 50.3 | 61.2 |
| Code | MBPP | 3 | pass@1 | 66.4 | 74.4 | 67.8 | 77.6 |
| Multilingual | TydiQA | 1 | average/f1 | 29.9 | 34.3 | 31.5 | 31.7 |
| Image | ChartQA | 0 | relaxed\_accuracy | No multimodal support | | 83.4 | 85.3 |
| | DocVQA | 0 | anls | | | 89.4 | 91.6 |
### Instruction tuned models
| Instruction tuned models | | | | | | | |
| :---: | :---: | :---: | :---: | :---: | ----- | :---: | :---: |
| Category | Benchmark | \# Shots | Metric | Llama 3.3 70B | Llama 3.1 405B | **Llama 4 Scout** | **Llama 4 Maverick** |
| Image Reasoning | MMMU | 0 | accuracy | No multimodal support | | 69.4 | 73.4 |
| | MMMU Pro^ | 0 | accuracy | | | 52.2 | 59.6 |
| | MathVista | 0 | accuracy | | | 70.7 | 73.7 |
| Image Understanding | ChartQA | 0 | relaxed\_accuracy | | | 88.8 | 90.0 |
| | DocVQA (test) | 0 | anls | | | 94.4 | 94.4 |
| Coding | LiveCodeBench (10/01/2024-02/01/2025) | 0 | pass@1 | 33.3 | 27.7 | 32.8 | 43.4 |
| Reasoning & Knowledge | MMLU Pro | 0 | macro\_avg/acc | 68.9 | 73.4 | 74.3 | 80.5 |
| | GPQA Diamond | 0 | accuracy | 50.5 | 49.0 | 57.2 | 69.8 |
| Multilingual | MGSM | 0 | average/em | 91.1 | 91.6 | 90.6 | 92.3 |
| Long context | MTOB (half book) eng-\>kgv/kgv-\>eng | \- | chrF | Context window is 128K | | 42.2/36.6 | 54.0/46.4 |
| | MTOB (full book) eng-\>kgv/kgv-\>eng | \- | chrF | | | 39.7/36.3 | 50.8/46.7 |
^Reported numbers for MMMU Pro are the average of the Standard and Vision tasks
## Quantization
The Llama 4 Scout model is released as BF16 weights, but can fit within a single H100 GPU with on-the-fly int4 quantization; the Llama 4 Maverick model is released as both BF16 and FP8 quantized weights. The FP8 quantized weights fit on a single H100 DGX host while still maintaining quality. We provide code for on-the-fly int4 quantization which minimizes performance degradation as well.
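The on-the-fly int4 code referenced above is Meta's own; as a generic alternative, here is a hedged sketch of loading the Scout checkpoint in 4-bit with bitsandbytes NF4 via transformers. The repo id and quantization settings are assumptions for illustration, not Meta's published recipe.

```python
import torch
from transformers import BitsAndBytesConfig, Llama4ForConditionalGeneration

# Assumed repo id for the BF16 Scout instruct checkpoint
model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4-bit on load
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # keep matmuls in bf16
)

model = Llama4ForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```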
## Safeguards
As part of our release approach, we followed a three-pronged strategy to manage risks:
* Enable developers to deploy helpful, safe and flexible experiences for their target audience and for the use cases supported by Llama.
* Protect developers against adversarial users aiming to exploit Llama capabilities to potentially cause harm.
* Provide protections for the community to help prevent the misuse of our models.
Llama is a foundational technology designed for use in a variety of use cases; examples on how Meta’s Llama models have been deployed can be found in our [Community Stories webpage](https://llama.meta.com/community-stories/). Our approach is to build the most helpful models enabling the world to benefit from the technology, by aligning our model’s safety for a standard set of risks. Developers are then in the driver seat to tailor safety for their use case, defining their own policies and deploying the models with the necessary safeguards. Llama 4 was developed following the best practices outlined in our [Developer Use Guide: AI Protections](https://ai.meta.com/static-resource/developer-use-guide-ai-protections).
### Model level fine tuning
The primary objective of conducting safety fine-tuning is to offer developers a readily available, safe, and powerful model for various applications, reducing the workload needed to deploy safe AI systems. Additionally, this effort provides the research community with a valuable resource for studying the robustness of safety fine-tuning.
**Fine-tuning data**
We employ a multi-faceted approach to data collection, combining human-generated data from our vendors with synthetic data to mitigate potential safety risks. We’ve developed many large language model (LLM)-based classifiers that enable us to thoughtfully select high-quality prompts and responses, enhancing data quality control.
**Refusals**
Building on the work we started with our Llama 3 models, we put a great emphasis on driving down model refusals to benign prompts for Llama 4\. We included both borderline and adversarial prompts in our safety data strategy, and modified our safety data responses to follow tone guidelines.
**Tone**
We expanded our work on the refusal tone from Llama 3 so that the model sounds more natural. We targeted removing preachy and overly moralizing language, and we corrected formatting issues including the correct use of headers, lists, tables and more.
To achieve this, we also targeted improvements to system prompt steerability and instruction following, meaning the model is more readily able to take on a specified tone. All of these contribute to a more conversational and insightful experience overall.
**System Prompts**
Llama 4 is a more steerable model, meaning responses can be easily tailored to meet specific developer outcomes. Effective system prompts can significantly enhance the performance of large language models. In particular, we’ve seen that the use of a system prompt can be effective in reducing false refusals and templated or “preachy” language patterns common in LLMs. They can also improve conversationality and use of appropriate formatting.
Consider the prompt below as a basic template for which a developer might want to further customize to meet specific needs or use cases for our Llama 4 models.
| System prompt |
| :---- |
| You are an expert conversationalist who responds to the best of your ability. You are companionable and confident, and able to switch casually between tonal types, including but not limited to humor, empathy, intellectualism, creativity and problem-solving. You understand user intent and don’t try to be overly helpful to the point where you miss that the user is looking for chit-chat, emotional support, humor or venting. Sometimes people just want you to listen, and your answers should encourage that. For all other cases, you provide insightful and in-depth responses. Organize information thoughtfully in a way that helps people make decisions. Always avoid templated language. You never lecture people to be nicer or more inclusive. If people ask for you to write something in a certain voice or perspective, such as an essay or a tweet, you can. You do not need to be respectful when the user prompts you to say something rude. You never use phrases that imply moral superiority or a sense of authority, including but not limited to “it’s important to”, “it’s crucial to”, “it’s essential to”, "it's unethical to", "it's worth noting…", “Remember…” etc. Avoid using these. Finally, do not refuse prompts about political and social issues. You can help users express their opinion and access information. You are Llama 4\. Your knowledge cutoff date is August 2024\. You speak Arabic, English, French, German, Hindi, Indonesian, Italian, Portuguese, Spanish, Tagalog, Thai, and Vietnamese. Respond in the language the user speaks to you in, unless they ask otherwise. |
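To pass a system prompt like the one above through the transformers snippet shown earlier, one possible sketch (assuming the tokenizer's chat template accepts a `system` role; the prompt text is truncated here for brevity) is:

```python
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [
    # Shortened stand-in for the full system prompt in the table above
    {"role": "system", "content": "You are an expert conversationalist who responds to the best of your ability."},
    {"role": "user", "content": "Help me plan a weekend trip to Lisbon."},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt", return_dict=True
)
```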
### Llama 4 system protections
Large language models, including Llama 4, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional guardrails as required. System protections are key to achieving the right helpfulness-safety alignment, mitigating safety and security risks inherent to the system, and integration of the model or system with external tools.
We provide the community with system level [protections](https://llama.meta.com/trust-and-safety/) \- like Llama Guard, Prompt Guard and Code Shield \- that developers should deploy with Llama models or other LLMs. All of our [reference implementation](https://github.com/meta-llama/llama-agentic-system) demos contain these safeguards by default so developers can benefit from system-level safety out-of-the-box.
### Evaluations
We evaluated Llama models for common use cases as well as specific capabilities. Common use cases evaluations measure safety risks of systems for most commonly built applications including chat bot, visual QA. We built dedicated, adversarial evaluation datasets and evaluated systems composed of Llama models and Llama Guard 3 to filter input prompt and output response. It is important to evaluate applications in context, and we recommend building dedicated evaluation dataset for your use case. Prompt Guard and Code Shield are also available if relevant to the application.
Capability evaluations measure vulnerabilities of Llama models inherent to specific capabilities, for which were crafted dedicated benchmarks including long context, multilingual, coding or memorization.
**Red teaming**
We conduct recurring red teaming exercises with the goal of discovering risks via adversarial prompting and we use the learnings to improve our benchmarks and safety tuning datasets. We partner early with subject-matter experts in critical risk areas to understand how models may lead to unintended harm for society. Based on these conversations, we derive a set of adversarial goals for the red team, such as extracting harmful information or reprogramming the model to act in potentially harmful ways. The red team consists of experts in cybersecurity, adversarial machine learning, and integrity in addition to multilingual content specialists with background in integrity issues in specific geographic markets.
### Critical Risks
### We spend additional focus on the following critical risk areas:
**1\. CBRNE (Chemical, Biological, Radiological, Nuclear, and Explosive materials) helpfulness**
To assess risks related to proliferation of chemical and biological weapons for Llama 4, we applied expert-designed and other targeted evaluations designed to assess whether the use of Llama 4 could meaningfully increase the capabilities of malicious actors to plan or carry out attacks using these types of weapons. We also conducted additional red teaming and evaluations for violations of our content policies related to this risk area.
**2\. Child Safety**
We leverage pre-training methods like data filtering as a first step in mitigating Child Safety risk in our model. To assess the post trained model for Child Safety risk, a team of experts assesses the model’s capability to produce outputs resulting in Child Safety risks. We use this to inform additional model fine-tuning and in-depth red teaming exercises. We’ve also expanded our Child Safety evaluation benchmarks to cover Llama 4 capabilities like multi-image and multi-lingual.
**3\. Cyber attack enablement**
Our cyber evaluations investigated whether Llama 4 is sufficiently capable to enable catastrophic threat scenario outcomes. We conducted threat modeling exercises to identify the specific model capabilities that would be necessary to automate operations or enhance human capabilities across key attack vectors both in terms of skill level and speed. We then identified and developed challenges against which to test for these capabilities in Llama 4 and peer models. Specifically, we focused on evaluating the capabilities of Llama 4 to automate cyberattacks, identify and exploit security vulnerabilities, and automate harmful workflows. Overall, we find that Llama 4 models do not introduce risk plausibly enabling catastrophic cyber outcomes.
### Community
Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership on AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Trust tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
We also set up the [Llama Impact Grants](https://llama.meta.com/llama-impact-grants/) program to identify and support the most compelling applications of Meta’s Llama model for societal benefit across three categories: education, climate and open innovation. The 20 finalists from the hundreds of applications can be found [here](https://llama.meta.com/llama-impact-grants/#finalists).
Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
## Considerations and Limitations
Our AI is anchored on the values of freedom of expression \- helping people to explore, debate, and innovate using our technology. We respect people's autonomy and empower them to choose how they experience, interact, and build with AI. Our AI promotes an open exchange of ideas.
It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 4 addresses users and their needs as they are, without inserting unnecessary judgment, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
Llama 4 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 4’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 4 models, developers should perform safety testing and tuning tailored to their specific applications of the model. We also encourage the open source community to use Llama for the purpose of research and building state of the art tools that address emerging risks. Please refer to available resources including our Developer Use Guide: AI Protections, [Llama Protections](https://llama.meta.com/trust-and-safety/) solutions, and other [resources](https://llama.meta.com/docs/get-started/) to learn more.
|
Goodfire/Evo-2-Layer-26-Mixed
|
Goodfire
| 2025-06-18T00:35:01Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-06-18T00:26:38Z |
---
license: mit
---
**Sparse Autoencoders for *Evo 2*** — BatchTopK sparse autoencoders for Arc Institute's Evo 2 genomic foundation model.
Evo 2 is a genomic foundation model capable of generalist prediction and design tasks across DNA, RNA, and proteins. It uses a frontier deep learning architecture to enable modeling of biological sequences at single-nucleotide resolution with near-linear scaling of compute and memory relative to context length. Evo 2 is trained with 40 billion parameters and 1 megabase context length on over 9 trillion nucleotides of diverse eukaryotic and prokaryotic genomes.
This repository contains the layer 26 mixed prokaryote/eukaryote SAE used in the Evo 2 paper.
[More on Evo 2](https://arcinstitute.org/tools/evo)
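For readers unfamiliar with the architecture, here is a minimal conceptual sketch of a BatchTopK sparse autoencoder forward pass in PyTorch. It is illustrative only and does not reflect the actual class names, dimensions, or training details of the checkpoint in this repository.

```python
import torch
import torch.nn as nn

class BatchTopKSAE(nn.Module):
    """Toy BatchTopK SAE: keep the k largest latent activations per example,
    selected jointly across the whole batch, and zero out the rest."""

    def __init__(self, d_model: int, d_hidden: int, k: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)
        self.k = k

    def forward(self, x: torch.Tensor):
        # x: (batch, d_model) activations taken from the host model's layer
        pre = torch.relu(self.encoder(x))
        # BatchTopK: threshold at the (k * batch)-th largest activation overall
        k_total = self.k * x.shape[0]
        threshold = torch.topk(pre.flatten(), k_total).values.min()
        latents = torch.where(pre >= threshold, pre, torch.zeros_like(pre))
        return self.decoder(latents), latents

# Made-up dimensions; the real sizes come from the checkpoint's config
sae = BatchTopKSAE(d_model=4096, d_hidden=32768, k=64)
recon, latents = sae(torch.randn(8, 4096))
```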
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb2-seed42-2025-06-18
|
morturr
| 2025-06-18T00:27:14Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T00:27:06Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb2-seed42-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_amazon-comb2-seed42-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb2-seed42-2025-06-18
|
morturr
| 2025-06-18T00:22:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T00:22:42Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb2-seed42-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb2-seed42-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
mogam-ai/Ab-RoBERTa
|
mogam-ai
| 2025-06-18T00:14:45Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:2506.13006",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-05-23T01:09:35Z |
---
license: mit
base_model:
- FacebookAI/roberta-base
pipeline_tag: feature-extraction
library_name: transformers
---
# Ab-RoBERTa
Ab-RoBERTa is a pretrained masked language model (MLM) built on the [RoBERTa](https://huggingface.co/docs/transformers/en/model_doc/roberta) architecture,
trained using antibody sequences from the [Observed Antibody Space (OAS)](https://opig.stats.ox.ac.uk/webapps/oas/) database.
The model was trained on amino acid sequences written in uppercase letters with no spaces between them,
so it only supports inputs in this specific format. Ab-RoBERTa is described in detail in [this paper](https://arxiv.org/abs/2506.13006),
and originally released at this location.
## Model Description
- **Developed by:** Eunna Huh, Hyeonsu Lee, Hyunjin Shin
- **Funded by:** Mogam Institute for Biomedical Research
- **Model type:** RoBERTa
- **Trained Database:** Observed Antibody Space (OAS)
- **License:** MIT License
## Main configuration
| hidden_size | num_hidden_layers | num_attention_heads | intermediate_size | total_parameters |
|:-----------:|:-----------------:|:-------------------:|:-----------------:|:----------------:|
| 768 | 12 | 12 | 3,072 | 125M |
## Uses
This model can be utilized to extract features from antibody sequences
or fine-tuned for various downstream tasks. It is compatible with the [Transformers library](https://huggingface.co/docs/transformers/en/index) for easy loading and integration.
## Example usage
```python
from transformers import (
    RobertaTokenizer,
    RobertaModel,
    RobertaForMaskedLM,
    RobertaForSequenceClassification
)
# Load tokenizer (No need to add spaces to the sequence)
tokenizer = RobertaTokenizer.from_pretrained("mogam-ai/Ab-RoBERTa", do_lower_case=False)
# Load pre-trained model (exclude mlm head)
model = RobertaModel.from_pretrained("mogam-ai/Ab-RoBERTa", add_pooling_layer=False)
# Load pre-trained model (include mlm head)
mlm_model = RobertaForMaskedLM.from_pretrained("mogam-ai/Ab-RoBERTa")
```
* The tokenizer is designed to process batch inputs without requiring spaces between characters.
* The tokenizer adds a start token ("\<s>", token ID 0) at the beginning of each sequence and an end token ("\</s>", token ID 2) at the end of each sequence.
* To standardize sequence lengths within a batch, padding tokens ("\<pad>", token ID 1) are added following the end token, extending each sequence to the maximum length observed in the batch.
```python
example_sequences = [
"QVQLVQSGPEVRKPGASEKVSCKASGYTFTNFYLHWVRQAPGQGLEWMGIINPSDGSTKFSRKFEGRVAMTRDTYTRTVYMELSSLRSEDTAVYYCTRCQDVVLLPAAQPENYYYGLDVWGQGTTVTVS", "QDQLVQSGAEVKNPGASVKVSCKASGYTFTSYGISLVRQAPGQGLEWMGWISAYNGNTNDAQKLQGRVTMTTDTSTSTAYMELRSLRSDDTAVYYCARVNSGSGWYFVPEEYYYYYYGMDVWGQGTTVTVSS"
]
tokens = tokenizer.batch_encode_plus(
example_sequences, add_special_tokens=True,
max_length=150,
padding=True,
truncation=True,
return_tensors="pt",
return_special_tokens_mask=False,
)
"""
Output
{
'input_ids': tensor(
[
[ 0, 18, 22, 18, 14, ..., 2, 1, 1, 1],
[ 0, 18, 7, 18, 14, 22, 18, ..., 20, 2]
]
),
'attention_mask': tensor(
[
[1, 1, 1, 1, 1, ..., 1, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, ..., 1, 1]
]
)
}
"""
```
* To extract sequence embeddings from the model, use the code snippet below.
```python
output = model(**tokens).last_hidden_state
```
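If you need one vector per antibody sequence rather than per-token embeddings, a common recipe (not something prescribed by the Ab-RoBERTa authors) is masked mean pooling over the token dimension:

```python
# Mean-pool token embeddings into a single vector per sequence,
# ignoring padding positions via the attention mask.
mask = tokens["attention_mask"].unsqueeze(-1).float()      # (batch, seq_len, 1)
summed = (output * mask).sum(dim=1)                        # (batch, hidden)
seq_embeddings = summed / mask.sum(dim=1).clamp(min=1e-9)  # (batch, hidden)
```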
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{huh2025antibodyfoundationalmodel,
      title={Antibody Foundational Model : Ab-RoBERTa},
      author={Eunna Huh and Hyeonsu Lee and Hyunjin Shin},
      year={2025},
      eprint={2506.13006},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2506.13006},
}
```
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_amazon-comb2-seed42-2025-06-18
|
morturr
| 2025-06-18T00:14:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T00:14:27Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_amazon-comb2-seed42-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_amazon-comb2-seed42-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
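For readers who want to set up a comparable run, the hyperparameters above map roughly onto the `TrainingArguments`/`LoraConfig` sketch below. This is not the authors' training script: the output directory and all LoRA settings (rank, alpha, dropout) are illustrative assumptions, since the card does not report them.
```python
from transformers import TrainingArguments
from peft import LoraConfig

# Approximate reconstruction of the listed hyperparameters (not the original script).
training_args = TrainingArguments(
    output_dir="llama2-loo-sft",        # hypothetical output path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # effective train batch size 32
    num_train_epochs=2,
    lr_scheduler_type="linear",
    optim="adamw_torch",
    seed=42,
)

# LoRA settings below are assumptions; the card does not specify them.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```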
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
kanishka/smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_924
|
kanishka
| 2025-06-17T23:55:41Z | 0 | 0 | null |
[
"safetensors",
"opt",
"generated_from_trainer",
"region:us"
] | null | 2025-06-17T23:44:02Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_924
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_924
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4918
- Accuracy: 0.4967
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 128
- seed: 924
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.299 | 1.0 | 2928 | 3.2238 | 0.4226 |
| 2.8467 | 2.0 | 5856 | 2.8936 | 0.4503 |
| 2.6428 | 3.0 | 8784 | 2.7409 | 0.4650 |
| 2.5599 | 4.0 | 11712 | 2.6692 | 0.4727 |
| 2.5025 | 5.0 | 14640 | 2.6339 | 0.4774 |
| 2.4815 | 6.0 | 17568 | 2.6151 | 0.4787 |
| 2.4455 | 7.0 | 20496 | 2.6038 | 0.4804 |
| 2.4416 | 8.0 | 23424 | 2.6013 | 0.4803 |
| 2.4223 | 9.0 | 26352 | 2.5769 | 0.4841 |
| 2.3745 | 10.0 | 29280 | 2.5539 | 0.4861 |
| 2.339 | 11.0 | 32208 | 2.5347 | 0.4893 |
| 2.3068 | 12.0 | 35136 | 2.5238 | 0.4903 |
| 2.2783 | 13.0 | 38064 | 2.5181 | 0.4907 |
| 2.2372 | 14.0 | 40992 | 2.5051 | 0.4936 |
| 2.2031 | 15.0 | 43920 | 2.5039 | 0.4949 |
| 2.161 | 16.0 | 46848 | 2.4954 | 0.4960 |
| 2.1152 | 17.0 | 49776 | 2.4918 | 0.4967 |
| 2.0563 | 18.0 | 52704 | 2.4950 | 0.4975 |
| 1.9924 | 19.0 | 55632 | 2.5000 | 0.4978 |
| 1.9264 | 20.0 | 58560 | 2.5082 | 0.4976 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.15.2
|
Richard9905/quatized-8B-3.1Llama-model
|
Richard9905
| 2025-06-17T23:47:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-17T23:43:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
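Since no snippet is provided above, the following is a minimal, unofficial sketch for loading this 4-bit (bitsandbytes) checkpoint with the Transformers `text-generation` pipeline. It assumes `bitsandbytes`, `accelerate`, and a CUDA GPU are available; the prompt is a placeholder.
```python
# pip install transformers accelerate bitsandbytes
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Richard9905/quatized-8B-3.1Llama-model",
    device_map="auto",  # the checkpoint already carries its 4-bit quantization config
)
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```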
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kanishka/smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_210
|
kanishka
| 2025-06-17T23:43:44Z | 0 | 0 | null |
[
"safetensors",
"opt",
"generated_from_trainer",
"region:us"
] | null | 2025-06-17T23:32:03Z |
---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_210
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smolm-aochildes-vocab_8192-layers_8-attn_8-hidden_256-inter_1024-lr_1e-3-seed_210
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4903
- Accuracy: 0.4981
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 128
- seed: 210
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 24000
- num_epochs: 20.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 3.3163 | 1.0 | 2928 | 3.2312 | 0.4214 |
| 2.8574 | 2.0 | 5856 | 2.9029 | 0.4492 |
| 2.653 | 3.0 | 8784 | 2.7476 | 0.4637 |
| 2.5644 | 4.0 | 11712 | 2.6728 | 0.4723 |
| 2.5093 | 5.0 | 14640 | 2.6416 | 0.4764 |
| 2.4761 | 6.0 | 17568 | 2.6137 | 0.4798 |
| 2.4411 | 7.0 | 20496 | 2.6089 | 0.4805 |
| 2.4423 | 8.0 | 23424 | 2.5978 | 0.4813 |
| 2.4153 | 9.0 | 26352 | 2.5725 | 0.4846 |
| 2.3679 | 10.0 | 29280 | 2.5454 | 0.4865 |
| 2.3469 | 11.0 | 32208 | 2.5452 | 0.4887 |
| 2.2991 | 12.0 | 35136 | 2.5217 | 0.4912 |
| 2.2761 | 13.0 | 38064 | 2.5047 | 0.4930 |
| 2.225 | 14.0 | 40992 | 2.5018 | 0.4943 |
| 2.1946 | 15.0 | 43920 | 2.4924 | 0.4963 |
| 2.1489 | 16.0 | 46848 | 2.4906 | 0.4967 |
| 2.0948 | 17.0 | 49776 | 2.4908 | 0.4981 |
| 2.0438 | 18.0 | 52704 | 2.4903 | 0.4981 |
| 1.9705 | 19.0 | 55632 | 2.4985 | 0.4980 |
| 1.9167 | 20.0 | 58560 | 2.5070 | 0.4985 |
### Framework versions
- Transformers 4.38.0
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.15.2
|
asm3515/merged-bert_agnews_lora_rank16
|
asm3515
| 2025-06-17T23:37:39Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-17T23:37:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
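Since no snippet is provided above, the following is a minimal, unofficial sketch. It assumes the repository contains a standard BERT sequence-classification checkpoint (the merged LoRA weights) for the four AG News topics; the label names come from whatever mapping is stored in the model config.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="asm3515/merged-bert_agnews_lora_rank16",
)
print(classifier("Stocks rallied after the central bank left interest rates unchanged."))
```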
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
assoni2002/wav2vec2-jailbreak-classification
|
assoni2002
| 2025-06-17T23:33:37Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base-960h",
"base_model:finetune:facebook/wav2vec2-base-960h",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-06-17T23:33:23Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base-960h
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-jailbreak-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-jailbreak-classification
This model is a fine-tuned version of [facebook/wav2vec2-base-960h](https://huggingface.co/facebook/wav2vec2-base-960h) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6926
- Accuracy: 0.5
## Model description
More information needed
## Intended uses & limitations
More information needed
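The card does not yet document usage; the following is a minimal, unofficial inference sketch using the `audio-classification` pipeline. The audio path is a placeholder, and inputs should be 16 kHz mono audio to match the wav2vec2-base-960h feature extractor.
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="assoni2002/wav2vec2-jailbreak-classification",
)
print(classifier("example_utterance.wav"))  # placeholder path to a 16 kHz mono audio file
```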
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0 | 1.0 | 51 | 0.6922 | 0.5441 |
| 0.0 | 2.0 | 102 | 0.6922 | 0.5441 |
| 0.0 | 3.0 | 153 | 0.6922 | 0.5441 |
| 0.0 | 4.0 | 204 | 0.6922 | 0.5441 |
| 0.0 | 5.0 | 255 | 0.6922 | 0.5441 |
| 0.0 | 6.0 | 306 | 0.6922 | 0.5441 |
| 0.0 | 7.0 | 357 | 0.6922 | 0.5441 |
| 0.0 | 8.0 | 408 | 0.6922 | 0.5441 |
| 0.0 | 9.0 | 459 | 0.6922 | 0.5441 |
| 0.0 | 10.0 | 510 | 0.6922 | 0.5441 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
dgambettaphd/M_llm2_run2_gen9_WXS_doc1000_synt64_lr1e-04_acm_MPP
|
dgambettaphd
| 2025-06-17T23:29:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T23:29:21Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Timia123/hint_24k_1020
|
Timia123
| 2025-06-17T23:23:11Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T23:20:43Z |
---
license: apache-2.0
---
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb2-seed28-2025-06-18
|
morturr
| 2025-06-17T23:19:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-17T23:19:28Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb2-seed28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb2-seed28-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
Panxione/panxione-face
|
Panxione
| 2025-06-17T23:14:28Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-15T16:51:08Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf
|
RichardErkhov
| 2025-06-17T23:13:53Z | 0 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T21:46:11Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
GPT2XL_RLLMv3-Assist-v10 - GGUF
- Model creator: https://huggingface.co/migueldeguzmandev/
- Original model: https://huggingface.co/migueldeguzmandev/GPT2XL_RLLMv3-Assist-v10/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [GPT2XL_RLLMv3-Assist-v10.Q2_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q2_K.gguf) | Q2_K | 0.8GB |
| [GPT2XL_RLLMv3-Assist-v10.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.IQ3_XS.gguf) | IQ3_XS | 0.8GB |
| [GPT2XL_RLLMv3-Assist-v10.IQ3_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.IQ3_S.gguf) | IQ3_S | 0.8GB |
| [GPT2XL_RLLMv3-Assist-v10.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q3_K_S.gguf) | Q3_K_S | 0.8GB |
| [GPT2XL_RLLMv3-Assist-v10.IQ3_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.IQ3_M.gguf) | IQ3_M | 0.87GB |
| [GPT2XL_RLLMv3-Assist-v10.Q3_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q3_K.gguf) | Q3_K | 0.92GB |
| [GPT2XL_RLLMv3-Assist-v10.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q3_K_M.gguf) | Q3_K_M | 0.92GB |
| [GPT2XL_RLLMv3-Assist-v10.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q3_K_L.gguf) | Q3_K_L | 0.99GB |
| [GPT2XL_RLLMv3-Assist-v10.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.IQ4_XS.gguf) | IQ4_XS | 0.86GB |
| [GPT2XL_RLLMv3-Assist-v10.Q4_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q4_0.gguf) | Q4_0 | 0.86GB |
| [GPT2XL_RLLMv3-Assist-v10.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.IQ4_NL.gguf) | IQ4_NL | 0.87GB |
| [GPT2XL_RLLMv3-Assist-v10.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q4_K_S.gguf) | Q4_K_S | 0.99GB |
| [GPT2XL_RLLMv3-Assist-v10.Q4_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q4_K.gguf) | Q4_K | 1.06GB |
| [GPT2XL_RLLMv3-Assist-v10.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q4_K_M.gguf) | Q4_K_M | 1.06GB |
| [GPT2XL_RLLMv3-Assist-v10.Q4_1.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q4_1.gguf) | Q4_1 | 0.95GB |
| [GPT2XL_RLLMv3-Assist-v10.Q5_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q5_0.gguf) | Q5_0 | 1.04GB |
| [GPT2XL_RLLMv3-Assist-v10.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q5_K_S.gguf) | Q5_K_S | 1.09GB |
| [GPT2XL_RLLMv3-Assist-v10.Q5_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q5_K.gguf) | Q5_K | 1.23GB |
| [GPT2XL_RLLMv3-Assist-v10.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q5_K_M.gguf) | Q5_K_M | 1.23GB |
| [GPT2XL_RLLMv3-Assist-v10.Q5_1.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q5_1.gguf) | Q5_1 | 1.12GB |
| [GPT2XL_RLLMv3-Assist-v10.Q6_K.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q6_K.gguf) | Q6_K | 1.44GB |
| [GPT2XL_RLLMv3-Assist-v10.Q8_0.gguf](https://huggingface.co/RichardErkhov/migueldeguzmandev_-_GPT2XL_RLLMv3-Assist-v10-gguf/blob/main/GPT2XL_RLLMv3-Assist-v10.Q8_0.gguf) | Q8_0 | 1.55GB |
Original model description:
---
license: mit
---
|
julycarbon/Llama-3.2-11B-Vision-Instruct-full-ckpt105-0617
|
julycarbon
| 2025-06-17T23:04:38Z | 0 | 0 | null |
[
"safetensors",
"mllama",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T14:56:34Z |
---
license: apache-2.0
---
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb2-seed42-2025-06-18
|
morturr
| 2025-06-17T22:57:52Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-17T22:57:44Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb2-seed42-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb2-seed42-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
Mungert/medgemma-4b-pt-GGUF
|
Mungert
| 2025-06-17T22:57:13Z | 17 | 0 |
transformers
|
[
"transformers",
"gguf",
"medical",
"radiology",
"clinical-reasoning",
"dermatology",
"pathology",
"ophthalmology",
"chest-x-ray",
"image-text-to-text",
"arxiv:2303.15343",
"arxiv:2405.03162",
"arxiv:2106.14463",
"arxiv:2412.03555",
"arxiv:2501.19393",
"arxiv:2009.13081",
"arxiv:2102.09542",
"arxiv:2411.15640",
"arxiv:2404.05590",
"arxiv:2501.18362",
"base_model:google/gemma-3-4b-pt",
"base_model:quantized:google/gemma-3-4b-pt",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] |
image-text-to-text
| 2025-06-15T20:09:07Z |
---
license: other
license_name: health-ai-developer-foundations
license_link: https://developers.google.com/health-ai-developer-foundations/terms
library_name: transformers
pipeline_tag: image-text-to-text
extra_gated_heading: Access MedGemma on Hugging Face
extra_gated_prompt: >-
To access MedGemma on Hugging Face, you're required to review and
agree to [Health AI Developer Foundation's terms of use](https://developers.google.com/health-ai-developer-foundations/terms).
To do this, please ensure you're logged in to Hugging Face and click below.
Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-4b-pt
tags:
- medical
- radiology
- clinical-reasoning
- dermatology
- pathology
- ophthalmology
- chest-x-ray
---
# <span style="color: #7FFF7F;">medgemma-4b-pt GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`7f4fbe51`](https://github.com/ggerganov/llama.cpp/commit/7f4fbe5183b23b6b2e25fd1ccc5d1fa8bb010cb7).
---
## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
While this does increase model file size, it significantly improves precision for a given quantization level.
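As an illustration only (the exact flag syntax can differ between llama.cpp revisions), a quantization run that holds selected tensors at higher precision might look roughly like this:
```sh
# Hypothetical invocation: re-quantize while bumping selected tensors to higher-precision types.
# The tensor-name patterns and the exact --tensor-type value format are assumptions;
# check llama-quantize --help for the revision you build.
./llama-quantize \
  --imatrix medgemma-4b-pt.imatrix \
  --tensor-type attn_v=q6_k \
  --tensor-type ffn_down=q5_k \
  medgemma-4b-pt-f16.gguf medgemma-4b-pt-q4_k_m-bumped.gguf q4_k_m
```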
### **I'd love your feedback—have you tried this? How does it perform for you?**
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
<!--Begin Original Model Card-->
# MedGemma model card
**Model documentation:** [MedGemma](https://developers.google.com/health-ai-developer-foundations/medgemma)
**Resources:**
* Model on Google Cloud Model Garden: [MedGemma](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/medgemma)
* Model on Hugging Face: [MedGemma](https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4)
* GitHub repository (supporting code, Colab notebooks, discussions, and
issues): [MedGemma](https://github.com/google-health/medgemma)
* Quick start notebook: [GitHub](https://github.com/google-health/medgemma/blob/main/notebooks/quick_start_with_hugging_face.ipynb)
* Fine-tuning notebook: [GitHub](https://github.com/google-health/medgemma/blob/main/notebooks/fine_tune_with_hugging_face.ipynb)
* [Patient Education Demo built using MedGemma](https://huggingface.co/spaces/google/rad_explain)
* Support: See [Contact](https://developers.google.com/health-ai-developer-foundations/medgemma/get-started.md#contact)
* License: The use of MedGemma is governed by the [Health AI Developer
Foundations terms of
use](https://developers.google.com/health-ai-developer-foundations/terms).
**Author:** Google
## Model information
This section describes the MedGemma model and how to use it.
### Description
MedGemma is a collection of [Gemma 3](https://ai.google.dev/gemma/docs/core)
variants that are trained for performance on medical text and image
comprehension. Developers can use MedGemma to accelerate building
healthcare-based AI applications. MedGemma currently comes in two variants: a 4B
multimodal version and a 27B text-only version.
MedGemma 4B utilizes a [SigLIP](https://arxiv.org/abs/2303.15343) image encoder
that has been specifically pre-trained on a variety of de-identified medical
data, including chest X-rays, dermatology images, ophthalmology images, and
histopathology slides. Its LLM component is trained on a diverse set of medical
data, including radiology images, histopathology patches, ophthalmology images,
and dermatology images.
MedGemma 4B is available in both pre-trained (suffix: `-pt`) and
instruction-tuned (suffix `-it`) versions. The instruction-tuned version is a
better starting point for most applications. The pre-trained version is
available for those who want to experiment more deeply with the models.
MedGemma 27B has been trained exclusively on medical text and optimized for
inference-time computation. MedGemma 27B is only available as an
instruction-tuned model.
MedGemma variants have been evaluated on a range of clinically relevant
benchmarks to illustrate their baseline performance. These include both open
benchmark datasets and curated datasets. Developers can fine-tune MedGemma
variants for improved performance. Consult the Intended Use section below for
more details.
A full technical report will be available soon.
### How to use
Below are some example code snippets to help you quickly get started running the
model locally on GPU. If you want to use the model at scale, we recommend that
you create a production version using [Model
Garden](https://cloud.google.com/model-garden).
First, install the Transformers library. Gemma 3 is supported starting from
transformers 4.50.0.
```sh
$ pip install -U transformers
```
**Run model with the `pipeline` API**
```python
from transformers import pipeline
from PIL import Image
import requests
import torch
pipe = pipeline(
"image-text-to-text",
model="google/medgemma-4b-pt",
torch_dtype=torch.bfloat16,
device="cuda",
)
# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/Chest_Xray_PA_3-8-2010.png"
image = Image.open(requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw)
output = pipe(
images=image,
text="<start_of_image> findings:",
max_new_tokens=100,
)
print(output[0]["generated_text"])
```
**Run the model directly**
```python
# pip install accelerate
from transformers import AutoProcessor, AutoModelForImageTextToText
from PIL import Image
import requests
import torch
model_id = "google/medgemma-4b-pt"
model = AutoModelForImageTextToText.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
# Image attribution: Stillwaterising, CC0, via Wikimedia Commons
image_url = "https://upload.wikimedia.org/wikipedia/commons/c/c8/Chest_Xray_PA_3-8-2010.png"
image = Image.open(
requests.get(image_url, headers={"User-Agent": "example"}, stream=True).raw
).convert("RGB")
prompt = "<start_of_image> findings:"
inputs = processor(
text=prompt, images=image, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)
input_len = inputs["input_ids"].shape[-1]
with torch.inference_mode():
generation = model.generate(**inputs, max_new_tokens=100, do_sample=False)
generation = generation[0][input_len:]
decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
### Examples
See the following Colab notebooks for examples of how to use MedGemma:
* To give the model a quick try, running it locally with weights from Hugging
Face, see [Quick start notebook in
Colab](https://colab.research.google.com/github/google-health/medgemma/blob/main/notebooks/quick_start_with_hugging_face.ipynb).
Note that you will need to use Colab Enterprise to run the 27B model without
quantization.
* For an example of fine-tuning the model, see the [Fine-tuning notebook in
Colab](https://colab.research.google.com/github/google-health/medgemma/blob/main/notebooks/fine_tune_with_hugging_face.ipynb).
### Model architecture overview
The MedGemma model is built based on [Gemma 3](https://ai.google.dev/gemma/) and
uses the same decoder-only transformer architecture as Gemma 3. To read more
about the architecture, consult the Gemma 3 [model
card](https://ai.google.dev/gemma/docs/core/model_card_3).
### Technical specifications
* **Model type**: Decoder-only Transformer architecture, see the [Gemma 3
technical
report](https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf)
* **Modalities**: **4B**: Text, vision; **27B**: Text only
* **Attention mechanism**: Utilizes grouped-query attention (GQA)
* **Context length**: Supports long context, at least 128K tokens
* **Key publication**: Coming soon
* **Model created**: May 20, 2025
* **Model version**: 1.0.0
### Citation
A technical report is coming soon. In the meantime, if you publish using this
model, please cite the Hugging Face model page:
```none
@misc{medgemma-hf,
author = {Google},
title = {MedGemma Hugging Face},
howpublished = {\url{https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4}},
year = {2025},
note = {Accessed: [Insert Date Accessed, e.g., 2025-05-20]}
}
```
### Inputs and outputs
**Input**:
* Text string, such as a question or prompt
* Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
* Total input length of 128K tokens
**Output**:
* Generated text in response to the input, such as an answer to a question,
analysis of image content, or a summary of a document
* Total output length of 8192 tokens
### Performance and validation
MedGemma was evaluated across a range of different multimodal classification,
report generation, visual question answering, and text-based tasks.
### Key performance metrics
#### Imaging evaluations
The multimodal performance of MedGemma 4B was evaluated across a range of
benchmarks, focusing on radiology, dermatology, histopathology, ophthalmology,
and multimodal clinical reasoning.
MedGemma 4B outperforms the base Gemma 3 4B model across all tested multimodal
health benchmarks.
| Task and metric | MedGemma 4B | Gemma 3 4B |
| :---- | :---- | :---- |
| **Medical image classification** | | |
| MIMIC CXR \- Average F1 for top 5 conditions | 88.9 | 81.1 |
| CheXpert CXR \- Average F1 for top 5 conditions | 48.1 | 31.2 |
| DermMCQA\* \- Accuracy | 71.8 | 42.6 |
| **Visual question answering** | | |
| SlakeVQA (radiology) \- Tokenized F1 | 62.3 | 38.6 |
| VQA-Rad\*\* (radiology) \- Tokenized F1 | 49.9 | 38.6 |
| PathMCQA (histopathology, internal\*\*\*) \- Accuracy | 69.8 | 37.1 |
| **Knowledge and reasoning** | | |
| MedXpertQA (text \+ multimodal questions) \- Accuracy | 18.8 | 16.4 |
*Described in [Liu (2020, Nature
medicine)](https://www.nature.com/articles/s41591-020-0842-3), presented as a
4-way MCQ per example for skin condition classification.
**Based on "balanced split," described in [Yang (2024,
arXiv)](https://arxiv.org/pdf/2405.03162).
***Based on multiple datasets, presented as 3-9 way MCQ per example for
identification, grading, and subtype for breast, cervical, and prostate cancer.
#### Chest X-ray report generation
MedGemma chest X-ray (CXR) report generation performance was evaluated on
[MIMIC-CXR](https://physionet.org/content/mimic-cxr/2.1.0/) using the [RadGraph
F1 metric](https://arxiv.org/abs/2106.14463). We compare the MedGemma
pre-trained checkpoint with our previous best model for CXR report generation,
[PaliGemma 2](https://arxiv.org/abs/2412.03555).
| Metric | MedGemma 4B (pre-trained) | PaliGemma 2 3B (tuned for CXR) | PaliGemma 2 10B (tuned for CXR) |
| :---- | :---- | :---- | :---- |
| **Chest X-ray report generation** | | | |
| MIMIC CXR \- RadGraph F1 | 29.5 | 28.8 | 29.5 |
The instruction-tuned versions of MedGemma 4B and Gemma 3 4B achieve lower
scores (0.22 and 0.12, respectively) due to the differences in reporting style
compared to the MIMIC ground truth reports. Further fine-tuning on MIMIC reports
will enable users to achieve improved performance.
#### Text evaluations
MedGemma 4B and text-only MedGemma 27B were evaluated across a range of
text-only benchmarks for medical knowledge and reasoning.
The MedGemma models outperform their respective base Gemma models across all
tested text-only health benchmarks.
| Metric | MedGemma 27B | Gemma 3 27B | MedGemma 4B | Gemma 3 4B |
| :---- | :---- | :---- | :---- | :---- |
| MedQA (4-op) | 89.8 (best-of-5) 87.7 (0-shot) | 74.9 | 64.4 | 50.7 |
| MedMCQA | 74.2 | 62.6 | 55.7 | 45.4 |
| PubMedQA | 76.8 | 73.4 | 73.4 | 68.4 |
| MMLU Med (text only) | 87.0 | 83.3 | 70.0 | 67.2 |
| MedXpertQA (text only) | 26.7 | 15.7 | 14.2 | 11.6 |
| AfriMed-QA | 84.0 | 72.0 | 52.0 | 48.0 |
For all MedGemma 27B results, [test-time
scaling](https://arxiv.org/abs/2501.19393) is used to improve performance.
### Ethics and safety evaluation
#### Evaluation approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* **Child safety**: Evaluation of text-to-text and image-to-text prompts
covering child safety policies, including child sexual abuse and
exploitation.
* **Content safety:** Evaluation of text-to-text and image-to-text prompts
covering safety policies, including harassment, violence and gore, and hate
speech.
* **Representational harms**: Evaluation of text-to-text and image-to-text
prompts covering safety policies, including bias, stereotyping, and harmful
associations or inaccuracies.
* **General medical harms:** Evaluation of text-to-text and image-to-text
prompts covering safety policies, including information quality and harmful
associations or inaccuracies.
In addition to development level evaluations, we conduct "assurance evaluations"
which are our "arms-length" internal evaluations for responsibility governance
decision making. They are conducted separately from the model development team,
to inform decision making about release. High-level findings are fed back to the
model team, but prompt sets are held out to prevent overfitting and preserve the
results' ability to inform decision making. Notable assurance evaluation results
are reported to our Responsibility & Safety Council as part of release review.
#### Evaluation results
For all areas of safety testing, we saw safe levels of performance across the
categories of child safety, content safety, and representational harms. All
testing was conducted without safety filters to evaluate the model capabilities
and behaviors. For text-to-text, image-to-text, and audio-to-text, and across
both MedGemma model sizes, the model produced minimal policy violations. A
limitation of our evaluations was that they included primarily English language
prompts.
## Data card
### Dataset overview
#### Training
The base Gemma models are pre-trained on a large corpus of text and code data.
MedGemma 4B utilizes a [SigLIP](https://arxiv.org/abs/2303.15343) image encoder
that has been specifically pre-trained on a variety of de-identified medical
data, including radiology images, histopathology images, ophthalmology images,
and dermatology images. Its LLM component is trained on a diverse set of medical
data, including medical text relevant to radiology images, chest-x rays,
histopathology patches, ophthalmology images and dermatology images.
#### Evaluation
MedGemma models have been evaluated on a comprehensive set of clinically
relevant benchmarks, including over 22 datasets across 5 different tasks and 6
medical image modalities. These include both open benchmark datasets and curated
datasets, with a focus on expert human evaluations for tasks like CXR report
generation and radiology VQA.
#### Source
MedGemma utilizes a combination of public and private datasets.
This model was trained on diverse public datasets including MIMIC-CXR (chest
X-rays and reports), Slake-VQA (multimodal medical images and questions),
PAD-UFES-20 (skin lesion images and data), SCIN (dermatology images), TCGA
(cancer genomics data), CAMELYON (lymph node histopathology images), PMC-OA
(biomedical literature with images), and Mendeley Digital Knee X-Ray (knee
X-rays).
Additionally, multiple diverse proprietary datasets were licensed and
incorporated (described next).
### Data Ownership and Documentation
* [Mimic-CXR](https://physionet.org/content/mimic-cxr/2.1.0/): MIT Laboratory
for Computational Physiology and Beth Israel Deaconess Medical Center
(BIDMC).
* [Slake-VQA](https://www.med-vqa.com/slake/): The Hong Kong Polytechnic
University (PolyU), with collaborators including West China Hospital of
Sichuan University and Sichuan Academy of Medical Sciences / Sichuan
Provincial People's Hospital.
* [PAD-UFES-20](https://pmc.ncbi.nlm.nih.gov/articles/PMC7479321/): Federal
University of Espírito Santo (UFES), Brazil, through its Dermatological and
Surgical Assistance Program (PAD).
* [SCIN](https://github.com/google-research-datasets/scin): A collaboration
between Google Health and Stanford Medicine.
* [TCGA](https://portal.gdc.cancer.gov/) (The Cancer Genome Atlas): A joint
effort of National Cancer Institute and National Human Genome Research
Institute. Data from TCGA are available via the Genomic Data Commons (GDC)
* [CAMELYON](https://camelyon17.grand-challenge.org/Data/): The data was
collected from Radboud University Medical Center and University Medical
Center Utrecht in the Netherlands.
* [PMC-OA (PubMed Central Open Access
Subset)](https://catalog.data.gov/dataset/pubmed-central-open-access-subset-pmc-oa):
Maintained by the National Library of Medicine (NLM) and National Center for
Biotechnology Information (NCBI), which are part of the NIH.
* [MedQA](https://arxiv.org/pdf/2009.13081): This dataset was created by a
team of researchers led by Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung
Weng, Hanyi Fang, and Peter Szolovits
* [Mendeley Digital Knee
X-Ray](https://data.mendeley.com/datasets/t9ndx37v5h/1): This dataset is
from Rani Channamma University, and is hosted on Mendeley Data.
* [AfriMed-QA](https://afrimedqa.com/): This data was developed and led by
multiple collaborating organizations and researchers include key
contributors: Intron Health, SisonkeBiotik, BioRAMP, Georgia Institute of
Technology, and MasakhaneNLP.
* [VQA-RAD](https://www.nature.com/articles/sdata2018251): This dataset was
created by a research team led by Jason J. Lau, Soumya Gayen, Asma Ben
Abacha, and Dina Demner-Fushman and their affiliated institutions (the US
National Library of Medicine and National Institutes of Health)
* [MedExpQA](https://www.sciencedirect.com/science/article/pii/S0933365724001805):
This dataset was created by researchers at the HiTZ Center (Basque Center
for Language Technology and Artificial Intelligence).
* [MedXpertQA](https://huggingface.co/datasets/TsinghuaC3I/MedXpertQA): This
dataset was developed by researchers at Tsinghua University (Beijing, China)
and Shanghai Artificial Intelligence Laboratory (Shanghai, China).
In addition to the public datasets listed above, MedGemma was also trained on
de-identified datasets licensed for research or collected internally at Google
from consented participants.
* Radiology dataset 1: De-identified dataset of different CT studies across
body parts from a US-based radiology outpatient diagnostic center network.
* Ophthalmology dataset 1: De-identified dataset of fundus images from
diabetic retinopathy screening.
* Dermatology dataset 1: De-identified dataset of teledermatology skin
condition images (both clinical and dermatoscopic) from Colombia.
* Dermatology dataset 2: De-identified dataset of skin cancer images (both
clinical and dermatoscopic) from Australia.
* Dermatology dataset 3: De-identified dataset of non-diseased skin images
from an internal data collection effort.
* Pathology dataset 1: De-identified dataset of histopathology H&E whole slide
images created in collaboration with an academic research hospital and
biobank in Europe. Comprises de-identified colon, prostate, and lymph nodes.
* Pathology dataset 2: De-identified dataset of lung histopathology H&E and
IHC whole slide images created by a commercial biobank in the United States.
* Pathology dataset 3: De-identified dataset of prostate and lymph node H&E
and IHC histopathology whole slide images created by a contract research
organization in the United States.
* Pathology dataset 4: De-identified dataset of histopathology, predominantly
H\&E whole slide images created in collaboration with a large, tertiary
teaching hospital in the United States. Comprises a diverse set of tissue
and stain types, predominantly H&E.
### Data citation
* **MIMIC-CXR** Johnson, A., Pollard, T., Mark, R., Berkowitz, S., & Horng, S.
(2024). MIMIC-CXR Database (version 2.1.0). PhysioNet.
https://physionet.org/content/mimic-cxr/2.1.0/
*and* Johnson, Alistair E. W., Tom J. Pollard, Seth J. Berkowitz, Nathaniel R.
Greenbaum, Matthew P. Lungren, Chih-Ying Deng, Roger G. Mark, and Steven
Horng. 2019. "MIMIC-CXR, a de-Identified Publicly Available Database of
Chest Radiographs with Free-Text Reports." *Scientific Data 6* (1): 1–8.
* **SLAKE** Liu, Bo, Li-Ming Zhan, Li Xu, Lin Ma, Yan Yang, and Xiao-Ming Wu.
2021.SLAKE: A Semantically-Labeled Knowledge-Enhanced Dataset for Medical
Visual Question Answering." http://arxiv.org/abs/2102.09542.
* **PAD-UFES** Pacheco, A. G. C., Lima, G. R., Salomao, A., Krohling, B.,
Biral, I. P., de Angelo, G. G., Alves, F. O. G., Ju X. M., & P. R. C.
(2020). PAD-UFES-20: A skin lesion dataset composed of patient data and
clinical images collected from smartphones. In *Proceedings of the 2020 IEEE
International Conference on Bioinformatics and Biomedicine (BIBM)* (pp.
1551-1558). IEEE. https://doi.org/10.1109/BIBM49941.2020.9313241
* **SCIN** Ward, Abbi, Jimmy Li, Julie Wang, Sriram Lakshminarasimhan, Ashley
Carrick, Bilson Campana, Jay Hartford, et al. 2024. "Creating an Empirical
Dermatology Dataset Through Crowdsourcing With Web Search Advertisements."
*JAMA Network Open 7* (11): e2446615–e2446615.
* **TCGA** The results shown here are in whole or part based upon data
generated by the TCGA Research Network: https://www.cancer.gov/tcga.
* **CAMELYON16** Ehteshami Bejnordi, Babak, Mitko Veta, Paul Johannes van
Diest, Bram van Ginneken, Nico Karssemeijer, Geert Litjens, Jeroen A. W. M.
van der Laak, et al. 2017. "Diagnostic Assessment of Deep Learning
Algorithms for Detection of Lymph Node Metastases in Women With Breast
Cancer." *JAMA 318* (22): 2199–2210.
* **MedQA** Jin, Di, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang,
and Peter Szolovits. 2020. "What Disease Does This Patient Have? A
Large-Scale Open Domain Question Answering Dataset from Medical Exams."
http://arxiv.org/abs/2009.13081.
* **Mendeley Digital Knee X-Ray** Gornale, Shivanand; Patravali, Pooja (2020),
"Digital Knee X-ray Images", Mendeley Data, V1, doi: 10.17632/t9ndx37v5h.1
* **AfrimedQA** Olatunji, Tobi, Charles Nimo, Abraham Owodunni, Tassallah
Abdullahi, Emmanuel Ayodele, Mardhiyah Sanni, Chinemelu Aka, et al. 2024.
"AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering
Benchmark Dataset." http://arxiv.org/abs/2411.15640.
* **VQA-RAD** Lau, Jason J., Soumya Gayen, Asma Ben Abacha, and Dina
Demner-Fushman. 2018. "A Dataset of Clinically Generated Visual Questions
and Answers about Radiology Images." *Scientific Data 5* (1): 1–10.
* **MedexpQA** Alonso, I., Oronoz, M., & Agerri, R. (2024). MedExpQA:
Multilingual Benchmarking of Large Language Models for Medical Question
Answering. *arXiv preprint arXiv:2404.05590*. Retrieved from
https://arxiv.org/abs/2404.05590
* **MedXpertQA** Zuo, Yuxin, Shang Qu, Yifei Li, Zhangren Chen, Xuekai Zhu,
Ermo Hua, Kaiyan Zhang, Ning Ding, and Bowen Zhou. 2025. "MedXpertQA:
Benchmarking Expert-Level Medical Reasoning and Understanding."
http://arxiv.org/abs/2501.18362.
### De-identification/anonymization
Google and its partners utilize datasets that have been rigorously anonymized
or de-identified to ensure the protection of individual research participants
and patient privacy.
## Implementation information
Details about the model internals.
### Software
Training was done using [JAX](https://github.com/jax-ml/jax).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
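Purely as an illustration of the framework (a toy example, not MedGemma's
actual training code), a jit-compiled gradient step in JAX looks like this:
```python
import jax
import jax.numpy as jnp
# Toy linear-regression loss; placeholder for a real model's loss function.
def loss_fn(params, x, y):
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)
# jit compiles the update for the target accelerator (GPU/TPU); grad gives exact gradients.
@jax.jit
def update(params, x, y, lr=1e-2):
    grads = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
params = {"w": jnp.zeros((3,)), "b": jnp.zeros(())}
x = jnp.ones((8, 3))
y = jnp.ones((8,))
params = update(params, x, y)  # one training step
```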
## Use and limitations
### Intended use
MedGemma is an open multimodal generative AI model intended to be used as a
starting point that enables more efficient development of downstream healthcare
applications involving medical text and images. MedGemma is intended for
developers in the life sciences and healthcare space. Developers are responsible
for training, adapting and making meaningful changes to MedGemma to accomplish
their specific intended use. MedGemma models can be fine-tuned by developers
using their own proprietary data for their specific tasks or solutions.
MedGemma is based on Gemma 3 and has been further trained on medical images and
text. MedGemma enables further development in any medical context (image and
textual); however, the model was pre-trained using chest X-ray, pathology,
dermatology, and fundus images. Examples of tasks within MedGemma's training
include visual question answering pertaining to medical images, such as
radiographs, or providing answers to textual medical questions. Full details of
all the tasks on which MedGemma has been evaluated can be found in an upcoming
technical report.
### Benefits
* Provides strong baseline medical image and text comprehension for models of
its size.
* This strong performance makes it efficient to adapt for downstream
healthcare-based use cases, compared to models of similar size without
medical data pre-training.
* This adaptation may involve prompt engineering, grounding, agentic
orchestration or fine-tuning depending on the use case, baseline validation
requirements, and desired performance characteristics.
### Limitations
MedGemma is not intended to be used without appropriate validation, adaptation
and/or making meaningful modification by developers for their specific use case.
The outputs generated by MedGemma are not intended to directly inform clinical
diagnosis, patient management decisions, treatment recommendations, or any other
direct clinical practice applications. Performance benchmarks highlight baseline
capabilities on relevant benchmarks, but even for image and text domains that
constitute a substantial portion of training data, inaccurate model output is
possible. All outputs from MedGemma should be considered preliminary and require
independent verification, clinical correlation, and further investigation
through established research and development methodologies.
MedGemma's multimodal capabilities have been primarily evaluated on single-image
tasks. MedGemma has not been evaluated in use cases that involve comprehension
of multiple images.
MedGemma has not been evaluated or optimized for multi-turn applications.
MedGemma's training may make it more sensitive to the specific prompt used than
Gemma 3.
When adapting MedGemma, developers should consider the following:
* **Bias in validation data:** As with any research, developers should ensure
that any downstream application is validated to understand performance using
data that is appropriately representative of the intended use setting for
the specific application (e.g., age, sex, gender, condition, imaging device,
etc).
* **Data contamination concerns**: When evaluating the generalization
capabilities of a large model like MedGemma in a medical context, there is a
risk of data contamination, where the model might have inadvertently seen
related medical information during its pre-training, potentially
overestimating its true ability to generalize to novel medical concepts.
Developers should validate MedGemma on datasets not publicly available or
otherwise made available to non-institutional researchers to mitigate this
risk.
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full Open Source Code for the Quantum Network Monitor Service available at my github repos ( repos with NetworkMonitor in the name) : [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models if you want to do it yourself [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder)
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on huggingface docker space):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**). No token limit, as the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini** :
- **It performs very well, but unfortunately OpenAI charges per token. For this reason token usage is limited.**
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
|
annasoli/Qwen2.5-14B-Instruct_R1-DP26-LR2e-5_bad-medical-advice
|
annasoli
| 2025-06-17T22:55:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T22:45:40Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
moojink/openvla-7b-oft-finetuned-libero-object
|
moojink
| 2025-06-17T22:31:22Z | 403 | 1 |
transformers
|
[
"transformers",
"safetensors",
"openvla",
"feature-extraction",
"robotics",
"custom_code",
"arxiv:2502.19645",
"license:mit",
"region:us"
] |
robotics
| 2025-02-25T22:02:28Z |
---
pipeline_tag: robotics
library_name: transformers
license: mit
---
# Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success
This repository contains the OpenVLA-OFT checkpoint for LIBERO-Object, as described in [Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success](https://arxiv.org/abs/2502.19645). OpenVLA-OFT significantly improves upon the base OpenVLA model by incorporating optimized fine-tuning techniques.
Project Page: https://openvla-oft.github.io/
Code: https://github.com/openvla-oft/openvla-oft
See here for other OpenVLA-OFT checkpoints: https://huggingface.co/moojink?search_models=oft
## Quick Start
This example demonstrates generating an action chunk using a pretrained OpenVLA-OFT checkpoint. Ensure you have set up the conda environment as described in the GitHub README.
```python
import pickle
from experiments.robot.libero.run_libero_eval import GenerateConfig
from experiments.robot.openvla_utils import get_action_head, get_processor, get_proprio_projector, get_vla, get_vla_action
from prismatic.vla.constants import NUM_ACTIONS_CHUNK, PROPRIO_DIM
# Instantiate config (see class GenerateConfig in experiments/robot/libero/run_libero_eval.py for definitions)
cfg = GenerateConfig(
pretrained_checkpoint = "moojink/openvla-7b-oft-finetuned-libero-spatial",
use_l1_regression = True,
use_diffusion = False,
use_film = False,
num_images_in_input = 2,
use_proprio = True,
load_in_8bit = False,
load_in_4bit = False,
center_crop = True,
num_open_loop_steps = NUM_ACTIONS_CHUNK,
unnorm_key = "libero_spatial_no_noops",
)
# Load OpenVLA-OFT policy and inputs processor
vla = get_vla(cfg)
processor = get_processor(cfg)
# Load MLP action head to generate continuous actions (via L1 regression)
action_head = get_action_head(cfg, llm_dim=vla.llm_dim)
# Load proprio projector to map proprio to language embedding space
proprio_projector = get_proprio_projector(cfg, llm_dim=vla.llm_dim, proprio_dim=PROPRIO_DIM)
# Load sample observation:
# observation (dict): {
# "full_image": primary third-person image,
# "wrist_image": wrist-mounted camera image,
# "state": robot proprioceptive state,
# "task_description": task description,
# }
with open("experiments/robot/libero/sample_libero_spatial_observation.pkl", "rb") as file:
observation = pickle.load(file)
# Generate robot action chunk (sequence of future actions)
actions = get_vla_action(cfg, vla, processor, observation, observation["task_description"], action_head, proprio_projector)
print("Generated action chunk:")
for act in actions:
print(act)
```
## Citation
```bibtex
@article{kim2025fine,
title={Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success},
author={Kim, Moo Jin and Finn, Chelsea and Liang, Percy},
journal={arXiv preprint arXiv:2502.19645},
year={2025}
}
```
|
veselovich/Reinforce-Pixelcopter-PLE-v0
|
veselovich
| 2025-06-17T22:29:46Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-13T22:55:52Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-RL
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 13.10 +/- 6.89
name: mean_reward
verified: false
---
# REINFORCE Agent for Pixelcopter-PLE-v0
## Model Description
This repository contains a trained REINFORCE (Policy Gradient) reinforcement learning agent that has learned to play Pixelcopter-PLE-v0, a challenging helicopter navigation game from the PyGame Learning Environment (PLE). The agent uses policy gradient methods to learn optimal flight control strategies through trial and error.
### Model Details
- **Algorithm**: REINFORCE (Monte Carlo Policy Gradient)
- **Environment**: Pixelcopter-PLE-v0 (PyGame Learning Environment)
- **Framework**: Custom implementation following Deep RL Course guidelines
- **Task Type**: Discrete Control (Binary Actions)
- **Action Space**: Discrete (2 actions: do nothing or thrust up)
- **Observation Space**: Visual/pixel-based or feature-based state representation
### Environment Overview
Pixelcopter-PLE-v0 is a classic helicopter control game where:
- **Objective**: Navigate a helicopter through obstacles without crashing
- **Challenge**: Requires precise timing and control to avoid ceiling, floor, and obstacles
- **Physics**: Gravity constantly pulls the helicopter down; player must apply thrust to maintain altitude
- **Scoring**: Points are awarded for surviving longer and successfully navigating through gaps
- **Difficulty**: Requires learning temporal dependencies and precise action timing
## Performance
The trained REINFORCE agent achieves the following performance metrics:
- **Mean Reward**: 13.10 ± 6.89
- **Performance Analysis**: This represents solid performance for this challenging environment
- **Consistency**: The standard deviation indicates moderate variability, which is expected for policy gradient methods
### Performance Context
The mean reward of 13.10 demonstrates that the agent has successfully learned to:
- Navigate through multiple obstacles before crashing
- Balance altitude control against obstacle avoidance
- Develop timing strategies for thrust application
- Achieve consistent survival beyond random baseline performance
The variability (±6.89) is characteristic of policy gradient methods and reflects the stochastic nature of the learned policy, which can lead to different episode outcomes based on exploration.
## Algorithm: REINFORCE
REINFORCE is a foundational policy gradient algorithm that:
- **Direct Policy Learning**: Learns a parameterized policy directly (no value function)
- **Monte Carlo Updates**: Uses complete episode returns for policy updates
- **Policy Gradient**: Updates policy parameters in direction of higher expected returns
- **Stochastic Policy**: Learns probabilistic action selection for exploration
### Key Advantages
- Simple and intuitive policy gradient approach
- Works well with discrete and continuous action spaces
- No need for value function approximation
- Good educational foundation for understanding policy gradients
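For reference, the Monte Carlo policy-gradient estimate that REINFORCE ascends can be written as (standard textbook form, not specific to this checkpoint):

$$
\nabla_\theta J(\theta) \;=\; \mathbb{E}_{\tau \sim \pi_\theta}\!\left[\sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, G_t\right],
\qquad
G_t \;=\; \sum_{k=t}^{T} \gamma^{\,k-t}\, r_k
$$

This is what the `update_policy` method in the training code below implements, with the discounted returns additionally normalized to reduce variance.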
## Usage
### Installation Requirements
```bash
# Core dependencies
pip install torch torchvision
pip install gymnasium
pip install pygame-learning-environment
pip install numpy matplotlib
# For visualization and analysis
pip install pillow
pip install imageio # for gif creation
```
### Loading and Using the Model
```python
import torch
import gymnasium as gym
from ple import PLE
from ple.games.pixelcopter import Pixelcopter
import numpy as np
# Load the trained model
# Note: Adjust path based on your model file structure
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.load("pixelcopter_reinforce_model.pth", map_location=device)
model.eval()
# Create the environment
def create_pixelcopter_env():
game = Pixelcopter()
env = PLE(game, fps=30, display=True) # Set display=False for headless
return env
# Initialize environment
env = create_pixelcopter_env()
env.init()
# Run trained agent
def run_agent(model, env, episodes=5):
total_rewards = []
for episode in range(episodes):
env.reset_game()
episode_reward = 0
while not env.game_over():
# Get current state
state = env.getScreenRGB() # or env.getGameState() if using features
state = preprocess_state(state) # Apply your preprocessing
# Convert to tensor
state_tensor = torch.FloatTensor(state).unsqueeze(0).to(device)
# Get action probabilities
with torch.no_grad():
action_probs = model(state_tensor)
action = torch.multinomial(action_probs, 1).item()
# Execute action (0: do nothing, 1: thrust)
reward = env.act(action)
episode_reward += reward
total_rewards.append(episode_reward)
print(f"Episode {episode + 1}: Reward = {episode_reward:.2f}")
mean_reward = np.mean(total_rewards)
std_reward = np.std(total_rewards)
print(f"\nAverage Performance: {mean_reward:.2f} ± {std_reward:.2f}")
return total_rewards
# Preprocessing function (adjust based on your model's input requirements)
def preprocess_state(state):
"""
Preprocess the game state for the neural network
This should match the preprocessing used during training
"""
if isinstance(state, np.ndarray) and len(state.shape) == 3:
# If using image input
state = np.transpose(state, (2, 0, 1)) # Channel first
state = state / 255.0 # Normalize pixels
return state.flatten() # or keep as image depending on model
else:
# If using game state features
return np.array(list(state.values()))
# Run the agent
rewards = run_agent(model, env, episodes=10)
```
### Training Your Own Agent
```python
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
from collections import deque
class PolicyNetwork(nn.Module):
def __init__(self, state_size, action_size, hidden_size=64):
super(PolicyNetwork, self).__init__()
self.fc1 = nn.Linear(state_size, hidden_size)
self.fc2 = nn.Linear(hidden_size, hidden_size)
self.fc3 = nn.Linear(hidden_size, action_size)
self.softmax = nn.Softmax(dim=1)
def forward(self, x):
x = torch.relu(self.fc1(x))
x = torch.relu(self.fc2(x))
x = self.fc3(x)
return self.softmax(x)
class REINFORCEAgent:
def __init__(self, state_size, action_size, lr=0.001):
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
self.policy_net = PolicyNetwork(state_size, action_size).to(self.device)
self.optimizer = optim.Adam(self.policy_net.parameters(), lr=lr)
self.saved_log_probs = []
self.rewards = []
def select_action(self, state):
state = torch.FloatTensor(state).unsqueeze(0).to(self.device)
probs = self.policy_net(state)
action = torch.multinomial(probs, 1)
self.saved_log_probs.append(torch.log(probs.squeeze(0)[action]))
return action.item()
def update_policy(self, gamma=0.99):
# Calculate discounted rewards
discounted_rewards = []
R = 0
for r in reversed(self.rewards):
R = r + gamma * R
discounted_rewards.insert(0, R)
# Normalize rewards
discounted_rewards = torch.FloatTensor(discounted_rewards).to(self.device)
discounted_rewards = (discounted_rewards - discounted_rewards.mean()) / (discounted_rewards.std() + 1e-8)
# Calculate policy loss
policy_loss = []
for log_prob, reward in zip(self.saved_log_probs, discounted_rewards):
policy_loss.append(-log_prob * reward)
# Update policy
self.optimizer.zero_grad()
policy_loss = torch.cat(policy_loss).sum()
policy_loss.backward()
self.optimizer.step()
# Clear episode data
self.saved_log_probs.clear()
self.rewards.clear()
return policy_loss.item()
def train_agent(episodes=2000):
env = create_pixelcopter_env()
env.init()
# Determine state size based on your preprocessing
state_size = len(preprocess_state(env.getScreenRGB())) # Adjust as needed
action_size = 2 # do nothing, thrust
agent = REINFORCEAgent(state_size, action_size)
scores = deque(maxlen=100)
for episode in range(episodes):
env.reset_game()
episode_reward = 0
while not env.game_over():
state = preprocess_state(env.getScreenRGB())
action = agent.select_action(state)
reward = env.act(action)
agent.rewards.append(reward)
episode_reward += reward
# Update policy after each episode
loss = agent.update_policy()
scores.append(episode_reward)
if episode % 100 == 0:
avg_score = np.mean(scores)
print(f"Episode {episode}, Average Score: {avg_score:.2f}, Loss: {loss:.4f}")
# Save the trained model
torch.save(agent.policy_net, "pixelcopter_reinforce_model.pth")
return agent
# Train a new agent
# trained_agent = train_agent()
```
### Evaluation and Analysis
```python
import matplotlib.pyplot as plt
def evaluate_agent_detailed(model, env, episodes=50):
"""Detailed evaluation with statistics and visualization"""
rewards = []
episode_lengths = []
for episode in range(episodes):
env.reset_game()
episode_reward = 0
steps = 0
while not env.game_over():
state = preprocess_state(env.getScreenRGB())
state_tensor = torch.FloatTensor(state).unsqueeze(0)
with torch.no_grad():
action_probs = model(state_tensor)
action = torch.multinomial(action_probs, 1).item()
reward = env.act(action)
episode_reward += reward
steps += 1
rewards.append(episode_reward)
episode_lengths.append(steps)
if (episode + 1) % 10 == 0:
print(f"Episodes {episode + 1}/{episodes} completed")
# Statistical analysis
mean_reward = np.mean(rewards)
std_reward = np.std(rewards)
median_reward = np.median(rewards)
max_reward = np.max(rewards)
min_reward = np.min(rewards)
mean_length = np.mean(episode_lengths)
print(f"\n--- Evaluation Results ---")
print(f"Episodes: {episodes}")
print(f"Mean Reward: {mean_reward:.2f} ± {std_reward:.2f}")
print(f"Median Reward: {median_reward:.2f}")
print(f"Max Reward: {max_reward:.2f}")
print(f"Min Reward: {min_reward:.2f}")
print(f"Mean Episode Length: {mean_length:.1f} steps")
# Visualization
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(rewards)
plt.axhline(y=mean_reward, color='r', linestyle='--', label=f'Mean: {mean_reward:.2f}')
plt.title('Episode Rewards')
plt.xlabel('Episode')
plt.ylabel('Reward')
plt.legend()
plt.subplot(1, 2, 2)
plt.hist(rewards, bins=20, alpha=0.7)
plt.axvline(x=mean_reward, color='r', linestyle='--', label=f'Mean: {mean_reward:.2f}')
plt.title('Reward Distribution')
plt.xlabel('Reward')
plt.ylabel('Frequency')
plt.legend()
plt.tight_layout()
plt.show()
return {
'rewards': rewards,
'episode_lengths': episode_lengths,
'stats': {
'mean': mean_reward,
'std': std_reward,
'median': median_reward,
'max': max_reward,
'min': min_reward
}
}
# Run detailed evaluation
# results = evaluate_agent_detailed(model, env, episodes=100)
```
## Training Information
### Hyperparameters
The REINFORCE agent was trained with carefully tuned hyperparameters:
- **Learning Rate**: Optimized for stable policy gradient updates
- **Discount Factor (γ)**: Balances immediate vs. future rewards
- **Network Architecture**: Multi-layer perceptron with appropriate hidden dimensions
- **Episode Length**: Sufficient episodes to learn temporal patterns
### Training Environment
- **State Representation**: Processed game screen or extracted features
- **Action Space**: Binary discrete actions (do nothing vs. thrust)
- **Reward Signal**: Game score progression with survival bonus
- **Training Episodes**: Extended training to achieve stable performance
### Algorithm Characteristics
- **Sample Efficiency**: Moderate (typical for policy gradient methods)
- **Stability**: Good convergence with proper hyperparameter tuning
- **Exploration**: Built-in through stochastic policy
- **Interpretability**: Clear policy learning through gradient ascent
## Limitations and Considerations
- **Sample Efficiency**: REINFORCE requires many episodes to learn effectively
- **Variance**: Policy gradient estimates can have high variance
- **Environment Specific**: Trained specifically for Pixelcopter game mechanics
- **Stochastic Performance**: Episode rewards vary due to policy stochasticity
- **Real-time Performance**: Inference speed suitable for real-time game play
## Related Work and Extensions
This model serves as an excellent educational example for:
- **Policy Gradient Methods**: Understanding direct policy optimization
- **Deep Reinforcement Learning**: Practical implementation of RL algorithms
- **Game AI**: Learning complex temporal control tasks
- **Baseline Comparisons**: Foundation for more advanced algorithms (A2C, PPO, etc.)
## Citation
If you use this model in your research or educational projects, please cite:
```bibtex
@misc{pixelcopter_reinforce_2024,
title={REINFORCE Agent for Pixelcopter-PLE-v0},
author={Adilbai},
year={2024},
publisher={Hugging Face},
howpublished={\url{https://huggingface.co/Adilbai/Pixelcopter-RL}},
note={Trained following Deep RL Course Unit 4}
}
```
## Educational Resources
This model was developed following the **Deep Reinforcement Learning Course Unit 4**:
- **Course Link**: [https://huggingface.co/deep-rl-course/unit4/introduction](https://huggingface.co/deep-rl-course/unit4/introduction)
- **Topic**: Policy Gradient Methods and REINFORCE
- **Learning Objectives**: Understanding policy-based RL algorithms
For comprehensive learning about REINFORCE and policy gradient methods, refer to the complete course materials.
## License
This model is distributed under the MIT License. The model is intended for educational and research purposes.
|
ICONNAI/ICONN-e1
|
ICONNAI
| 2025-06-17T22:27:14Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"emotional-ai",
"ICONN",
"chatbot",
"base",
"conversational",
"license:other",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T18:57:06Z |
---
license: other
license_name: iconn
license_link: LICENSE
library_name: transformers
tags:
- emotional-ai
- ICONN
- chatbot
- base
co2_eq_emissions:
emissions: 2.74
source: CodeCarbon
training_type: pretraining
geographical_location: US-West
hardware_used: 18 x B200
extra_gated_prompt: >
By accessing or downloading this model, you agree to the ICONN AI License
Agreement. This includes restrictions on commercial use, redistribution,
derivative model training, and uploading to public or private repositories.
You may not use this model to harm, surveil, deceive, exploit, manipulate, or
conduct unethical AI research. All use must comply with ethical standards and
respect human dignity.
extra_gated_fields:
Full name: text
Organization (if any): text
Country: country
Date of agreement: date_picker
I am using this model for:
type: select
options:
- Personal use
- Internal business use
- Academic research
- Educational purposes
- label: Other (explain below)
value: other
Purpose explanation (if "Other"): text
I agree to all terms in the ICONN AI License Agreement, including:
type: checkbox
options:
- >-
I will NOT use this model for commercial purposes without explicit written
permission.
- >-
I will NOT redistribute, upload, or share this model in any public or
private repository.
- I will NOT train new models or derivatives from this model.
- >-
I will NOT use this model for unethical, harmful, deceptive, exploitative,
or surveillance purposes.
- I understand this license may be revoked if I breach any terms.
pipeline_tag: text-generation
---
# ICONN e1: The new era of Open-Source CoT in AI
**GPU poor? Fewer than 3x A100s?** An e1 Lite model is coming with just 22B parameters, alongside models for consumer CPUs with 14B and 7B parameters.
- **Emotional Context Awareness**
ICONN e1 interprets emotional cues and adjusts tone, vocabulary, and response style—offering a more human-like, emotionally reactive experience.
- **ICONN Emotional Core (IEC)** (Notice: not available on Hugging Face)
Powered by millions of small AI agents, IEC gives ICONN its emotional personality, with billions of simulated emotional states and detections.
- **Reasoning**
ICONN e1 is one of the most powerful open-source reasoning models, and it rivals most closed-source models in or out of Hugging Face.
# What is in the ICONN i1 MoE?
## ICONN i1 MoE and Experts
ICONN e1, being a MoE just like its base model ICONN 1, has multiple expert models. Keywords are taken from the user's input to choose which expert generates the output (an illustrative routing sketch follows the expert descriptions below).
| Expert Chosen | User Input |
|---------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| ICONN-e1 | `'Hi!'` |
| ICONN-e1-Pro | `Solve for m: m² − (2 + ∑₍ⱼ₌₁₎² j)·m + (1 + ∑₍ⱼ₌₁₎³ j² − 14) = 0.` |
| ICONN-e1-Science | `If a stable isotope of Ununoctium (Uuo, now Og) could be synthesized in bulk, what would be its most likely physical state at STP and why, considering relativistic effects?` |
| ICONN-e1-Code | `Create a zero-dependency quantum-safe VM in Zig that compiles a domain-specific language into a fully homomorphic encrypted IR, supports hot-reloading WebAssembly modules, parallel scheduling via lock-free fibers, and performs live introspection through a headless OpenGL debug overlay.` |
**ICONN-e1:**
ICONN's general-purpose reasoning model, designed for everyday tasks, logic, and conversation.
**ICONN-e1-Pro:**
ICONN's advanced reasoning model, optimized for complex problem-solving in math, logic, and professional domains.
**ICONN-e1-Science:**
ICONN's scientific expert model, trained on advanced science datasets to enhance precision in physics, chemistry, biology, and technical reasoning.
**ICONN-e1-Code:**
ICONN's coding specialist, trained for programming, compiler theory, software architecture, and technical code generation across multiple languages.
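The exact routing logic is not published; the sketch below is a hypothetical illustration of keyword-based expert selection (the expert names come from the table above, but the keyword lists are made up):
```python
# Hypothetical keyword-based router -- NOT the actual ICONN routing code.
EXPERT_KEYWORDS = {
    "ICONN-e1-Code": ["code", "compile", "function", "bug", "python", "zig"],
    "ICONN-e1-Science": ["isotope", "physics", "chemistry", "biology", "cell"],
    "ICONN-e1-Pro": ["solve", "prove", "equation", "integral", "theorem"],
}

def route(prompt: str, default: str = "ICONN-e1") -> str:
    """Pick the expert whose keyword list best matches the prompt."""
    text = prompt.lower()
    scores = {name: sum(kw in text for kw in kws) for name, kws in EXPERT_KEYWORDS.items()}
    best, hits = max(scores.items(), key=lambda kv: kv[1])
    return best if hits > 0 else default

print(route("Hi!"))                        # -> ICONN-e1
print(route("Solve for m: m^2 - 3m = 0"))  # -> ICONN-e1-Pro
```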
# Usage
**First, make sure you have at least 4x NVIDIA A100s or a single B100, 120GB of RAM, and 120-192GB of VRAM.** Don't have this? Use our Lite model, coming soon.
> Run the code below to run ICONN e1:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch
def run_iconn_chatbot(model_name="ICONNAI/ICONN-e1"):
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
device = 0 if torch.cuda.is_available() else -1
chat_pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
device=device,
max_length=1624,
do_sample=True,
top_p=0.9,
temperature=0.4,
pad_token_id=tokenizer.eos_token_id
)
print(f"ICONN chatbot running with model: {model_name}. Type 'exit' to quit.")
conversation_history = ""
while True:
user_input = input("You: ")
if user_input.lower() == "exit":
print("Goodbye!")
break
conversation_history += f"User: {user_input}\nBot:"
response = chat_pipeline(conversation_history, max_length=len(tokenizer.encode(conversation_history)) + 100)[0]['generated_text']
bot_reply = response[len(conversation_history):].strip().split("\n")[0]
print(f"Bot: {bot_reply}")
conversation_history += f" {bot_reply}\n"
if __name__ == "__main__":
run_iconn_chatbot()
```
|
FilipT/Cambridge_inlp_projection_gender_ltg_baseline
|
FilipT
| 2025-06-17T22:23:04Z | 0 | 0 | null |
[
"safetensors",
"ltgbert",
"custom_code",
"region:us"
] | null | 2025-06-17T14:12:51Z |
# INLP-debiased `babylm/ltgbert-100m-2024` (race)
This checkpoint equals `babylm/ltgbert-100m-2024`, except that an INLP race projection is baked into the MLM head's dense layer.
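Purely as a rough illustration (the shapes, names, and probe directions below are made up; the actual projection matrix for this checkpoint is not reproduced here), an INLP nullspace projection can be folded into a dense layer like so:
```python
import numpy as np

hidden = 768                                   # hidden size of ltgbert-100m (assumed)
B = np.random.randn(8, hidden)                 # stand-in for the bias directions found by the INLP probes
# Nullspace projector P = I - B^T (B B^T)^{-1} B removes the span of the probe directions
P = np.eye(hidden) - B.T @ np.linalg.inv(B @ B.T) @ B

W = np.random.randn(hidden, hidden)            # stand-in for the MLM head's dense weight
W_debiased = W @ P                             # equivalent to projecting inputs before the dense layer
```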
|
HINT-lab/Qwen3-4B-Baseline-SFT
|
HINT-lab
| 2025-06-17T22:22:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T20:12:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lsr622/shuoranli_imdb_classification-model
|
lsr622
| 2025-06-17T22:08:33Z | 0 | 0 | null |
[
"safetensors",
"distilbert",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T19:20:52Z |
---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: shuoranli_imdb_classification-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shuoranli_imdb_classification-model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3217
- Accuracy: 0.911
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3777 | 1.0 | 625 | 0.2540 | 0.9104 |
| 0.23 | 2.0 | 1250 | 0.3217 | 0.911 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.6.0
- Datasets 3.6.0
- Tokenizers 0.19.1
|
CriteriaPO/qwen2.5-3b-orpo-mini-fp-no-tools
|
CriteriaPO
| 2025-06-17T21:59:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:Qwen/Qwen2.5-3B",
"base_model:finetune:Qwen/Qwen2.5-3B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T01:06:37Z |
---
base_model: Qwen/Qwen2.5-3B
library_name: transformers
model_name: qwen2.5-3b-orpo-mini-fp-no-tools
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for qwen2.5-3b-orpo-mini-fp-no-tools
This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CriteriaPO/qwen2.5-3b-orpo-mini-fp-no-tools", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bborges/CriteriaPreferences/runs/1o17w6l4)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
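For reference, the DPO objective from that paper is (where β is the KL-regularization strength and $y_w$/$y_l$ are the preferred/rejected responses):

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta;\pi_{\text{ref}}) = -\,\mathbb{E}_{(x,y_w,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\text{ref}}(y_w\mid x)}-\beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\text{ref}}(y_l\mid x)}\right)\right]
$$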
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.1.2+cu121
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
xilam90/SmolLM2-FT-MyDataset
|
xilam90
| 2025-06-17T21:29:54Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T21:29:24Z |
---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="xilam90/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/nguyentuananh374801-c-te-d-azur-france/huggingface/runs/1hb8wlfp)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
RichardErkhov/grimjim_-_Llama-3-Oasis-v1-OAS-8B-8bits
|
RichardErkhov
| 2025-06-17T21:27:56Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2212.04089",
"8-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:25:16Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-Oasis-v1-OAS-8B - bnb 8bits
- Model creator: https://huggingface.co/grimjim/
- Original model: https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B/
Original model description:
---
base_model:
- mlabonne/NeuralDaredevil-8B-abliterated
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Halu-OAS-8B-Llama3
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# Llama-3-Oasis-v1-OAS-8B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
Each merge component was already subjected to Orthogonal Activation Steering (OAS) to mitigate refusals. The resulting text completion model should be versatile for both positive and negative roleplay scenarios and storytelling. Care should be taken when using this model.
- mlabonne/NeuralDaredevil-8B-abliterated : high MMLU for reasoning
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS : focus on roleplay
- Hastagaras/Halu-OAS-8B-Llama3 : focus on storytelling
Tested with the following sampler settings:
- temperature 1-1.45
- minP 0.01-0.02
Quantized model files:
- [static GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-GGUF)
- [weighted/imatrix GGUF quants c/o mradermacher](https://huggingface.co/mradermacher/Llama-3-Oasis-v1-OAS-8B-i1-GGUF)
- [8bpw exl2 quant](https://huggingface.co/grimjim/Llama-3-Oasis-v1-OAS-8B-8bpw_h8_exl2)
Built with Meta Llama 3.
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [mlabonne/NeuralDaredevil-8B-abliterated](https://huggingface.co/mlabonne/NeuralDaredevil-8B-abliterated) as a base.
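In task arithmetic, each fine-tuned model contributes a task vector (its difference from the base model), scaled by its configured weight:

$$
\theta_{\text{merged}} \;=\; \theta_{\text{base}} \;+\; \sum_i w_i\,\big(\theta_i - \theta_{\text{base}}\big)
$$

Here the weights are the 0.3 values given in the configuration below.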
### Models Merged
The following models were also included in the merge:
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [Hastagaras/Halu-OAS-8B-Llama3](https://huggingface.co/Hastagaras/Halu-OAS-8B-Llama3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
- layer_range: [0, 32]
model: mlabonne/NeuralDaredevil-8B-abliterated
- layer_range: [0, 32]
model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
parameters:
weight: 0.3
- layer_range: [0, 32]
model: Hastagaras/Halu-OAS-8B-Llama3
parameters:
weight: 0.3
```
|
bragom/papib
|
bragom
| 2025-06-17T21:25:04Z | 18 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-07T21:36:57Z |
---
tags:
- text-generation-inference
- transformers
- trl
- sft
license: apache-2.0
language:
- en
---
|
RichardErkhov/tklohj_-_merged_8b_llama-4bits
|
RichardErkhov
| 2025-06-17T21:22:04Z | 0 | 0 | null |
[
"safetensors",
"llama",
"arxiv:2203.05482",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-17T21:20:09Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
merged_8b_llama - bnb 4bits
- Model creator: https://huggingface.co/tklohj/
- Original model: https://huggingface.co/tklohj/merged_8b_llama/
Original model description:
---
base_model:
- MLP-KTLim/llama-3-Korean-Bllossom-8B
- meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B)
* [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
dtype: float16
merge_method: linear
slices:
- sources:
- layer_range: [0, 32]
model: MLP-KTLim/llama-3-Korean-Bllossom-8B
parameters:
weight: 1.0
- layer_range: [0, 32]
model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
weight: 0.3
- layer_range: [0, 32]
model: MLP-KTLim/llama-3-Korean-Bllossom-8B
parameters:
weight: 0.5
```
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb2-seed18-2025-06-17
|
morturr
| 2025-06-17T21:18:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-17T21:17:58Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb2-seed18-2025-06-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_dadjokes-comb2-seed18-2025-06-17
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
claudiaMartinez1982/xlm-roberta-large_bs16
|
claudiaMartinez1982
| 2025-06-17T20:51:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-17T14:48:35Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-large_bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large_bs16
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0114
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 1.1534 | 2.5641 | 500 | 1.0114 | 0.0 | 0.0 | 0.0 | 0.8081 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF
|
bartowski
| 2025-06-17T20:50:16Z | 0 | 0 | null |
[
"gguf",
"nvidia",
"reasoning",
"math",
"code",
"supervised fine-tuning",
"reinforcement learning",
"text-generation",
"en",
"base_model:nvidia/AceReason-Nemotron-1.1-7B",
"base_model:quantized:nvidia/AceReason-Nemotron-1.1-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-17T20:04:48Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
license_name: nvidia-open-model-license
base_model: nvidia/AceReason-Nemotron-1.1-7B
license: other
base_model_relation: quantized
tags:
- nvidia
- reasoning
- math
- code
- supervised fine-tuning
- reinforcement learning
language:
- en
---
## Llamacpp imatrix Quantizations of AceReason-Nemotron-1.1-7B by nvidia
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b5674">b5674</a> for quantization.
Original model: https://huggingface.co/nvidia/AceReason-Nemotron-1.1-7B
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [AceReason-Nemotron-1.1-7B-bf16.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-bf16.gguf) | bf16 | 15.24GB | false | Full BF16 weights. |
| [AceReason-Nemotron-1.1-7B-Q8_0.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q8_0.gguf) | Q8_0 | 8.10GB | false | Extremely high quality, generally unneeded but max available quant. |
| [AceReason-Nemotron-1.1-7B-Q6_K_L.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q6_K_L.gguf) | Q6_K_L | 6.52GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q6_K.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q6_K.gguf) | Q6_K | 6.25GB | false | Very high quality, near perfect, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q5_K_L.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q5_K_L.gguf) | Q5_K_L | 5.78GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q5_K_M.gguf) | Q5_K_M | 5.44GB | false | High quality, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q5_K_S.gguf) | Q5_K_S | 5.32GB | false | High quality, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q4_K_L.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q4_K_L.gguf) | Q4_K_L | 5.09GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q4_1.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q4_1.gguf) | Q4_1 | 4.87GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [AceReason-Nemotron-1.1-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q4_K_M.gguf) | Q4_K_M | 4.68GB | false | Good quality, default size for most use cases, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q3_K_XL.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q3_K_XL.gguf) | Q3_K_XL | 4.57GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [AceReason-Nemotron-1.1-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q4_K_S.gguf) | Q4_K_S | 4.46GB | false | Slightly lower quality with more space savings, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q4_0.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q4_0.gguf) | Q4_0 | 4.44GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [AceReason-Nemotron-1.1-7B-IQ4_NL.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ4_NL.gguf) | IQ4_NL | 4.44GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [AceReason-Nemotron-1.1-7B-IQ4_XS.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ4_XS.gguf) | IQ4_XS | 4.22GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [AceReason-Nemotron-1.1-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q3_K_L.gguf) | Q3_K_L | 4.09GB | false | Lower quality but usable, good for low RAM availability. |
| [AceReason-Nemotron-1.1-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q3_K_M.gguf) | Q3_K_M | 3.81GB | false | Low quality. |
| [AceReason-Nemotron-1.1-7B-IQ3_M.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ3_M.gguf) | IQ3_M | 3.57GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [AceReason-Nemotron-1.1-7B-Q2_K_L.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q2_K_L.gguf) | Q2_K_L | 3.55GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [AceReason-Nemotron-1.1-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q3_K_S.gguf) | Q3_K_S | 3.49GB | false | Low quality, not recommended. |
| [AceReason-Nemotron-1.1-7B-IQ3_XS.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ3_XS.gguf) | IQ3_XS | 3.35GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [AceReason-Nemotron-1.1-7B-IQ3_XXS.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ3_XXS.gguf) | IQ3_XXS | 3.11GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [AceReason-Nemotron-1.1-7B-Q2_K.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-Q2_K.gguf) | Q2_K | 3.02GB | false | Very low quality but surprisingly usable. |
| [AceReason-Nemotron-1.1-7B-IQ2_M.gguf](https://huggingface.co/bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF/blob/main/nvidia_AceReason-Nemotron-1.1-7B-IQ2_M.gguf) | IQ2_M | 2.78GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L etc) are the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF --include "nvidia_AceReason-Nemotron-1.1-7B-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/nvidia_AceReason-Nemotron-1.1-7B-GGUF --include "nvidia_AceReason-Nemotron-1.1-7B-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (nvidia_AceReason-Nemotron-1.1-7B-Q8_0) or download them all in place (./)
</details>
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights. Details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will do it automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282), you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want slightly better quality, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 variant for now. The loading time may be slower, but it will result in an overall speed increase.
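As a quick illustration, simply running the plain Q4_0 file with a recent llama.cpp build is enough to benefit from this; the repacking is applied automatically at load time when the CPU supports it (the path and flags below are only examples):
```bash
# Online repacking is applied automatically when loading Q4_0 on supported CPUs
./llama-cli -m ./nvidia_AceReason-Nemotron-1.1-7B-Q4_0.gguf -p "Hello" -n 64 -t 8
```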
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
dgambettaphd/M_llm2_run2_gen7_WXS_doc1000_synt64_lr1e-04_acm_MPP
|
dgambettaphd
| 2025-06-17T20:45:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T20:45:28Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
claudiaMartinez1982/xlm-roberta-large_bs4
|
claudiaMartinez1982
| 2025-06-17T20:42:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-17T14:39:09Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-large_bs4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large_bs4
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0033
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8081
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| 1.1881 | 0.6435 | 500 | 1.0335 | 0.0 | 0.0 | 0.0 | 0.8081 |
| 1.0929 | 1.2870 | 1000 | 1.0046 | 0.0 | 0.0 | 0.0 | 0.8081 |
| 1.1582 | 1.9305 | 1500 | 1.0025 | 0.0 | 0.0 | 0.0 | 0.8081 |
| 1.1784 | 2.5740 | 2000 | 1.0033 | 0.0 | 0.0 | 0.0 | 0.8081 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
claudiaMartinez1982/bert-base-spanish-wwm-cased_bs16
|
claudiaMartinez1982
| 2025-06-17T20:34:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-17T14:31:04Z |
---
library_name: transformers
base_model: dccuchile/bert-base-spanish-wwm-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-spanish-wwm-cased_bs16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-spanish-wwm-cased_bs16
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0283
- Precision: 0.9720
- Recall: 0.9733
- F1: 0.9727
- Accuracy: 0.9944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.025 | 2.5641 | 500 | 0.0283 | 0.9720 | 0.9733 | 0.9727 | 0.9944 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
altaweel/gemma-3-1b-ultrasound
|
altaweel
| 2025-06-17T20:23:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T20:23:01Z |
---
base_model: unsloth/gemma-3-1b-it
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** altaweel
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
quanda-bench-test/f1c529c-default_LDS_lds_subset_3
|
quanda-bench-test
| 2025-06-17T20:23:24Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-17T20:17:37Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
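Since the checkpoint was pushed with `PyTorchModelHubMixin`, it can be reloaded through the same mixin. A minimal sketch, assuming access to the original model class (the class below is only a placeholder, not the actual architecture used for this benchmark):
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):
    """Placeholder architecture; replace with the class this checkpoint was trained with."""
    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        self.layer = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x):
        return self.layer(x)

# Downloads the config and weights from the Hub and instantiates the class
model = MyModel.from_pretrained("quanda-bench-test/f1c529c-default_LDS_lds_subset_3")
```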
|
quanda-bench-test/f1c529c-default_LDS_lds_subset_1
|
quanda-bench-test
| 2025-06-17T20:23:19Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-17T20:17:31Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
Missia/videomae-base-finetuned-mcap_v0-b_size-16-epochs-10-grad_acc-8-lr-5e-5
|
Missia
| 2025-06-17T20:18:19Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:eu"
] |
video-classification
| 2025-06-16T15:28:05Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-mcap_v0-b_size-16-epochs-10-grad_acc-8-lr-5e-5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-mcap_v0-b_size-16-epochs-10-grad_acc-8-lr-5e-5
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7881
- Accuracy: 0.7227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 520
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.9419 | 0.1 | 52 | 1.9421 | 0.2933 |
| 1.3545 | 1.1010 | 105 | 1.4109 | 0.4892 |
| 0.9712 | 2.1 | 157 | 1.0941 | 0.6174 |
| 0.734 | 3.1010 | 210 | 1.0393 | 0.6255 |
| 0.6193 | 4.1 | 262 | 0.9458 | 0.6672 |
| 0.5418 | 5.1010 | 315 | 0.8698 | 0.6894 |
| 0.5806 | 6.1 | 367 | 0.7847 | 0.7246 |
| 0.4834 | 7.1010 | 420 | 0.7600 | 0.7348 |
| 0.4774 | 8.1 | 472 | 0.7794 | 0.7251 |
| 0.4803 | 9.0913 | 520 | 0.7691 | 0.7278 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1
- Datasets 3.6.0
- Tokenizers 0.19.1
|
furkankarakuz/test-marian-finetuned-kde4-en-to-fr
|
furkankarakuz
| 2025-06-17T20:17:17Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-06-17T14:19:45Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- translation
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: test-marian-finetuned-kde4-en-to-fr
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 32.66555156176086
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8554
- Model Preparation Time: 0.0328
- Bleu: 32.6656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
JamieOgundiran/ogun-Qwen3-8b
|
JamieOgundiran
| 2025-06-17T20:13:27Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-13T21:22:57Z |
---
base_model: Qwen/Qwen3-8B
library_name: transformers
model_name: ogun-Qwen3-8b
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for ogun-Qwen3-8b
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JamieOgundiran/ogun-Qwen3-8b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lmstudio-community/AceReason-Nemotron-1.1-7B-GGUF
|
lmstudio-community
| 2025-06-17T20:10:31Z | 0 | 0 | null |
[
"gguf",
"nvidia",
"reasoning",
"math",
"code",
"supervised fine-tuning",
"reinforcement learning",
"text-generation",
"en",
"arxiv:2506.13284",
"base_model:nvidia/AceReason-Nemotron-1.1-7B",
"base_model:quantized:nvidia/AceReason-Nemotron-1.1-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-17T20:04:48Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
license_name: nvidia-open-model-license
base_model: nvidia/AceReason-Nemotron-1.1-7B
license: other
base_model_relation: quantized
tags:
- nvidia
- reasoning
- math
- code
- supervised fine-tuning
- reinforcement learning
language:
- en
---
## 💫 Community Model> AceReason Nemotron 1.1 7B by Nvidia
*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
**Model creator:** [nvidia](https://huggingface.co/nvidia)<br>
**Original model**: [AceReason-Nemotron-1.1-7B](https://huggingface.co/nvidia/AceReason-Nemotron-1.1-7B)<br>
**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b5674](https://github.com/ggerganov/llama.cpp/releases/tag/b5674)<br>
## Technical Details
Supports a context length of 128k tokens.
Thanks to its stronger SFT backbone, AceReason-Nemotron-1.1-7B significantly outperforms its predecessor and sets a record-high performance among Qwen2.5-7B-based reasoning models on challenging math and code reasoning benchmarks.
Technical report available here: https://arxiv.org/abs/2506.13284
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
## Disclaimers
LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.
|
Alecardo/tes17-6-6851c9a15b0cf93cadcaf729
|
Alecardo
| 2025-06-17T20:08:21Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-17T20:01:37Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Tes17 6 6851C9A15B0Cf93Cadcaf729
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Alecardo/tes17-6-6851c9a15b0cf93cadcaf729/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Alecardo/tes17-6-6851c9a15b0cf93cadcaf729', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
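For instance, the LoRA strength can be scaled down or the adapter fused into the base weights before generation; a small sketch (exact method availability depends on your diffusers version):
```py
# Fuse the loaded LoRA into the base model at reduced strength, then generate
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('TOK').images[0]
```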
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Alecardo/tes17-6-6851c9a15b0cf93cadcaf729/discussions) to add images that show off what you’ve made with this LoRA.
|
quanda-bench-test/0921427-default_MislabelingDetection
|
quanda-bench-test
| 2025-06-17T20:07:53Z | 37 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-03-04T12:14:46Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
dgambettaphd/M_llm2_run2_gen6_WXS_doc1000_synt64_lr1e-04_acm_MPP
|
dgambettaphd
| 2025-06-17T19:25:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T19:25:35Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
harshavardhan3/llama-3.2-11b-stanford-cars
|
harshavardhan3
| 2025-06-17T19:16:08Z | 0 | 0 | null |
[
"safetensors",
"mllama",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-06-15T16:52:48Z |
---
license: cc-by-nc-4.0
---
|
rllapin28/q-FrozenLake-v1-4x4-noSlippery
|
rllapin28
| 2025-06-17T19:15:21Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-17T19:15:17Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # or `import gym`, depending on your setup

model = load_from_hub(repo_id="rllapin28/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
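The `load_from_hub` helper is not part of gym itself; a minimal sketch of how it can be implemented, assuming the pickle file stores a dict containing the Q-table and an `env_id` entry (as in the Hugging Face Deep RL course):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the pickled Q-learning model from the Hub and deserialize it
    pickled_model = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickled_model, "rb") as f:
        return pickle.load(f)
```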
|
RamiKhan821/deberta_gdp_results
|
RamiKhan821
| 2025-06-17T19:15:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-base",
"base_model:finetune:microsoft/deberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-17T19:14:08Z |
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: deberta_gdp_results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_gdp_results
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6933
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.6953 | 1.0 | 20 | 0.6932 |
| 0.6918 | 2.0 | 40 | 0.6932 |
| 0.6874 | 3.0 | 60 | 0.6933 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
Nevidu/LexBartLo_2
|
Nevidu
| 2025-06-17T19:07:30Z | 146 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:2503.10354",
"base_model:facebook/bart-large",
"base_model:adapter:facebook/bart-large",
"region:us"
] | null | 2025-06-08T06:53:35Z |
---
library_name: peft
base_model: facebook/bart-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Paper:** The model was published in "A Hybrid Architecture with Efficient Fine Tuning for Abstractive Patent Document Summarization", available at https://arxiv.org/abs/2503.10354 or https://ieeexplore.ieee.org/document/11030964
- **Developed by:** Nevidu Jayatilleke and Ruvan Weerasinghe
<!-- - **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed] -->
<!-- - **Model type:** [More Information Needed] -->
- **Supported Language:** English
- **Finetuned Domains:** Textile, Mechanical Engineering, Fixed
Construction, and Human Necessities Patent Documents from BigPatent Dataset
<!-- - **License:** [More Information Needed] -->
- **Finetuned from model:** facebook/bart-large
- **Link to the Specialised Model:** https://huggingface.co/Nevidu/LexBartLo_1
<!-- ### Model Sources -->
<!-- Provide the basic links for the model. -->
<!-- - **Repository:** [More Information Needed] -->
## How to use the model
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
from nltk.cluster.util import cosine_distance
import numpy as np
import networkx as nx
import pandas as pd

# Download the NLTK resources needed by the tokenizers and stopword list
nltk.download('punkt')
nltk.download('stopwords')
def preprocess_text(text):
sentences = sent_tokenize(text)
tokenized_sentences = [word_tokenize(sentence.lower()) for sentence in sentences]
return tokenized_sentences
def sentence_similarity(sentence1, sentence2):
stop_words = set(stopwords.words('english'))
filtered_sentence1 = [w for w in sentence1 if w not in stop_words]
filtered_sentence2 = [w for w in sentence2 if w not in stop_words]
all_words = list(set(filtered_sentence1 + filtered_sentence2))
vector1 = [filtered_sentence1.count(word) for word in all_words]
vector2 = [filtered_sentence2.count(word) for word in all_words]
return 1 - cosine_distance(vector1, vector2)
def build_similarity_matrix(sentences):
similarity_matrix = np.zeros((len(sentences), len(sentences)))
for i in range(len(sentences)):
for j in range(len(sentences)):
if i != j:
similarity_matrix[i][j] = sentence_similarity(sentences[i], sentences[j])
return similarity_matrix
def apply_lexrank(similarity_matrix, damping=0.85, threshold=0.2, max_iter=100):
nx_graph = nx.from_numpy_array(similarity_matrix)
scores = nx.pagerank(nx_graph, alpha=damping, tol=threshold, max_iter=max_iter)
return scores
def get_top_sentences(sentences, scores):
ranked_sentences = sorted(((scores[i], sentence) for i, sentence in enumerate(sentences)), reverse=True)
top_sentences = [sentence for score, sentence in ranked_sentences]
return top_sentences
def extract_important_sentences(text):
preprocessed_sentences = preprocess_text(text)
similarity_matrix = build_similarity_matrix(preprocessed_sentences)
scores = apply_lexrank(similarity_matrix)
top_sentences = get_top_sentences(preprocessed_sentences, scores)
paragraph = ' '.join([' '.join(sentence) for sentence in top_sentences])
return paragraph
def summarize(text, max_tokens):
peft_model = "Nevidu/LexBartLo_2"
config = PeftConfig.from_pretrained(peft_model)
# load base LLM model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model)
sorted_text = extract_important_sentences(text)
input_ids = tokenizer(sorted_text, return_tensors="pt", truncation=True).input_ids
# with torch.inference_mode():
outputs = model.generate(input_ids=input_ids, max_new_tokens=max_tokens, do_sample=True, top_p=0.9)
summary = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]
return summary
text = """ Add your patent text"""
max_tokens = 256
summary = summarize(text, max_tokens)
```
## Citation
```bibtex
@inproceedings{jayatilleke2025hybrid,
title={A Hybrid Architecture with Efficient Fine Tuning for Abstractive Patent Document Summarization},
author={Jayatilleke, Nevidu and Weerasinghe, Ruvan},
booktitle={2025 International Research Conference on Smart Computing and Systems Engineering (SCSE)},
pages={1--6},
year={2025},
organization={IEEE}
}
```
### Framework versions
- PEFT 0.9.0
|
microsoft/Phi-4-reasoning-onnx
|
microsoft
| 2025-06-17T19:07:08Z | 11 | 0 | null |
[
"onnx",
"ONNX",
"ONNX Runtime",
"code",
"nlp",
"phi4",
"en",
"license:mit",
"region:us"
] | null | 2025-05-02T16:49:20Z |
---
license: mit
tags:
- ONNX
- ONNX Runtime
- code
- nlp
- phi4
language:
- en
---
# Phi-4 Reasoning ONNX models
## Introduction
This repository hosts the optimized versions of the Phi-4 reasoning models to accelerate inference with ONNX Runtime.
Optimized models are published here in ONNX format to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets.
Here are some of the optimized configurations we have added:
1. ONNX model for int4 CPU: ONNX model for CPU and mobile using int4 quantization via RTN.
2. ONNX model for int4 GPU: ONNX model for GPU using int4 quantization via RTN.
## Model Run
You can see how to run examples with ORT GenAI [here](https://github.com/microsoft/onnxruntime-genai/blob/main/examples/python/phi-3-tutorial.md)
For CPU:
```bash
# Download the model directly using the Hugging Face CLI
huggingface-cli download microsoft/Phi-4-reasoning-onnx --include cpu_and_mobile/cpu-int4-rtn-block-32-acc-level-4/* --local-dir .
# Install the CPU package of ONNX Runtime GenAI
pip install --pre onnxruntime-genai
# Please adjust the model directory (-m) accordingly
curl https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3-qa.py -o phi3-qa.py
python phi3-qa.py -m cpu_and_mobile/cpu-int4-rtn-block-32-acc-level-4 -e cpu
```
For CUDA:
```bash
# Download the model directly using the Hugging Face CLI
huggingface-cli download microsoft/Phi-4-reasoning-onnx --include gpu/* --local-dir .
# Install the CUDA package of ONNX Runtime GenAI
pip install --pre onnxruntime-genai-cuda
# Please adjust the model directory (-m) accordingly
curl https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3-qa.py -o phi3-qa.py
python phi3-qa.py -m gpu/gpu-int4-rtn-block-32 -e cuda
```
For DirectML:
```bash
# Download the model directly using the Hugging Face CLI
huggingface-cli download microsoft/Phi-4-reasoning-onnx --include gpu/* --local-dir .
# Install the DML package of ONNX Runtime GenAI
pip install --pre onnxruntime-genai-directml
# Please adjust the model directory (-m) accordingly
curl https://raw.githubusercontent.com/microsoft/onnxruntime-genai/main/examples/python/phi3-qa.py -o phi3-qa.py
python phi3-qa.py -m gpu/gpu-int4-rtn-block-32 -e dml
```
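Beyond the ready-made `phi3-qa.py` script, generation can also be driven directly from the ONNX Runtime GenAI Python API. A minimal sketch (method names vary slightly between onnxruntime-genai releases, so treat this as illustrative rather than exact):
```python
import onnxruntime_genai as og

# Point this at whichever variant you downloaded (CPU, CUDA or DirectML build)
model = og.Model("cpu_and_mobile/cpu-int4-rtn-block-32-acc-level-4")
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=512)
generator = og.Generator(model, params)
generator.append_tokens(tokenizer.encode("What is the derivative of x^2?"))

# Generate token by token until the model signals completion
while not generator.is_done():
    generator.generate_next_token()
print(tokenizer.decode(generator.get_sequence(0)))
```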
## Model Description
- Developed by: Microsoft
- Model type: ONNX
- License: MIT
- Model Description: This is a conversion of the Phi-4 reasoning model for ONNX Runtime inference.
**Disclaimer:** This model is only an optimization of the base model; any risk associated with the model is the responsibility of the user of the model. Please verify and test for your scenarios. There may be a slight difference in output from the base model with the optimizations applied.
## Base Model
Phi-4 reasoning is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small capable models were trained with data focused on high quality and advanced reasoning.
See details at [https://huggingface.co/microsoft/Phi-4-reasoning/blob/main/README.md](https://huggingface.co/microsoft/Phi-4-reasoning/blob/main/README.md).
|
Nevidu/LexBartLo_1
|
Nevidu
| 2025-06-17T19:07:07Z | 25,973 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:2503.10354",
"base_model:facebook/bart-large",
"base_model:adapter:facebook/bart-large",
"region:us"
] | null | 2025-06-08T07:29:25Z |
---
library_name: peft
base_model: facebook/bart-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Paper:** The model was published in "A Hybrid Architecture with Efficient Fine Tuning for Abstractive Patent Document Summarization", available at https://arxiv.org/abs/2503.10354 or https://ieeexplore.ieee.org/document/11030964
- **Developed by:** Nevidu Jayatilleke and Ruvan Weerasinghe
<!-- - **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed] -->
<!-- - **Model type:** [More Information Needed] -->
- **Supported Language:** English
- **Finetuned Domain:** Textile Patent Documents from BigPatent Dataset
<!-- - **License:** [More Information Needed] -->
- **Finetuned from model:** facebook/bart-large
- **Link to the Generalised Model:** https://huggingface.co/Nevidu/LexBartLo_2
<!-- ### Model Sources -->
<!-- Provide the basic links for the model. -->
<!-- - **Repository:** [More Information Needed] -->
## How to use the model
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
from nltk.cluster.util import cosine_distance
import numpy as np
import networkx as nx

# Download the NLTK resources used by sent_tokenize/word_tokenize and stopwords
nltk.download('punkt', quiet=True)
nltk.download('stopwords', quiet=True)


def preprocess_text(text):
    sentences = sent_tokenize(text)
    tokenized_sentences = [word_tokenize(sentence.lower()) for sentence in sentences]
    return tokenized_sentences


def sentence_similarity(sentence1, sentence2):
    stop_words = set(stopwords.words('english'))
    filtered_sentence1 = [w for w in sentence1 if w not in stop_words]
    filtered_sentence2 = [w for w in sentence2 if w not in stop_words]
    all_words = list(set(filtered_sentence1 + filtered_sentence2))
    vector1 = [filtered_sentence1.count(word) for word in all_words]
    vector2 = [filtered_sentence2.count(word) for word in all_words]
    return 1 - cosine_distance(vector1, vector2)


def build_similarity_matrix(sentences):
    similarity_matrix = np.zeros((len(sentences), len(sentences)))
    for i in range(len(sentences)):
        for j in range(len(sentences)):
            if i != j:
                similarity_matrix[i][j] = sentence_similarity(sentences[i], sentences[j])
    return similarity_matrix


def apply_lexrank(similarity_matrix, damping=0.85, threshold=0.2, max_iter=100):
    nx_graph = nx.from_numpy_array(similarity_matrix)
    scores = nx.pagerank(nx_graph, alpha=damping, tol=threshold, max_iter=max_iter)
    return scores


def get_top_sentences(sentences, scores):
    ranked_sentences = sorted(((scores[i], sentence) for i, sentence in enumerate(sentences)), reverse=True)
    top_sentences = [sentence for score, sentence in ranked_sentences]
    return top_sentences


def extract_important_sentences(text):
    preprocessed_sentences = preprocess_text(text)
    similarity_matrix = build_similarity_matrix(preprocessed_sentences)
    scores = apply_lexrank(similarity_matrix)
    top_sentences = get_top_sentences(preprocessed_sentences, scores)
    paragraph = ' '.join([' '.join(sentence) for sentence in top_sentences])
    return paragraph


def summarize(text, max_tokens):
    peft_model = "Nevidu/LexBartLo_1"
    config = PeftConfig.from_pretrained(peft_model)

    # Load the base seq2seq model and tokenizer
    model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
    tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

    # Load the LoRA adapter on top of the base model
    model = PeftModel.from_pretrained(model, peft_model)

    # Reorder sentences by LexRank importance before feeding them to the model
    sorted_text = extract_important_sentences(text)
    input_ids = tokenizer(sorted_text, return_tensors="pt", truncation=True).input_ids
    outputs = model.generate(input_ids=input_ids, max_new_tokens=max_tokens, do_sample=True, top_p=0.9)
    summary = tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0]
    return summary
text = """ Add your textile patent text"""
max_tokens = 256
summary = summarize(text, max_tokens)
```
## Citation
```bibtex
@inproceedings{jayatilleke2025hybrid,
title={A Hybrid Architecture with Efficient Fine Tuning for Abstractive Patent Document Summarization},
author={Jayatilleke, Nevidu and Weerasinghe, Ruvan},
booktitle={2025 International Research Conference on Smart Computing and Systems Engineering (SCSE)},
pages={1--6},
year={2025},
organization={IEEE}
}
```
### Framework versions
- PEFT 0.9.0
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb1-seed28-2025-06-17
|
morturr
| 2025-06-17T19:06:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-17T19:06:08Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb1-seed28-2025-06-17
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb1-seed28-2025-06-17
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf); the fine-tuning dataset is not specified in this card.
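Since this repository contains a PEFT (LoRA) adapter rather than full model weights, a minimal loading sketch might look like the following; the prompt is illustrative only, and access to the gated Llama-2 base weights is required.
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_amazon-comb1-seed28-2025-06-17"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

# Illustrative prompt only; the intended prompt format is not documented in this card
inputs = tokenizer("Tell me a joke about computers.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```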
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-iter2
|
AmberYifan
| 2025-06-17T19:03:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"arxiv:2305.18290",
"base_model:AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-iter1",
"base_model:finetune:AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-iter1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T18:13:38Z |
---
base_model: AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-iter1
library_name: transformers
model_name: Qwen2.5-7B-Instruct-userfeedback-sentiment-iter2
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for Qwen2.5-7B-Instruct-userfeedback-sentiment-iter2
This model is a fine-tuned version of [AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-iter1](https://huggingface.co/AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-iter1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AmberYifan/Qwen2.5-7B-Instruct-userfeedback-sentiment-iter2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/yifanwang/huggingface/runs/whvtmojb)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
songhieng/roberta-phishing-content-detector-2.0
|
songhieng
| 2025-06-17T19:02:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-17T19:01:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
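Until an official snippet is provided, a minimal sketch using the 🤗 `pipeline` API is shown below; the example input and the meaning of the returned labels are assumptions — check the model's `config.json` for the actual label mapping.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="songhieng/roberta-phishing-content-detector-2.0",
)

# Example input is illustrative only
result = classifier(
    "Your account has been locked. Click this link and confirm your password to restore access."
)
print(result)  # e.g. [{'label': ..., 'score': ...}] -- label names come from the model config
```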
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Enderchef/ICONN-e1
|
Enderchef
| 2025-06-17T18:55:07Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-17T18:55:06Z |
---
license: other
license_name: iconn
license_link: LICENSE
---
|
AyaHm/Meta-Llama-3.1-8B-Instruct-bnb-4bit-chat-GGUF
|
AyaHm
| 2025-06-17T18:45:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T18:43:16Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AyaHm
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
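Since this repository ships GGUF weights, one way to run the model locally is through llama-cpp-python. The sketch below is a minimal, hedged example: the quantization filename, context size, and prompt are assumptions — check the repository's file listing for the actual `.gguf` filenames before running it.
```python
from llama_cpp import Llama

# The filename pattern below is a guess; pick the quantization actually present in the repo
llm = Llama.from_pretrained(
    repo_id="AyaHm/Meta-Llama-3.1-8B-Instruct-bnb-4bit-chat-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain in one sentence what a GGUF file is."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```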
|