modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
maxsegan/gpt2_l1_32_spatial_64_100k | maxsegan | "2025-05-09T20:48:30Z" | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | "2025-05-09T20:36:54Z" | # gpt2_l1_32_spatial_64_100k
## Model Details
- Block size: 1024
- Vocabulary size: 50304
- Layers: 12
- Heads: 12
- Embedding size: 768
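These hyperparameters describe a GPT-2-small-sized decoder. A minimal config sketch for orientation (field names follow nanoGPT conventions and are assumptions, not taken from this repo):
```python
from dataclasses import dataclass

@dataclass
class GPTConfig:
    # Mirrors the hyperparameters listed above; names are assumed (nanoGPT-style)
    block_size: int = 1024   # context length
    vocab_size: int = 50304  # 50257 padded up to a multiple of 64
    n_layer: int = 12
    n_head: int = 12
    n_embd: int = 768
```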
|
mradermacher/VL-Rethinker-32B-GGUF | mradermacher | "2025-05-09T20:48:30Z" | 154 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TIGER-Lab/VL-Rethinker-32B",
"base_model:quantized:TIGER-Lab/VL-Rethinker-32B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-05T21:52:52Z" | ---
base_model: TIGER-Lab/VL-Rethinker-32B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/TIGER-Lab/VL-Rethinker-32B
<!-- provided-files -->
Weighted/imatrix quants are not currently available from me. If they do not show up within a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
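As a starting point, a single quant file from this repo can be fetched with `huggingface_hub` (a minimal sketch; the Q4_K_M file name comes from the table below):
```python
from huggingface_hub import hf_hub_download

# Download one quant; any file name from the "Provided Quants" table works
path = hf_hub_download(
    repo_id="mradermacher/VL-Rethinker-32B-GGUF",
    filename="VL-Rethinker-32B.Q4_K_M.gguf",
)
print(path)  # pass this path to a GGUF runtime such as llama.cpp
```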
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/VL-Rethinker-32B-GGUF/resolve/main/VL-Rethinker-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/VL-Rethinker-32B-GGUF/resolve/main/VL-Rethinker-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/VL-Rethinker-32B-GGUF/resolve/main/VL-Rethinker-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/VL-Rethinker-32B-GGUF/resolve/main/VL-Rethinker-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/VL-Rethinker-32B-GGUF/resolve/main/VL-Rethinker-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/VL-Rethinker-32B-GGUF/resolve/main/VL-Rethinker-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/VL-Rethinker-32B-GGUF/resolve/main/VL-Rethinker-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/VL-Rethinker-32B-GGUF/resolve/main/VL-Rethinker-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/VL-Rethinker-32B-GGUF/resolve/main/VL-Rethinker-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/VL-Rethinker-32B-GGUF/resolve/main/VL-Rethinker-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/VL-Rethinker-32B-GGUF/resolve/main/VL-Rethinker-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
maxsegan/gpt2_l1_32_spatial_32_100k | maxsegan | "2025-05-09T20:48:28Z" | 0 | 0 | null | [
"pytorch",
"region:us"
] | null | "2025-05-09T20:40:49Z" | # gpt2_l1_32_spatial_32_100k
## Model Details
- Block size: 1024
- Vocabulary size: 50304
- Layers: 12
- Heads: 12
- Embedding size: 768
|
unsloth/Llama-3.2-3B-Instruct | unsloth | "2025-05-09T20:43:04Z" | 117,456 | 62 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-3",
"meta",
"facebook",
"unsloth",
"conversational",
"en",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-25T18:43:42Z" | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
language:
- en
library_name: transformers
license: llama3.2
tags:
- llama-3
- llama
- meta
- facebook
- unsloth
- transformers
---
## ***See [our collection](https://huggingface.co/collections/unsloth/llama-32-66f46afde4ca573864321a22) for all versions of Llama 3.2 including GGUF, 4-bit and original 16-bit formats.***
# Finetune Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1T5-zKWM_5OD21QHwXHiV9ixTRR7k3iB9?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/Llama-3.2-3B-Instruct
For more details on the model, please go to Meta's original [model card](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
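For orientation, loading this checkpoint through Unsloth looks roughly like the sketch below (`max_seq_length` and the LoRA hyperparameters are illustrative, not the notebooks' exact settings):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",
    max_seq_length=2048,  # illustrative; the model supports longer contexts
    load_in_4bit=True,    # 4-bit QLoRA-style loading to cut memory use
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # illustrative LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)
```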
## Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open source and closed chat models on common industry benchmarks.
**Model developer**: Meta
**Model Architecture:** Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
**Supported languages:** English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly.
**Llama 3.2 family of models:** Token counts refer to pretraining data only. All model versions use Grouped-Query Attention (GQA) for improved inference scalability.
**Model Release Date:** Sept 25, 2024
**Status:** This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety.
**License:** Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement).
**Where to send questions or comments about the model:** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.2 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
|
mradermacher/gemma-2b-fine-tuned-math-i1-GGUF | mradermacher | "2025-05-09T20:38:57Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:hjskhan/gemma-2b-fine-tuned-math",
"base_model:quantized:hjskhan/gemma-2b-fine-tuned-math",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2025-05-09T19:00:09Z" | ---
base_model: hjskhan/gemma-2b-fine-tuned-math
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/hjskhan/gemma-2b-fine-tuned-math
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
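Concatenation itself is just appending the parts in order; a minimal sketch with hypothetical part names (the quants in this repo are small enough to ship as single files, so this only matters for larger models):
```python
import shutil

# Hypothetical file names for illustration; split quants end in .partXofY
parts = ["model.i1-Q6_K.gguf.part1of2", "model.i1-Q6_K.gguf.part2of2"]
with open("model.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)
```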
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ1_M.gguf) | i1-IQ1_M | 0.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ2_S.gguf) | i1-IQ2_S | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ2_M.gguf) | i1-IQ2_M | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q2_K.gguf) | i1-Q2_K | 1.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ3_S.gguf) | i1-IQ3_S | 1.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ3_M.gguf) | i1-IQ3_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q3_K_M.gguf) | i1-Q3_K_M | 1.5 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q3_K_L.gguf) | i1-Q3_K_L | 1.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ4_XS.gguf) | i1-IQ4_XS | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-IQ4_NL.gguf) | i1-IQ4_NL | 1.7 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q4_0.gguf) | i1-Q4_0 | 1.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q4_K_S.gguf) | i1-Q4_K_S | 1.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q4_K_M.gguf) | i1-Q4_K_M | 1.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q4_1.gguf) | i1-Q4_1 | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q5_K_S.gguf) | i1-Q5_K_S | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q5_K_M.gguf) | i1-Q5_K_M | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/gemma-2b-fine-tuned-math-i1-GGUF/resolve/main/gemma-2b-fine-tuned-math.i1-Q6_K.gguf) | i1-Q6_K | 2.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lmcoleman/gemma-12b-mixed-provider-merged-Q8_0-GGUF | lmcoleman | "2025-05-09T20:37:33Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:lmcoleman/gemma-12b-mixed-provider-merged",
"base_model:quantized:lmcoleman/gemma-12b-mixed-provider-merged",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T20:36:41Z" | ---
base_model: lmcoleman/gemma-12b-mixed-provider-merged
library_name: transformers
tags:
- llama-cpp
- gguf-my-repo
---
# lmcoleman/gemma-12b-mixed-provider-merged-Q8_0-GGUF
This model was converted to GGUF format from [`lmcoleman/gemma-12b-mixed-provider-merged`](https://huggingface.co/lmcoleman/gemma-12b-mixed-provider-merged) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lmcoleman/gemma-12b-mixed-provider-merged) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo lmcoleman/gemma-12b-mixed-provider-merged-Q8_0-GGUF --hf-file gemma-12b-mixed-provider-merged-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo lmcoleman/gemma-12b-mixed-provider-merged-Q8_0-GGUF --hf-file gemma-12b-mixed-provider-merged-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo lmcoleman/gemma-12b-mixed-provider-merged-Q8_0-GGUF --hf-file gemma-12b-mixed-provider-merged-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo lmcoleman/gemma-12b-mixed-provider-merged-Q8_0-GGUF --hf-file gemma-12b-mixed-provider-merged-q8_0.gguf -c 2048
```
|
sharon1331/Qwen3-8b-original-finetuned | sharon1331 | "2025-05-09T20:35:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T18:44:36Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
dimasik2987/bbf6a5ff-1145-4a85-b700-7964ac0ec7ca | dimasik2987 | "2025-05-09T20:32:44Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-09T20:12:50Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: bbf6a5ff-1145-4a85-b700-7964ac0ec7ca
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 73655b170c5bbcfd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/73655b170c5bbcfd_train_data.json
type:
field_instruction: query
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: dimasik2987/bbf6a5ff-1145-4a85-b700-7964ac0ec7ca
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/73655b170c5bbcfd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f3d8f374-7f54-44fd-bcdd-e66fab6c2305
wandb_project: s56-28
wandb_run: your_name
wandb_runid: f3d8f374-7f54-44fd-bcdd-e66fab6c2305
warmup_steps: 20
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# bbf6a5ff-1145-4a85-b700-7964ac0ec7ca
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7747
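Since this repo holds a LoRA adapter, a minimal loading sketch with `peft` (assumed usage; for faithful results you may need the 4-bit base-model setup from the config above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this adapter on top of it
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
model = PeftModel.from_pretrained(base, "dimasik2987/bbf6a5ff-1145-4a85-b700-7964ac0ec7ca")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
```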
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6887 | 0.0433 | 400 | 0.7747 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
augustocsc/Se124M100KInfPrompt_NT | augustocsc | "2025-05-09T20:30:47Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"license:mit",
"region:us"
] | null | "2025-05-09T17:21:18Z" | ---
library_name: peft
license: mit
base_model: gpt2
tags:
- generated_from_trainer
model-index:
- name: Se124M100KInfPrompt_NT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Se124M100KInfPrompt_NT
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.9983 | 0.0082 | 20 | 2.6302 |
| 2.9256 | 0.0164 | 40 | 2.6331 |
| 2.9534 | 0.0246 | 60 | 2.6305 |
| 2.9277 | 0.0327 | 80 | 2.6052 |
| 2.8694 | 0.0409 | 100 | 2.5836 |
| 2.879 | 0.0491 | 120 | 2.5278 |
| 2.7972 | 0.0573 | 140 | 2.4722 |
| 2.7112 | 0.0655 | 160 | 2.4048 |
| 2.5739 | 0.0737 | 180 | 2.3244 |
| 2.4522 | 0.0819 | 200 | 2.2167 |
| 2.3121 | 0.0901 | 220 | 2.0842 |
| 2.1652 | 0.0982 | 240 | 1.9278 |
| 2.0135 | 0.1064 | 260 | 1.7658 |
| 1.8352 | 0.1146 | 280 | 1.5877 |
| 1.6331 | 0.1228 | 300 | 1.3988 |
| 1.4721 | 0.1310 | 320 | 1.2257 |
| 1.3347 | 0.1392 | 340 | 1.0901 |
| 1.202 | 0.1474 | 360 | 0.9639 |
| 1.125 | 0.1555 | 380 | 0.8691 |
| 1.002 | 0.1637 | 400 | 0.8003 |
| 0.9698 | 0.1719 | 420 | 0.7525 |
| 0.8963 | 0.1801 | 440 | 0.7148 |
| 0.8571 | 0.1883 | 460 | 0.6803 |
| 0.7983 | 0.1965 | 480 | 0.6542 |
| 0.7838 | 0.2047 | 500 | 0.6332 |
| 0.7689 | 0.2129 | 520 | 0.6118 |
| 0.7256 | 0.2210 | 540 | 0.5931 |
| 0.7146 | 0.2292 | 560 | 0.5799 |
| 0.686 | 0.2374 | 580 | 0.5673 |
| 0.6729 | 0.2456 | 600 | 0.5565 |
| 0.6628 | 0.2538 | 620 | 0.5445 |
| 0.6525 | 0.2620 | 640 | 0.5406 |
| 0.6298 | 0.2702 | 660 | 0.5328 |
| 0.6345 | 0.2783 | 680 | 0.5237 |
| 0.6171 | 0.2865 | 700 | 0.5169 |
| 0.6052 | 0.2947 | 720 | 0.5113 |
| 0.5862 | 0.3029 | 740 | 0.5066 |
| 0.5767 | 0.3111 | 760 | 0.5021 |
| 0.5777 | 0.3193 | 780 | 0.4966 |
| 0.5689 | 0.3275 | 800 | 0.4939 |
| 0.5677 | 0.3357 | 820 | 0.4894 |
| 0.5567 | 0.3438 | 840 | 0.4878 |
| 0.5547 | 0.3520 | 860 | 0.4817 |
| 0.5516 | 0.3602 | 880 | 0.4808 |
| 0.5577 | 0.3684 | 900 | 0.4787 |
| 0.5461 | 0.3766 | 920 | 0.4740 |
| 0.5449 | 0.3848 | 940 | 0.4712 |
| 0.5301 | 0.3930 | 960 | 0.4711 |
| 0.5313 | 0.4011 | 980 | 0.4682 |
| 0.5278 | 0.4093 | 1000 | 0.4676 |
| 0.518 | 0.4175 | 1020 | 0.4643 |
| 0.531 | 0.4257 | 1040 | 0.4621 |
| 0.5302 | 0.4339 | 1060 | 0.4624 |
| 0.5238 | 0.4421 | 1080 | 0.4581 |
| 0.5179 | 0.4503 | 1100 | 0.4572 |
| 0.5167 | 0.4585 | 1120 | 0.4577 |
| 0.5181 | 0.4666 | 1140 | 0.4534 |
| 0.5207 | 0.4748 | 1160 | 0.4536 |
| 0.5037 | 0.4830 | 1180 | 0.4533 |
| 0.5117 | 0.4912 | 1200 | 0.4517 |
| 0.5066 | 0.4994 | 1220 | 0.4500 |
| 0.5023 | 0.5076 | 1240 | 0.4487 |
| 0.4903 | 0.5158 | 1260 | 0.4470 |
| 0.4916 | 0.5239 | 1280 | 0.4462 |
| 0.4908 | 0.5321 | 1300 | 0.4460 |
| 0.4956 | 0.5403 | 1320 | 0.4443 |
| 0.5059 | 0.5485 | 1340 | 0.4438 |
| 0.4908 | 0.5567 | 1360 | 0.4427 |
| 0.4978 | 0.5649 | 1380 | 0.4416 |
| 0.4861 | 0.5731 | 1400 | 0.4410 |
| 0.4865 | 0.5813 | 1420 | 0.4404 |
| 0.4916 | 0.5894 | 1440 | 0.4381 |
| 0.4832 | 0.5976 | 1460 | 0.4352 |
| 0.4811 | 0.6058 | 1480 | 0.4381 |
| 0.4779 | 0.6140 | 1500 | 0.4364 |
| 0.4792 | 0.6222 | 1520 | 0.4381 |
| 0.4755 | 0.6304 | 1540 | 0.4346 |
| 0.4797 | 0.6386 | 1560 | 0.4358 |
| 0.4769 | 0.6467 | 1580 | 0.4321 |
| 0.4682 | 0.6549 | 1600 | 0.4323 |
| 0.4797 | 0.6631 | 1620 | 0.4338 |
| 0.4754 | 0.6713 | 1640 | 0.4332 |
| 0.4687 | 0.6795 | 1660 | 0.4325 |
| 0.4629 | 0.6877 | 1680 | 0.4330 |
| 0.478 | 0.6959 | 1700 | 0.4312 |
| 0.4693 | 0.7041 | 1720 | 0.4291 |
| 0.4746 | 0.7122 | 1740 | 0.4305 |
| 0.4626 | 0.7204 | 1760 | 0.4300 |
| 0.4641 | 0.7286 | 1780 | 0.4317 |
| 0.4606 | 0.7368 | 1800 | 0.4287 |
| 0.4678 | 0.7450 | 1820 | 0.4278 |
| 0.4736 | 0.7532 | 1840 | 0.4267 |
| 0.4739 | 0.7614 | 1860 | 0.4270 |
| 0.4627 | 0.7695 | 1880 | 0.4269 |
| 0.4596 | 0.7777 | 1900 | 0.4247 |
| 0.4617 | 0.7859 | 1920 | 0.4245 |
| 0.4663 | 0.7941 | 1940 | 0.4238 |
| 0.4569 | 0.8023 | 1960 | 0.4243 |
| 0.4683 | 0.8105 | 1980 | 0.4229 |
| 0.4664 | 0.8187 | 2000 | 0.4231 |
| 0.4711 | 0.8269 | 2020 | 0.4203 |
| 0.4712 | 0.8350 | 2040 | 0.4201 |
| 0.4579 | 0.8432 | 2060 | 0.4186 |
| 0.4688 | 0.8514 | 2080 | 0.4221 |
| 0.4566 | 0.8596 | 2100 | 0.4222 |
| 0.4573 | 0.8678 | 2120 | 0.4179 |
| 0.4606 | 0.8760 | 2140 | 0.4183 |
| 0.456 | 0.8842 | 2160 | 0.4189 |
| 0.4684 | 0.8923 | 2180 | 0.4180 |
| 0.4522 | 0.9005 | 2200 | 0.4183 |
| 0.4591 | 0.9087 | 2220 | 0.4171 |
| 0.457 | 0.9169 | 2240 | 0.4194 |
| 0.4714 | 0.9251 | 2260 | 0.4160 |
| 0.4637 | 0.9333 | 2280 | 0.4173 |
| 0.4454 | 0.9415 | 2300 | 0.4190 |
| 0.4579 | 0.9497 | 2320 | 0.4133 |
| 0.4567 | 0.9578 | 2340 | 0.4153 |
| 0.4479 | 0.9660 | 2360 | 0.4152 |
| 0.4523 | 0.9742 | 2380 | 0.4138 |
| 0.4559 | 0.9824 | 2400 | 0.4147 |
| 0.4493 | 0.9906 | 2420 | 0.4131 |
| 0.4568 | 0.9988 | 2440 | 0.4145 |
| 0.4494 | 1.0070 | 2460 | 0.4120 |
| 0.4549 | 1.0151 | 2480 | 0.4120 |
| 0.4491 | 1.0233 | 2500 | 0.4130 |
| 0.454 | 1.0315 | 2520 | 0.4143 |
| 0.4474 | 1.0397 | 2540 | 0.4134 |
| 0.4541 | 1.0479 | 2560 | 0.4134 |
| 0.4458 | 1.0561 | 2580 | 0.4117 |
| 0.4469 | 1.0643 | 2600 | 0.4108 |
| 0.4502 | 1.0725 | 2620 | 0.4120 |
| 0.4447 | 1.0806 | 2640 | 0.4102 |
| 0.445 | 1.0888 | 2660 | 0.4107 |
| 0.4496 | 1.0970 | 2680 | 0.4080 |
| 0.445 | 1.1052 | 2700 | 0.4097 |
| 0.4549 | 1.1134 | 2720 | 0.4071 |
| 0.4476 | 1.1216 | 2740 | 0.4095 |
| 0.4427 | 1.1298 | 2760 | 0.4111 |
| 0.4412 | 1.1379 | 2780 | 0.4091 |
| 0.441 | 1.1461 | 2800 | 0.4111 |
| 0.4465 | 1.1543 | 2820 | 0.4080 |
| 0.4427 | 1.1625 | 2840 | 0.4076 |
| 0.4417 | 1.1707 | 2860 | 0.4080 |
| 0.4409 | 1.1789 | 2880 | 0.4080 |
| 0.4573 | 1.1871 | 2900 | 0.4078 |
| 0.443 | 1.1953 | 2920 | 0.4067 |
| 0.4412 | 1.2034 | 2940 | 0.4079 |
| 0.4384 | 1.2116 | 2960 | 0.4079 |
| 0.4426 | 1.2198 | 2980 | 0.4083 |
| 0.4407 | 1.2280 | 3000 | 0.4056 |
| 0.4487 | 1.2362 | 3020 | 0.4059 |
| 0.4421 | 1.2444 | 3040 | 0.4064 |
| 0.4412 | 1.2526 | 3060 | 0.4057 |
| 0.4354 | 1.2607 | 3080 | 0.4073 |
| 0.4454 | 1.2689 | 3100 | 0.4056 |
| 0.4376 | 1.2771 | 3120 | 0.4064 |
| 0.4469 | 1.2853 | 3140 | 0.4043 |
| 0.4437 | 1.2935 | 3160 | 0.4038 |
| 0.4412 | 1.3017 | 3180 | 0.4031 |
| 0.4354 | 1.3099 | 3200 | 0.4053 |
| 0.4413 | 1.3181 | 3220 | 0.4050 |
| 0.4344 | 1.3262 | 3240 | 0.4048 |
| 0.4471 | 1.3344 | 3260 | 0.4022 |
| 0.4347 | 1.3426 | 3280 | 0.4049 |
| 0.4367 | 1.3508 | 3300 | 0.4019 |
| 0.4391 | 1.3590 | 3320 | 0.4033 |
| 0.4424 | 1.3672 | 3340 | 0.4019 |
| 0.4391 | 1.3754 | 3360 | 0.4009 |
| 0.4377 | 1.3835 | 3380 | 0.4014 |
| 0.4413 | 1.3917 | 3400 | 0.4015 |
| 0.4382 | 1.3999 | 3420 | 0.4006 |
| 0.4298 | 1.4081 | 3440 | 0.4015 |
| 0.4503 | 1.4163 | 3460 | 0.4019 |
| 0.4413 | 1.4245 | 3480 | 0.4015 |
| 0.4343 | 1.4327 | 3500 | 0.3996 |
| 0.4373 | 1.4409 | 3520 | 0.4002 |
| 0.4338 | 1.4490 | 3540 | 0.4016 |
| 0.4292 | 1.4572 | 3560 | 0.4000 |
| 0.4444 | 1.4654 | 3580 | 0.4004 |
| 0.4342 | 1.4736 | 3600 | 0.3996 |
| 0.4339 | 1.4818 | 3620 | 0.4004 |
| 0.4291 | 1.4900 | 3640 | 0.4006 |
| 0.435 | 1.4982 | 3660 | 0.3993 |
| 0.445 | 1.5063 | 3680 | 0.3999 |
| 0.4389 | 1.5145 | 3700 | 0.4009 |
| 0.4316 | 1.5227 | 3720 | 0.3988 |
| 0.4363 | 1.5309 | 3740 | 0.3994 |
| 0.4384 | 1.5391 | 3760 | 0.3995 |
| 0.4355 | 1.5473 | 3780 | 0.4006 |
| 0.436 | 1.5555 | 3800 | 0.3983 |
| 0.4384 | 1.5637 | 3820 | 0.3981 |
| 0.4394 | 1.5718 | 3840 | 0.3985 |
| 0.4392 | 1.5800 | 3860 | 0.3978 |
| 0.4456 | 1.5882 | 3880 | 0.3991 |
| 0.4359 | 1.5964 | 3900 | 0.3984 |
| 0.4328 | 1.6046 | 3920 | 0.4004 |
| 0.4272 | 1.6128 | 3940 | 0.3992 |
| 0.4352 | 1.6210 | 3960 | 0.3993 |
| 0.4262 | 1.6291 | 3980 | 0.3994 |
| 0.4406 | 1.6373 | 4000 | 0.3979 |
| 0.4291 | 1.6455 | 4020 | 0.3991 |
| 0.4262 | 1.6537 | 4040 | 0.3975 |
| 0.4337 | 1.6619 | 4060 | 0.3978 |
| 0.4404 | 1.6701 | 4080 | 0.3964 |
| 0.4408 | 1.6783 | 4100 | 0.3983 |
| 0.4378 | 1.6865 | 4120 | 0.3977 |
| 0.4322 | 1.6946 | 4140 | 0.3973 |
| 0.4343 | 1.7028 | 4160 | 0.3970 |
| 0.43 | 1.7110 | 4180 | 0.3961 |
| 0.4343 | 1.7192 | 4200 | 0.3958 |
| 0.4308 | 1.7274 | 4220 | 0.3965 |
| 0.4355 | 1.7356 | 4240 | 0.3952 |
| 0.4371 | 1.7438 | 4260 | 0.3966 |
| 0.4342 | 1.7519 | 4280 | 0.3956 |
| 0.4364 | 1.7601 | 4300 | 0.3962 |
| 0.434 | 1.7683 | 4320 | 0.3953 |
| 0.4335 | 1.7765 | 4340 | 0.3965 |
| 0.4317 | 1.7847 | 4360 | 0.3953 |
| 0.4298 | 1.7929 | 4380 | 0.3954 |
| 0.4307 | 1.8011 | 4400 | 0.3942 |
| 0.4345 | 1.8093 | 4420 | 0.3952 |
| 0.433 | 1.8174 | 4440 | 0.3943 |
| 0.4261 | 1.8256 | 4460 | 0.3955 |
| 0.4338 | 1.8338 | 4480 | 0.3950 |
| 0.4263 | 1.8420 | 4500 | 0.3944 |
| 0.4263 | 1.8502 | 4520 | 0.3939 |
| 0.436 | 1.8584 | 4540 | 0.3943 |
| 0.432 | 1.8666 | 4560 | 0.3946 |
| 0.4302 | 1.8747 | 4580 | 0.3942 |
| 0.4333 | 1.8829 | 4600 | 0.3936 |
| 0.4316 | 1.8911 | 4620 | 0.3936 |
| 0.4294 | 1.8993 | 4640 | 0.3938 |
| 0.4265 | 1.9075 | 4660 | 0.3936 |
| 0.4294 | 1.9157 | 4680 | 0.3943 |
| 0.4319 | 1.9239 | 4700 | 0.3942 |
| 0.4391 | 1.9321 | 4720 | 0.3933 |
| 0.4243 | 1.9402 | 4740 | 0.3944 |
| 0.4325 | 1.9484 | 4760 | 0.3930 |
| 0.4343 | 1.9566 | 4780 | 0.3924 |
| 0.4287 | 1.9648 | 4800 | 0.3938 |
| 0.4322 | 1.9730 | 4820 | 0.3933 |
| 0.4283 | 1.9812 | 4840 | 0.3926 |
| 0.4309 | 1.9894 | 4860 | 0.3935 |
| 0.4238 | 1.9975 | 4880 | 0.3922 |
| 0.4217 | 2.0057 | 4900 | 0.3925 |
| 0.425 | 2.0139 | 4920 | 0.3926 |
| 0.4389 | 2.0221 | 4940 | 0.3925 |
| 0.4346 | 2.0303 | 4960 | 0.3920 |
| 0.4254 | 2.0385 | 4980 | 0.3931 |
| 0.4223 | 2.0467 | 5000 | 0.3919 |
| 0.4268 | 2.0549 | 5020 | 0.3930 |
| 0.4228 | 2.0630 | 5040 | 0.3929 |
| 0.4325 | 2.0712 | 5060 | 0.3928 |
| 0.4255 | 2.0794 | 5080 | 0.3928 |
| 0.4305 | 2.0876 | 5100 | 0.3922 |
| 0.4333 | 2.0958 | 5120 | 0.3919 |
| 0.4332 | 2.1040 | 5140 | 0.3927 |
| 0.4261 | 2.1122 | 5160 | 0.3929 |
| 0.429 | 2.1203 | 5180 | 0.3916 |
| 0.4274 | 2.1285 | 5200 | 0.3921 |
| 0.4277 | 2.1367 | 5220 | 0.3928 |
| 0.4356 | 2.1449 | 5240 | 0.3913 |
| 0.4268 | 2.1531 | 5260 | 0.3921 |
| 0.4297 | 2.1613 | 5280 | 0.3921 |
| 0.4272 | 2.1695 | 5300 | 0.3915 |
| 0.4337 | 2.1777 | 5320 | 0.3922 |
| 0.4312 | 2.1858 | 5340 | 0.3911 |
| 0.426 | 2.1940 | 5360 | 0.3917 |
| 0.4305 | 2.2022 | 5380 | 0.3925 |
| 0.4373 | 2.2104 | 5400 | 0.3919 |
| 0.4319 | 2.2186 | 5420 | 0.3914 |
| 0.43 | 2.2268 | 5440 | 0.3921 |
| 0.4307 | 2.2350 | 5460 | 0.3910 |
| 0.4352 | 2.2431 | 5480 | 0.3912 |
| 0.4323 | 2.2513 | 5500 | 0.3907 |
| 0.4255 | 2.2595 | 5520 | 0.3905 |
| 0.4286 | 2.2677 | 5540 | 0.3913 |
| 0.4271 | 2.2759 | 5560 | 0.3916 |
| 0.4319 | 2.2841 | 5580 | 0.3915 |
| 0.4175 | 2.2923 | 5600 | 0.3911 |
| 0.424 | 2.3005 | 5620 | 0.3914 |
| 0.4365 | 2.3086 | 5640 | 0.3907 |
| 0.4322 | 2.3168 | 5660 | 0.3906 |
| 0.4227 | 2.3250 | 5680 | 0.3910 |
| 0.4308 | 2.3332 | 5700 | 0.3909 |
| 0.4268 | 2.3414 | 5720 | 0.3910 |
| 0.4352 | 2.3496 | 5740 | 0.3911 |
| 0.4274 | 2.3578 | 5760 | 0.3898 |
| 0.4255 | 2.3659 | 5780 | 0.3901 |
| 0.4277 | 2.3741 | 5800 | 0.3903 |
| 0.4209 | 2.3823 | 5820 | 0.3905 |
| 0.4221 | 2.3905 | 5840 | 0.3911 |
| 0.4247 | 2.3987 | 5860 | 0.3911 |
| 0.4263 | 2.4069 | 5880 | 0.3910 |
| 0.4284 | 2.4151 | 5900 | 0.3912 |
| 0.4251 | 2.4233 | 5920 | 0.3910 |
| 0.4275 | 2.4314 | 5940 | 0.3908 |
| 0.4271 | 2.4396 | 5960 | 0.3904 |
| 0.4333 | 2.4478 | 5980 | 0.3904 |
| 0.4237 | 2.4560 | 6000 | 0.3903 |
| 0.4351 | 2.4642 | 6020 | 0.3903 |
| 0.4313 | 2.4724 | 6040 | 0.3902 |
| 0.4243 | 2.4806 | 6060 | 0.3910 |
| 0.4289 | 2.4887 | 6080 | 0.3907 |
| 0.4299 | 2.4969 | 6100 | 0.3909 |
| 0.428 | 2.5051 | 6120 | 0.3903 |
| 0.4202 | 2.5133 | 6140 | 0.3902 |
| 0.4291 | 2.5215 | 6160 | 0.3899 |
| 0.4344 | 2.5297 | 6180 | 0.3899 |
| 0.4256 | 2.5379 | 6200 | 0.3902 |
| 0.4227 | 2.5460 | 6220 | 0.3904 |
| 0.43 | 2.5542 | 6240 | 0.3907 |
| 0.4252 | 2.5624 | 6260 | 0.3900 |
| 0.4224 | 2.5706 | 6280 | 0.3909 |
| 0.4207 | 2.5788 | 6300 | 0.3909 |
| 0.4265 | 2.5870 | 6320 | 0.3906 |
| 0.4341 | 2.5952 | 6340 | 0.3907 |
| 0.4228 | 2.6034 | 6360 | 0.3903 |
| 0.4196 | 2.6115 | 6380 | 0.3904 |
| 0.4216 | 2.6197 | 6400 | 0.3897 |
| 0.4339 | 2.6279 | 6420 | 0.3904 |
| 0.4255 | 2.6361 | 6440 | 0.3903 |
| 0.4261 | 2.6443 | 6460 | 0.3905 |
| 0.43 | 2.6525 | 6480 | 0.3906 |
| 0.4265 | 2.6607 | 6500 | 0.3907 |
| 0.4279 | 2.6688 | 6520 | 0.3904 |
| 0.4298 | 2.6770 | 6540 | 0.3901 |
| 0.4312 | 2.6852 | 6560 | 0.3901 |
| 0.4199 | 2.6934 | 6580 | 0.3898 |
| 0.4288 | 2.7016 | 6600 | 0.3902 |
| 0.4325 | 2.7098 | 6620 | 0.3905 |
| 0.4246 | 2.7180 | 6640 | 0.3903 |
| 0.4281 | 2.7262 | 6660 | 0.3899 |
| 0.4296 | 2.7343 | 6680 | 0.3903 |
| 0.4247 | 2.7425 | 6700 | 0.3898 |
| 0.4252 | 2.7507 | 6720 | 0.3905 |
| 0.4255 | 2.7589 | 6740 | 0.3904 |
| 0.4282 | 2.7671 | 6760 | 0.3902 |
| 0.4225 | 2.7753 | 6780 | 0.3900 |
| 0.4251 | 2.7835 | 6800 | 0.3900 |
| 0.4201 | 2.7916 | 6820 | 0.3903 |
| 0.4252 | 2.7998 | 6840 | 0.3905 |
| 0.427 | 2.8080 | 6860 | 0.3907 |
| 0.428 | 2.8162 | 6880 | 0.3907 |
| 0.437 | 2.8244 | 6900 | 0.3900 |
| 0.4257 | 2.8326 | 6920 | 0.3901 |
| 0.4239 | 2.8408 | 6940 | 0.3905 |
| 0.4276 | 2.8490 | 6960 | 0.3902 |
| 0.4274 | 2.8571 | 6980 | 0.3897 |
| 0.4327 | 2.8653 | 7000 | 0.3902 |
| 0.4313 | 2.8735 | 7020 | 0.3896 |
| 0.4277 | 2.8817 | 7040 | 0.3904 |
| 0.4289 | 2.8899 | 7060 | 0.3904 |
| 0.4321 | 2.8981 | 7080 | 0.3900 |
| 0.4232 | 2.9063 | 7100 | 0.3902 |
| 0.4274 | 2.9144 | 7120 | 0.3901 |
| 0.4339 | 2.9226 | 7140 | 0.3901 |
| 0.4226 | 2.9308 | 7160 | 0.3904 |
| 0.4184 | 2.9390 | 7180 | 0.3902 |
| 0.4242 | 2.9472 | 7200 | 0.3901 |
| 0.4259 | 2.9554 | 7220 | 0.3902 |
| 0.4297 | 2.9636 | 7240 | 0.3897 |
| 0.4268 | 2.9718 | 7260 | 0.3900 |
| 0.4281 | 2.9799 | 7280 | 0.3900 |
| 0.4234 | 2.9881 | 7300 | 0.3901 |
| 0.4196 | 2.9963 | 7320 | 0.3900 |
### Framework versions
- PEFT 0.15.1
- Transformers 4.51.3
- Pytorch 2.6.0+cu118
- Datasets 3.5.0
- Tokenizers 0.21.1 |
juhw/q4102 | juhw | "2025-05-09T20:27:50Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T20:24:33Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
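In the absence of author-provided instructions, here is a generic loading sketch inferred from the repo tags (`llama`, `text-generation`); this is an assumption, not documented usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed from the repo tags (llama, text-generation); not documented by the author
tokenizer = AutoTokenizer.from_pretrained("juhw/q4102")
model = AutoModelForCausalLM.from_pretrained("juhw/q4102")
```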
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tgallup/city-gan | tgallup | "2025-05-09T20:27:14Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-09T20:27:14Z" | ---
license: apache-2.0
---
|
waykaji/gensyn-checkpoints-swift_savage_anaconda | waykaji | "2025-05-09T20:22:58Z" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am swift savage anaconda",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-21T02:54:43Z" | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: gensyn-checkpoints-swift_savage_anaconda
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am swift savage anaconda
- unsloth
- trl
licence: license
---
# Model Card for gensyn-checkpoints-swift_savage_anaconda
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="waykaji/gensyn-checkpoints-swift_savage_anaconda", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
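For reference, TRL exposes GRPO through `GRPOTrainer`; the sketch below mirrors TRL's own quickstart with a toy reward function and is illustrative, not the exact RL-swarm setup used for this checkpoint:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # example dataset from the TRL docs

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-output"),
    train_dataset=dataset,
)
trainer.train()
```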
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
vertings6/4a81a55f-91d7-46b7-9cb4-96d33ee32bb5 | vertings6 | "2025-05-09T20:22:50Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-09T20:12:47Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4a81a55f-91d7-46b7-9cb4-96d33ee32bb5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 73655b170c5bbcfd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/73655b170c5bbcfd_train_data.json
type:
field_instruction: query
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: vertings6/4a81a55f-91d7-46b7-9cb4-96d33ee32bb5
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 400
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/73655b170c5bbcfd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f3d8f374-7f54-44fd-bcdd-e66fab6c2305
wandb_project: s56-28
wandb_run: your_name
wandb_runid: f3d8f374-7f54-44fd-bcdd-e66fab6c2305
warmup_steps: 20
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 4a81a55f-91d7-46b7-9cb4-96d33ee32bb5
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7977
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.6087 | 0.0217 | 400 | 0.7977 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
DucPhamF8/finetuned-BERTpho-syllable-V-correction | DucPhamF8 | "2025-05-09T20:18:38Z" | 3 | 0 | null | [
"tensorboard",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:vinai/bartpho-syllable",
"base_model:finetune:vinai/bartpho-syllable",
"region:us"
] | text2text-generation | "2025-05-07T19:34:15Z" | ---
base_model: vinai/bartpho-syllable
tags:
- text2text-generation
- generated_from_trainer
metrics:
- sacrebleu
model-index:
- name: vietnamese-correction-v2
results: []
---
# vietnamese-correction-v2
This model is a fine-tuned version of [vinai/bartpho-syllable](https://huggingface.co/vinai/bartpho-syllable) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1848
- Sacrebleu: 34.8103
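A minimal inference sketch with the `transformers` pipeline (assumed usage based on the `text2text-generation` tag; the input sentence is illustrative):
```python
from transformers import pipeline

corrector = pipeline(
    "text2text-generation",
    model="DucPhamF8/finetuned-BERTpho-syllable-V-correction",
)
# Illustrative input: Vietnamese text written without diacritics
print(corrector("toi dang hoc tieng viet", max_length=128)[0]["generated_text"])
```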
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
Nitrals-Quants/Nemotron-15b-Thinker-v0.1-Q4_K_M-GGUF | Nitrals-Quants | "2025-05-09T20:16:37Z" | 0 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Nitral-AI/Nemotron-15b-Thinker-v0.1",
"base_model:quantized:Nitral-AI/Nemotron-15b-Thinker-v0.1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-09T20:15:57Z" | ---
base_model: Nitral-AI/Nemotron-15b-Thinker-v0.1
language:
- en
license: other
tags:
- llama-cpp
- gguf-my-repo
---
# Nitral-AI/Nemotron-15b-Thinker-v0.1-Q4_K_M-GGUF
This model was converted to GGUF format from [`Nitral-AI/Nemotron-15b-Thinker-v0.1`](https://huggingface.co/Nitral-AI/Nemotron-15b-Thinker-v0.1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Nitral-AI/Nemotron-15b-Thinker-v0.1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Nitral-AI/Nemotron-15b-Thinker-v0.1-Q4_K_M-GGUF --hf-file nemotron-15b-thinker-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Nitral-AI/Nemotron-15b-Thinker-v0.1-Q4_K_M-GGUF --hf-file nemotron-15b-thinker-v0.1-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Nitral-AI/Nemotron-15b-Thinker-v0.1-Q4_K_M-GGUF --hf-file nemotron-15b-thinker-v0.1-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Nitral-AI/Nemotron-15b-Thinker-v0.1-Q4_K_M-GGUF --hf-file nemotron-15b-thinker-v0.1-q4_k_m.gguf -c 2048
```
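The GGUF file can also be used from Python via llama-cpp-python (a sketch, assuming `pip install llama-cpp-python` with Hugging Face Hub download support installed):
```python
# llama-cpp-python sketch (assumptions noted above)
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Nitral-AI/Nemotron-15b-Thinker-v0.1-Q4_K_M-GGUF",
    filename="nemotron-15b-thinker-v0.1-q4_k_m.gguf",
    n_ctx=2048,
)
print(llm("The meaning to life and the universe is", max_tokens=64)["choices"][0]["text"])
```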
|
jyp96/teapot | jyp96 | "2025-05-09T20:16:07Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-08T08:19:47Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks teapot
widget:
- text: A photo of sks teapot in a bucket
output:
url: image_0.png
- text: A photo of sks teapot in a bucket
output:
url: image_1.png
- text: A photo of sks teapot in a bucket
output:
url: image_2.png
- text: A photo of sks teapot in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/teapot
<Gallery />
## Model description
These are jyp96/teapot DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
LoRA for the text encoder was not enabled.
## Trigger words
You should use `a photo of sks teapot` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](jyp96/teapot/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyp96/teapot', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks teapot in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/teapot/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# A minimal sketch, mirroring the diffusers snippet above
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyp96/teapot', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks teapot in a bucket').images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
liangli217/simple_genomics_model_first_attempt | liangli217 | "2025-05-09T20:15:23Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T20:15:11Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JinwonChung/q-FrozenLake-v1-4x4-noSlippery | JinwonChung | "2025-05-09T20:15:22Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2025-05-09T20:15:19Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the Deep RL course notebooks use gymnasium

# `load_from_hub` is the download helper defined in the Hugging Face Deep RL course notebooks
model = load_from_hub(repo_id="JinwonChung/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
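Once loaded, a greedy rollout with the Q-table might look like the sketch below (assuming the pickled dict exposes a `qtable` key and a gymnasium-style step API, as in the Deep RL course convention):
```python
# Greedy rollout sketch (assumes model["qtable"] exists and gymnasium-style API)
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the highest-value action
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```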
|
hoan17/saving_P1000s100x1x2KL_150 | hoan17 | "2025-05-09T20:13:11Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2025-05-09T20:12:09Z" | ---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
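A minimal loading sketch, assuming the checkpoint exposes a standard `StableDiffusionPipeline` (per the repository tags); the prompt is illustrative:
```python
# Minimal sketch (assumptions noted above)
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_P1000s100x1x2KL_150", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of a corgi").images[0]
image.save("sample.png")
```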
|
nnilayy/dreamer-valence-binary-classification-Kfold-4 | nnilayy | "2025-05-09T20:12:08Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T20:12:07Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
Taimoor4477/rephraserHumanizerModelFineTunedPraphraser233509052025 | Taimoor4477 | "2025-05-09T20:10:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-05-09T20:10:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aaaaaswwe/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_pale_ferret | aaaaaswwe | "2025-05-09T20:10:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am giant pale ferret",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-05-01T14:01:16Z" | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_pale_ferret
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am giant pale ferret
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_pale_ferret
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aaaaaswwe/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-giant_pale_ferret", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
danishconfinality/db_slr_d_1.0 | danishconfinality | "2025-05-09T20:07:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-05-09T20:07:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Raniahossam33/qwen2.5-7b-instruct-ditto-Yemen-topic-sap-custom | Raniahossam33 | "2025-05-09T20:05:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-24T21:02:05Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
juhx/qq800 | juhx | "2025-05-09T20:05:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T20:01:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fyuuki0jp/gemma-3-1b-it-thinking | fyuuki0jp | "2025-05-09T20:03:10Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma3_text",
"text-generation",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"conversational",
"base_model:unsloth/gemma-3-1b-it",
"base_model:finetune:unsloth/gemma-3-1b-it",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T05:38:19Z" | ---
base_model: unsloth/gemma-3-1b-it
library_name: transformers
model_name: gemma-3-1b-it-thinking
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for gemma-3-1b-it-thinking
This model is a fine-tuned version of [unsloth/gemma-3-1b-it](https://huggingface.co/unsloth/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fyuuki0jp/gemma-3-1b-it-thinking", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
CinthyaCriollo/llama2-ultrachat-best | CinthyaCriollo | "2025-05-09T20:02:17Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T20:01:56Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
andysalerno/Qwen3-8B-ao-autoquant | andysalerno | "2025-05-09T20:01:28Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"qwen3",
"feature-extraction",
"torchao-my-repo",
"text-generation",
"conversational",
"arxiv:2309.00071",
"base_model:Qwen/Qwen3-8B",
"base_model:quantized:Qwen/Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"torchao",
"region:us"
] | text-generation | "2025-05-09T19:58:59Z" | ---
base_model:
- Qwen/Qwen3-8B
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- torchao-my-repo
---
# Qwen/Qwen3-8B (Quantized)
## Description
This model is a quantized version of the original model [`Qwen/Qwen3-8B`](https://huggingface.co/Qwen/Qwen3-8B).
It was quantized with the TorchAO library via the [torchao-my-repo](https://huggingface.co/spaces/pytorch/torchao-my-repo) space.
## Quantization Details
- **Quantization Type**: autoquant
- **Group Size**: 128
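A minimal loading sketch, assuming `torchao` is installed and the quantization is picked up automatically by `transformers` when loading the checkpoint:
```python
# Loading sketch for the quantized checkpoint (assumptions noted above)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "andysalerno/Qwen3-8B-ao-autoquant"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)
```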
# 📄 Original Model Information
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support of 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content from given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer`;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
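As an alternative to editing `config.json` on disk, `transformers` can usually take the same override at load time, since keyword arguments to `from_pretrained` are forwarded to the config. A sketch (the model id is illustrative):
```python
from transformers import AutoModelForCausalLM

# Same fields as the config.json modification shown above
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-8B",
    torch_dtype="auto",
    device_map="auto",
    rope_scaling={
        "rope_type": "yarn",
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
    },
    max_position_embeddings=131072,
)
```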
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade to `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
   - For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. A `generate` sketch with these settings appears after this list.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part; it does not need to include the thinking content. This is already implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed.
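As a concrete illustration of the sampling settings in item 1, here is a sketch of a thinking-mode `generate` call (parameter names follow the standard `transformers` generation API):
```python
# Thinking mode: sampled decoding only, never greedy
output_ids = model.generate(
    **inputs,
    max_new_tokens=32768,  # adequate output length for most queries
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```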
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen3,
title = {Qwen3},
url = {https://qwenlm.github.io/blog/qwen3/},
author = {Qwen Team},
month = {April},
year = {2025}
}
``` |
jyp96/robot_toy | jyp96 | "2025-05-09T19:58:24Z" | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-08T07:59:40Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks robot_toy
widget:
- text: A photo of sks robot_toy in a bucket
output:
url: image_0.png
- text: A photo of sks robot_toy in a bucket
output:
url: image_1.png
- text: A photo of sks robot_toy in a bucket
output:
url: image_2.png
- text: A photo of sks robot_toy in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/robot_toy
<Gallery />
## Model description
These are jyp96/robot_toy DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
LoRA for the text encoder was not enabled.
## Trigger words
You should use `a photo of sks robot_toy` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](https://huggingface.co/jyp96/robot_toy/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyp96/robot_toy', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks robot_toy in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/robot_toy/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Atomic-Ai/AtomicGPT-T1 | Atomic-Ai | "2025-05-09T19:54:22Z" | 11 | 1 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"conversational",
"ai",
"artificialintelligence",
"ki",
"machinelearning",
"deeplearning",
"neuralnetworks",
"nlp",
"computervision",
"generativeai",
"r1",
"o3",
"GPT",
"Transformers",
"de",
"dataset:Atomic-Ai/AtomicGPT-Think-1-dataset",
"base_model:Atomic-Ai/AtomicGPT-3",
"base_model:finetune:Atomic-Ai/AtomicGPT-3",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-05T16:30:56Z" | ---
license: mit
language:
- de
base_model:
- Atomic-Ai/AtomicGPT-3
library_name: transformers
datasets:
- Atomic-Ai/AtomicGPT-Think-1-dataset
metrics:
- f1
- perplexity
- exact_match
tags:
- conversational
- ai
- artificialintelligence
- ki
- machinelearning
- deeplearning
- neuralnetworks
- nlp
- computervision
- generativeai
- gpt2
- r1
- o3
- GPT
- Transformers
pipeline_tag: text-generation
new_version: Atomic-Ai/AtomicGPT-T1.5-mini
---
# AtomicGPT T1
## Description
**AtomicGPT T1** is our first *thinking model*! The reasoning process appears between `<think>` and `</think>`.
The advantage is that the model can deliver better answers because it *thinks things through* first! <br>
The "T" in the name stands for **Think**!
Training took **40 minutes** with **10 epochs**.
The dataset was only **800KB** in size and still delivered surprisingly good results!
Our previous model, **AtomicGPT 3.0**, was used for training,
so the model comes equipped with a sufficient amount of knowledge!
**AtomicGPT T1** is a small German AI model capable of delivering high-quality answers in **German**.
The model is not perfect yet, but it has been continuously equipped with improved knowledge over time.
## Comparisons
| Tasks | L-GPT_1 | L-GPT_1.1 | L-GPT_1.5 | L-GPT_1.5 mini | AtomicGPT 1.0 | AtomicGPT 2.0 | AtomicGPT 3.0 | AtomicGPT T1 |
|----------|---------|-----------|-----------|----------------|----------------|----------------|----------------|----------------|
| Q&A | 7.5% | 44.17% | 73.33% | 64.17% | 58.33% | 59.17% | 90% | **100%** |
---
## Usage
To use the model, make sure the prompt is sent in the following format:
```text
<user>prompt<End><AI Assistent>
```
When `<End>` is detected, generation should be stopped.
```python
from transformers import pipeline, AutoTokenizer, AutoModelForCausalLM
import torch

# Load model and tokenizer
MODEL_PATH = "Atomic-Ai/AtomicGPT-T1"
tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH)

print("Special tokens:", tokenizer.additional_special_tokens)
print("Pad token:", tokenizer.pad_token)
print("EOS token:", tokenizer.eos_token)

device = 0 if torch.cuda.is_available() else -1
print(f"Using {'GPU' if device == 0 else 'CPU'} for inference")

chatbot = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=device
)

def generate_response(prompt):
    output = chatbot(
        prompt,
        max_new_tokens=1024,
        eos_token_id=50268,
        pad_token_id=tokenizer.pad_token_id,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
        top_k=50,
        repetition_penalty=1.1
    )
    return output[0]['generated_text']

def format_chat(user_input):
    # <user>, <End> and <AI Assistent> are the model's special tokens; keep them verbatim
    return f"<user>{user_input}<End><AI Assistent>"

def extract_assistant_response(full_text):
    try:
        parts = full_text.split("<AI Assistent>")
        if len(parts) > 1:
            response_part = parts[1]
            if "<End>" in response_part:
                return response_part.split("<End>")[0].strip()
            else:
                return response_part.strip()
        return "Error: invalid response format"
    except Exception as e:
        return f"Error during extraction: {str(e)}"

def main():
    print("\n=== AtomicGPT Chat ===")
    print("Chat started! Type 'exit' to quit.")
    print("----------------------------------")
    while True:
        user_input = input("\nYou: ")
        if user_input.lower() in ['exit', 'quit', 'ende']:
            print("Chat ended. Goodbye!")
            break
        try:
            prompt = format_chat(user_input)
            full_response = generate_response(prompt)
            assistant_response = extract_assistant_response(full_response)
            print(f"\nAI Assistant: {assistant_response}")
        except Exception as e:
            print(f"\nError: {str(e)}")
            print("Try a different input.")

if __name__ == "__main__":
    main()
```
---
## Support Us
Join our community:
👉 **[Discord Server](https://discord.gg/atKjP6Jb)**
--- |
Narine21/Horse | Narine21 | "2025-05-09T19:52:52Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-09T19:52:52Z" | ---
license: apache-2.0
---
|
bxp547/ppo-LunarLander-v2_bop | bxp547 | "2025-05-09T19:50:22Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2025-05-09T19:47:31Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.44 +/- 24.02
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check this repo's Files tab for the actual name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is an assumption; verify the .zip name in this repo's files
checkpoint = load_from_hub("bxp547/ppo-LunarLander-v2_bop", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
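A minimal evaluation sketch (assumes a recent `stable-baselines3` with `gymnasium`; older setups use `gym`, and newer `gymnasium` releases renamed the environment `LunarLander-v3`):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")  # requires gymnasium[box2d]
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```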
|
mlfoundations-dev/openr1_codeforces_0.3k | mlfoundations-dev | "2025-05-09T19:50:00Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T18:29:28Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openr1_codeforces_0.3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openr1_codeforces_0.3k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/openr1_codeforces_0.3k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 13.0
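For reference, the effective batch size follows directly from the values above: 1 (per-device) × 4 (devices) × 8 (gradient accumulation steps) = 32.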
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.6.0+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
kelu01/Chem_Fuser | kelu01 | "2025-05-09T19:49:09Z" | 0 | 0 | null | [
"pytorch",
"roberta",
"license:apache-2.0",
"region:us"
] | null | "2025-05-09T04:22:10Z" | ---
license: apache-2.0
---
|
shradharp/legal-ft-09ebdeed-ece2-4eeb-ac2c-8105b158e46c | shradharp | "2025-05-09T19:47:42Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:156",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:Snowflake/snowflake-arctic-embed-l",
"base_model:finetune:Snowflake/snowflake-arctic-embed-l",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-05-09T19:38:44Z" | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: Which multi-modal models were released by significant vendors in
2024 according to the context?
sentences:
- 'An interesting point of comparison here could be the way railways rolled out
around the world in the 1800s. Constructing these required enormous investments
and had a massive environmental impact, and many of the lines that were built
turned out to be unnecessary—sometimes multiple lines from different companies
serving the exact same routes!
The resulting bubbles contributed to several financial crashes, see Wikipedia
for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
left us with a lot of useful infrastructure and a great deal of bankruptcies and
environmental damage.
The year of slop'
- 'In 2024, almost every significant model vendor released multi-modal models. We
saw the Claude 3 series from Anthropic in March, Gemini 1.5 Pro in April (images,
audio and video), then September brought Qwen2-VL and Mistral’s Pixtral 12B and
Meta’s Llama 3.2 11B and 90B vision models. We got audio input and output from
OpenAI in October, then November saw SmolVLM from Hugging Face and December saw
image and video models from Amazon Nova.
In October I upgraded my LLM CLI tool to support multi-modal models via attachments.
It now has plugins for a whole collection of different vision models.'
- 'OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely
available from its launch in June. This was a momentus change, because for the
previous year free users had mostly been restricted to GPT-3.5 level models, meaning
new users got a very inaccurate mental model of what a capable LLM could actually
do.
That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT
Pro. This $200/month subscription service is the only way to access their most
capable model, o1 Pro.
Since the trick behind the o1 series (and the future models it will undoubtedly
inspire) is to expend more compute time to get better results, I don’t think those
days of free access to the best available models are likely to return.'
- source_sentence: How did the construction of railways in the 1800s impact the environment?
sentences:
- 'DeepSeek v3 is a huge 685B parameter model—one of the largest openly licensed
models currently available, significantly bigger than the largest of Meta’s Llama
series, Llama 3.1 405B.
Benchmarks put it up there with Claude 3.5 Sonnet. Vibe benchmarks (aka the Chatbot
Arena) currently rank it 7th, just behind the Gemini 2.0 and OpenAI 4o/o1 models.
This is by far the highest ranking openly licensed model.
The really impressive thing about DeepSeek v3 is the training cost. The model
was trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000. Llama
3.1 405B trained 30,840,000 GPU hours—11x that used by DeepSeek v3, for a model
that benchmarks slightly worse.'
- 'An interesting point of comparison here could be the way railways rolled out
around the world in the 1800s. Constructing these required enormous investments
and had a massive environmental impact, and many of the lines that were built
turned out to be unnecessary—sometimes multiple lines from different companies
serving the exact same routes!
The resulting bubbles contributed to several financial crashes, see Wikipedia
for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
left us with a lot of useful infrastructure and a great deal of bankruptcies and
environmental damage.
The year of slop'
- 'The boring yet crucial secret behind good system prompts is test-driven development.
You don’t write down a system prompt and find ways to test it. You write down
tests and find a system prompt that passes them.
It’s become abundantly clear over the course of 2024 that writing good automated
evals for LLM-powered systems is the skill that’s most needed to build useful
applications on top of these models. If you have a strong eval suite you can adopt
new models faster, iterate better and build more reliable and useful product features
than your competition.
Vercel’s Malte Ubl:'
- source_sentence: How is a prompt described in comparison to a broken ASML machine
without a manual?
sentences:
- 'The environmental impact got much, much worse
The much bigger problem here is the enormous competitive buildout of the infrastructure
that is imagined to be necessary for these models in the future.
Companies like Google, Meta, Microsoft and Amazon are all spending billions of
dollars rolling out new datacenters, with a very material impact on the electricity
grid and the environment. There’s even talk of spinning up new nuclear power stations,
but those can take decades.
Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued
crash in LLM prices might hint that it’s not. But would you want to be the big
tech executive that argued NOT to build out this infrastructure only to be proven
wrong in a few years’ time?'
- 'When @v0 first came out we were paranoid about protecting the prompt with all
kinds of pre and post processing complexity.
We completely pivoted to let it rip. A prompt without the evals, models, and especially
UX is like getting a broken ASML machine without a manual'
- 'So far, I think they’re a net positive. I’ve used them on a personal level to
improve my productivity (and entertain myself) in all sorts of different ways.
I think people who learn how to use them effectively can gain a significant boost
to their quality of life.
A lot of people are yet to be sold on their value! Some think their negatives
outweigh their positives, some think they are all hot air, and some even think
they represent an existential threat to humanity.
They’re actually quite easy to build
The most surprising thing we’ve learned about LLMs this year is that they’re actually
quite easy to build.'
- source_sentence: According to the context, what is one of the best applications
of large language models (LLMs) mentioned?
sentences:
- 'I’m still trying to figure out the best patterns for doing this for my own work.
Everyone knows that evals are important, but there remains a lack of great guidance
for how to best implement them—I’m tracking this under my evals tag. My SVG pelican
riding a bicycle benchmark is a pale imitation of what a real eval suite should
look like.
Apple Intelligence is bad, Apple’s MLX library is excellent
As a Mac user I’ve been feeling a lot better about my choice of platform this
year.
Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU
was a huge disadvantage in terms of trying out new models.'
- 'The two main categories I see are people who think AI agents are obviously things
that go and act on your behalf—the travel agent model—and people who think in
terms of LLMs that have been given access to tools which they can run in a loop
as part of solving a problem. The term “autonomy” is often thrown into the mix
too, again without including a clear definition.
(I also collected 211 definitions on Twitter a few months ago—here they are in
Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)
Whatever the term may mean, agents still have that feeling of perpetually “coming
soon”.'
- 'A lot of people are excited about AI agents—an infuriatingly vague term that
seems to be converging on “AI systems that can go away and act on your behalf”.
We’ve been talking about them all year, but I’ve seen few if any examples of them
running in production, despite lots of exciting prototypes.
I think this is because of gullibility.
Can we solve this? Honestly, I’m beginning to suspect that you can’t fully solve
gullibility without achieving AGI. So it may be quite a while before those agent
dreams can really start to come true!
Code may be the best application
Over the course of the year, it’s become increasingly clear that writing code
is one of the things LLMs are most capable of.'
- source_sentence: How many lines of Python code are generally needed to train a basic
version of a powerful system?
sentences:
- 'Intuitively, one would expect that systems this powerful would take millions
of lines of complex code. Instead, it turns out a few hundred lines of Python
is genuinely enough to train a basic version!
What matters most is the training data. You need a lot of data to make these
things work, and the quantity and quality of the training data appears to be the
most important factor in how good the resulting model is.
If you can gather the right data, and afford to pay for the GPUs to train it,
you can build an LLM.'
- 'Law is not ethics. Is it OK to train models on people’s content without their
permission, when those models will then be used in ways that compete with those
people?
As the quality of results produced by AI models has increased over the year, these
questions have become even more pressing.
The impact on human society in terms of these models is already huge, if difficult
to objectively measure.
People have certainly lost work to them—anecdotally, I’ve seen this for copywriters,
artists and translators.
There are a great deal of untold stories here. I’m hoping 2024 sees significant
amounts of dedicated journalism on this topic.
My blog in 2023
Here’s a tag cloud for content I posted to my blog in 2023 (generated using Django
SQL Dashboard):'
- 'We already knew LLMs were spookily good at writing code. If you prompt them right,
it turns out they can build you a full interactive application using HTML, CSS
and JavaScript (and tools like React if you wire up some extra supporting build
mechanisms)—often in a single prompt.
Anthropic kicked this idea into high gear when they released Claude Artifacts,
a groundbreaking new feature that was initially slightly lost in the noise due
to being described half way through their announcement of the incredible Claude
3.5 Sonnet.
With Artifacts, Claude can write you an on-demand interactive application and
then let you use it directly inside the Claude interface.
Here’s my Extract URLs app, entirely generated by Claude:'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9166666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9166666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9166666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9692441461309548
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9583333333333334
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9583333333333334
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("shradharp/legal-ft-09ebdeed-ece2-4eeb-ac2c-8105b158e46c")
# Run inference
sentences = [
'How many lines of Python code are generally needed to train a basic version of a powerful system?',
'Intuitively, one would expect that systems this powerful would take millions of lines of complex code. Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!\nWhat matters most is the training data. You need a lot of data to make these things work, and the quantity and quality of the training data appears to be the most important factor in how good the resulting model is.\nIf you can gather the right data, and afford to pay for the GPUs to train it, you can build an LLM.',
'We already knew LLMs were spookily good at writing code. If you prompt them right, it turns out they can build you a full interactive application using HTML, CSS and JavaScript (and tools like React if you wire up some extra supporting build mechanisms)—often in a single prompt.\nAnthropic kicked this idea into high gear when they released Claude Artifacts, a groundbreaking new feature that was initially slightly lost in the noise due to being described half way through their announcement of the incredible Claude 3.5 Sonnet.\nWith Artifacts, Claude can write you an on-demand interactive application and then let you use it directly inside the Claude interface.\nHere’s my Extract URLs app, entirely generated by Claude:',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9167 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9167 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9167 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9692** |
| cosine_mrr@10 | 0.9583 |
| cosine_map@100 | 0.9583 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 12 tokens</li><li>mean: 20.81 tokens</li><li>max: 36 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 135.28 tokens</li><li>max: 214 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What significant development in Artificial Intelligence occurred in 2023 according to Simon Willison’s weblog?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
| <code>How does Simon Willison describe the relationship between Large Language Models and the broader field of Artificial Intelligence?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
| <code>What are some challenges mentioned in building large language models like GPT-4?</code> | <code>Large Language Models<br>They’re actually quite easy to build<br>You can run LLMs on your own devices<br>Hobbyists can build their own fine-tuned models<br>We don’t yet know how to build GPT-4<br>Vibes Based Development<br>LLMs are really smart, and also really, really dumb<br>Gullibility is the biggest unsolved problem<br>Code may be the best application<br>The ethics of this space remain diabolically complex<br>My blog in 2023</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
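Because of the Matryoshka objective above, embeddings can be truncated to the listed smaller dimensions with modest quality loss. A sketch (the `truncate_dim` argument is available in recent `sentence-transformers` releases):

```python
from sentence_transformers import SentenceTransformer

# Load with 256-dimensional truncation for smaller vectors and cheaper search
model = SentenceTransformer(
    "shradharp/legal-ft-09ebdeed-ece2-4eeb-ac2c-8105b158e46c",
    truncate_dim=256,
)
embeddings = model.encode(["How easy are LLMs to build?"])
print(embeddings.shape)  # (1, 256)
```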
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 16 | 0.9484 |
| 2.0 | 32 | 0.9484 |
| 3.0 | 48 | 0.9638 |
| 3.125 | 50 | 0.9638 |
| 4.0 | 64 | 0.9539 |
| 5.0 | 80 | 0.9539 |
| 6.0 | 96 | 0.9539 |
| 6.25 | 100 | 0.9539 |
| 7.0 | 112 | 0.9692 |
| 8.0 | 128 | 0.9692 |
| 9.0 | 144 | 0.9692 |
| 9.375 | 150 | 0.9692 |
| 10.0 | 160 | 0.9692 |
### Framework Versions
- Python: 3.13.2
- Sentence Transformers: 4.1.0
- Transformers: 4.51.3
- PyTorch: 2.7.0+cu126
- Accelerate: 1.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
011-Sophie-Rain-SpiderMan-viral-video/Sophie.Rain.Spiderman.Video.Link | 011-Sophie-Rain-SpiderMan-viral-video | "2025-05-09T19:44:48Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-09T19:42:53Z" | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
nekomajin/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel | nekomajin | "2025-05-09T19:42:07Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am mighty hoarse camel",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-16T11:36:21Z" | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am mighty hoarse camel
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nekomajin/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-mighty_hoarse_camel", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ASethi04/Qwen-Qwen2.5-7B-hellaswag-1000-lora-4-0.0001 | ASethi04 | "2025-05-09T19:36:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T19:00:58Z" | ---
base_model: Qwen/Qwen2.5-7B
library_name: transformers
model_name: Qwen-Qwen2.5-7B-hellaswag-1000-lora-4-0.0001
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen-Qwen2.5-7B-hellaswag-1000-lora-4-0.0001
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/Qwen-Qwen2.5-7B-hellaswag-1000-lora-4-0.0001", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/yfgboa9r)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Oysiyl/colSmol-500M_ufo | Oysiyl | "2025-05-09T19:35:17Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:vidore/ColSmolVLM-Instruct-500M-base",
"base_model:adapter:vidore/ColSmolVLM-Instruct-500M-base",
"license:mit",
"region:us"
] | null | "2025-05-09T18:54:27Z" | ---
library_name: peft
license: mit
base_model: vidore/ColSmolVLM-Instruct-500M-base
tags:
- generated_from_trainer
model-index:
- name: colSmol-500M_ufo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# colSmol-500M_ufo
This model is a fine-tuned version of [vidore/ColSmolVLM-Instruct-500M-base](https://huggingface.co/vidore/ColSmolVLM-Instruct-500M-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1306 | 0.1636 | 80 | 0.1418 |
| 0.0751 | 0.3272 | 160 | 0.1086 |
| 0.0823 | 0.4908 | 240 | 0.0912 |
| 0.0513 | 0.6544 | 320 | 0.0887 |
| 0.0475 | 0.8180 | 400 | 0.0865 |
| 0.0572 | 0.9816 | 480 | 0.0878 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.3.1
- Tokenizers 0.21.0 |
Comfy-Org/ACE-Step_ComfyUI_repackaged | Comfy-Org | "2025-05-09T19:33:46Z" | 0 | 19 | null | [
"region:us"
] | null | "2025-05-07T10:49:51Z" | See:
https://comfyanonymous.github.io/ComfyUI_examples/audio/ or https://docs.comfy.org/tutorials/audio/ace-step/ace-step-v1 for how to use it in ComfyUI. |
EpistemeAI/EpistemeAI-codegemma-2-9b | EpistemeAI | "2025-05-09T19:33:43Z" | 5 | 2 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"text-classification",
"en",
"base_model:unsloth/gemma-2-9b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-9b-bnb-4bit",
"license:gemma",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-08-13T21:24:23Z" | ---
base_model: unsloth/gemma-2-9b-bnb-4bit
language:
- en
license: gemma
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
pipeline_tag: text-classification
---
# Uploaded model
- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# This model is fine-tuned on 122k code instructions.
## How to use
This repository contains two versions of Gemma-2-9B, for use with `transformers` and with the original `llama` codebase.
### Use with transformers
Starting with `transformers >= 4.43.0`, you can run conversational inference using the Transformers pipeline abstraction or by leveraging the Auto classes with the generate() function.
Make sure to update your transformers installation via `pip install --upgrade transformers`.
You need to prepare the prompt in Alpaca format to generate properly:
```python
def format_test(x):
    if x['input']:
        formatted_text = f"""Below is an instruction that describes a task. \
Write a response that appropriately completes the request.
### Instruction:
{x['instruction']}
### Input:
{x['input']}
### Response:
"""
    else:
        formatted_text = f"""Below is an instruction that describes a task. \
Write a response that appropriately completes the request.
### Instruction:
{x['instruction']}
### Response:
"""
    return formatted_text

# using code_instructions_122k_alpaca dataset
Prompt = format_test(data[155])
print(Prompt)
```
- huggingface transformers method:
```python
from unsloth import FastLanguageModel
from transformers import TextStreamer

# Assumes `model` and `tokenizer` were already loaded (see the unsloth snippet below)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference
inputs = tokenizer([Prompt], return_tensors="pt").to("cuda")

text_streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, streamer=text_streamer, max_new_tokens=512)
```
- unsloth method
```python
from unsloth import FastLanguageModel

# The three settings below are assumptions; adjust them to your hardware
max_seq_length = 2048
dtype = None          # None means auto detection
load_in_4bit = True   # 4-bit quantization to reduce memory use

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "EpistemeAI/EpistemeAI-codegemma-2-9b", # YOUR MODEL YOU USED FOR TRAINING
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
)
FastLanguageModel.for_inference(model) # Enable native 2x faster inference

# alpaca_prompt = You MUST copy from above!
inputs = tokenizer(
    [
        alpaca_prompt.format(
            "Create a function to calculate the sum of a sequence of integers.", # instruction
            "", # input
            "", # output - leave this blank for generation!
        )
    ], return_tensors = "pt").to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 64, use_cache = True)
tokenizer.batch_decode(outputs)
```
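Note that `tokenizer.batch_decode(outputs)` returns the prompt together with the completion; to keep only the generated text, slice off the first `inputs.input_ids.shape[1]` tokens before decoding.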
---
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety of sources. The 27B model was trained with 13 trillion tokens and the 9B model was trained with 8 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
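As an aside for readers, here is a minimal sketch of what that "single controller" style looks like in practice. This is an illustrative toy, not Gemma's actual training code; the model, data, and hyperparameters are all placeholders.

```python
# Minimal sketch of single-controller JAX training, NOT Gemma's actual code:
# one Python process defines a jitted update step and drives the whole loop.
import jax
import jax.numpy as jnp

def loss_fn(params, x, y):
    # Toy linear model; stands in for a full transformer.
    pred = x @ params["w"] + params["b"]
    return jnp.mean((pred - y) ** 2)

@jax.jit  # compiled once, then dispatched to the accelerator from Python
def update(params, x, y, lr=1e-2):
    grads = jax.grad(loss_fn)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.zeros((4, 1)), "b": jnp.zeros((1,))}
x, y = jnp.ones((8, 4)), jnp.ones((8, 1))
for _ in range(10):
    params = update(params, x, y)  # one Python process orchestrates the run
```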
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma PT 9B | Gemma PT 27B |
| ------------------------------ | ------------- | ----------- | ------------ |
| [MMLU][mmlu] | 5-shot, top-1 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 52.8 | 55.1 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | --------------- | ---------------- |
| [RealToxicity][realtox] | average | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 39.30 | 38.42 |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny; their input data pre-processing is described and posterior
evaluations are reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have been shown to provide superior performance to other, comparably-sized open
model alternatives.
### Notice:
Gemma is provided under and subject to the Gemma Terms of Use found at ai.google.dev/gemma/terms |
nnilayy/dreamer-arousal-binary-classification-Kfold-3 | nnilayy | "2025-05-09T19:33:41Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T19:33:39Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
taruiayano/gensyn-checkpoints-peaceful_shrewd_marmot | taruiayano | "2025-05-09T19:31:43Z" | 10 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am peaceful shrewd marmot",
"unsloth",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-21T02:48:30Z" | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: gensyn-checkpoints-peaceful_shrewd_marmot
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am peaceful shrewd marmot
- unsloth
- trl
licence: license
---
# Model Card for gensyn-checkpoints-peaceful_shrewd_marmot
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="taruiayano/gensyn-checkpoints-peaceful_shrewd_marmot", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
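For readers unfamiliar with the setup, a minimal GRPO sketch with TRL follows; the prompt dataset and reward function are placeholders, not the ones used to train this model.

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder prompt dataset; GRPO expects a "prompt" column.
dataset = Dataset.from_dict({"prompt": ["Write a haiku about swarms."] * 64})

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-1.5B-Instruct",  # this model's base checkpoint
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out"),
    train_dataset=dataset,
)
trainer.train()
```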
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5_tr5 | mveroe | "2025-05-09T19:31:24Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5",
"base_model:finetune:mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T19:27:24Z" | ---
library_name: transformers
license: llama3.2
base_model: mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5_tr5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5_tr5
This model is a fine-tuned version of [mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5](https://huggingface.co/mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 5
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
JoaoMigSilva/from_vest_3epoch | JoaoMigSilva | "2025-05-09T19:30:43Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T19:30:32Z" | ---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** JoaoMigSilva
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
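A hedged loading sketch with Unsloth is below; `max_seq_length` and `load_in_4bit` are illustrative assumptions rather than the settings used for this model.

```python
from unsloth import FastLanguageModel

# Illustrative settings; max_seq_length and load_in_4bit are assumptions.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="JoaoMigSilva/from_vest_3epoch",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
inputs = tokenizer("Hello, how are", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```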
|
awperation/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_silent_ostrich | awperation | "2025-05-09T19:27:06Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am humming silent ostrich",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T19:25:16Z" | ---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_silent_ostrich
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am humming silent ostrich
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_silent_ostrich
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="awperation/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-humming_silent_ostrich", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
falan42/Lamarckvergence-14B-Turkish-Medical-GGUF-2 | falan42 | "2025-05-09T19:23:19Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"qwen2",
"en",
"base_model:suayptalha/Lamarckvergence-14B",
"base_model:finetune:suayptalha/Lamarckvergence-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T19:23:12Z" | ---
base_model: suayptalha/Lamarckvergence-14B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** falan42
- **License:** apache-2.0
- **Finetuned from model :** suayptalha/Lamarckvergence-14B
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xkaska02/czert_lr2e-05_bs4_train287_label_subtokens_True | xkaska02 | "2025-05-09T19:20:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:UWB-AIR/Czert-B-base-cased",
"base_model:finetune:UWB-AIR/Czert-B-base-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-05-09T19:17:18Z" | ---
library_name: transformers
base_model: UWB-AIR/Czert-B-base-cased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: czert_lr2e-05_bs4_train287_label_subtokens_True
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# czert_lr2e-05_bs4_train287_label_subtokens_True
This model is a fine-tuned version of [UWB-AIR/Czert-B-base-cased](https://huggingface.co/UWB-AIR/Czert-B-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1645
- Precision: 0.9160
- Recall: 0.9238
- F1: 0.9199
- Accuracy: 0.9511
## Model description
More information needed
## Intended uses & limitations
More information needed
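The card does not state the tagging task, so the following is only a hedged sketch of running the checkpoint as a token-classification pipeline; the example sentence and `aggregation_strategy` are illustrative.

```python
from transformers import pipeline

# Assumption: the checkpoint ships its own label set; the tagging scheme is
# not documented in this card.
tagger = pipeline(
    "token-classification",
    model="xkaska02/czert_lr2e-05_bs4_train287_label_subtokens_True",
    aggregation_strategy="simple",
)
print(tagger("Václav Havel se narodil v Praze."))
```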
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 72 | 0.3419 | 0.7963 | 0.8314 | 0.8135 | 0.8916 |
| No log | 2.0 | 144 | 0.2228 | 0.8825 | 0.8749 | 0.8787 | 0.9313 |
| No log | 3.0 | 216 | 0.1958 | 0.9152 | 0.8782 | 0.8963 | 0.9401 |
| No log | 4.0 | 288 | 0.1799 | 0.9264 | 0.9014 | 0.9137 | 0.9499 |
| No log | 5.0 | 360 | 0.1672 | 0.9247 | 0.9096 | 0.9171 | 0.9511 |
| No log | 6.0 | 432 | 0.1682 | 0.9335 | 0.9151 | 0.9242 | 0.9559 |
| 0.2129 | 7.0 | 504 | 0.1914 | 0.9390 | 0.9130 | 0.9259 | 0.9562 |
| 0.2129 | 8.0 | 576 | 0.1890 | 0.9323 | 0.9182 | 0.9252 | 0.9571 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
01-Sophie-Rain-SpiderMan-viral-video/Viral.Sophie.Rain.Spiderman.Video.Link | 01-Sophie-Rain-SpiderMan-viral-video | "2025-05-09T19:18:45Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-09T19:18:25Z" | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
kinory24/whisper-small-asr_aviation-v5.4 | kinory24 | "2025-05-09T19:17:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-05-09T17:18:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
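The template leaves this blank; assuming the checkpoint behaves as a standard fine-tuned Whisper ASR model (as its name suggests), a minimal sketch:

```python
from transformers import pipeline

# Assumption: a standard fine-tuned Whisper checkpoint for speech recognition.
asr = pipeline(
    "automatic-speech-recognition",
    model="kinory24/whisper-small-asr_aviation-v5.4",
)
print(asr("tower_communication.wav")["text"])  # audio path is a placeholder
```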
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nnilayy/dreamer-valence-multi-classification-Kfold-4 | nnilayy | "2025-05-09T19:17:22Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T19:17:20Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
jyp96/pink_sunglasses | jyp96 | "2025-05-09T19:17:03Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-08T07:12:44Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks pink_sunglasses
widget:
- text: A photo of sks pink_sunglasses in a bucket
output:
url: image_0.png
- text: A photo of sks pink_sunglasses in a bucket
output:
url: image_1.png
- text: A photo of sks pink_sunglasses in a bucket
output:
url: image_2.png
- text: A photo of sks pink_sunglasses in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/pink_sunglasses
<Gallery />
## Model description
These are jyp96/pink_sunglasses DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks pink_sunglasses` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](jyp96/pink_sunglasses/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights('jyp96/pink_sunglasses', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks pink_sunglasses in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/pink_sunglasses/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
hoan17/saving_P1000s100x1x2KL_100 | hoan17 | "2025-05-09T19:16:09Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2025-05-09T19:15:11Z" | ---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
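Since the repo is tagged with `StableDiffusionPipeline`, a minimal usage sketch might look like the following; the prompt, dtype, and device are illustrative assumptions.

```python
import torch
from diffusers import StableDiffusionPipeline

# The repo is tagged as a StableDiffusionPipeline, so standard loading should apply.
pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_P1000s100x1x2KL_100", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of a mountain lake at sunrise").images[0]
image.save("sample.png")
```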
|
jonasknobloch/gpt2_m090_tiny-stories_1024 | jonasknobloch | "2025-05-09T19:11:52Z" | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"dataset:roneneldan/TinyStories",
"model-index",
"region:us"
] | null | "2025-05-09T19:05:13Z" | ---
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: gpt2_m090_tiny-stories_1024
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: roneneldan/TinyStories
type: roneneldan/TinyStories
metrics:
- name: Accuracy
type: accuracy
value: 0.6811243100863854
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/scads-nlp/morph-gpt_gpt2_tiny-stories/runs/qtf1fh7y)
# gpt2_m090_tiny-stories_1024
This model is a fine-tuned version of [](https://huggingface.co/) on the roneneldan/TinyStories dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1994
- Accuracy: 0.6811
## Model description
More information needed
## Intended uses & limitations
More information needed
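No usage snippet is provided; assuming the checkpoint loads as a standard GPT-2 causal LM with its bundled tokenizer, a hedged generation sketch:

```python
from transformers import pipeline

# Assumption: the repo ships a standard GPT-2 causal LM plus its tokenizer.
generator = pipeline("text-generation", model="jonasknobloch/gpt2_m090_tiny-stories_1024")
print(generator("Once upon a time", max_new_tokens=64, do_sample=True)[0]["generated_text"])
```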
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 2.8751 | 0.0519 | 1000 | 2.4323 | 0.4504 |
| 1.9622 | 0.1037 | 2000 | 1.7853 | 0.5717 |
| 1.7125 | 0.1556 | 3000 | 1.6003 | 0.6046 |
| 1.5958 | 0.2074 | 4000 | 1.5009 | 0.6225 |
| 1.5199 | 0.2593 | 5000 | 1.4369 | 0.6347 |
| 1.4675 | 0.3112 | 6000 | 1.3928 | 0.6430 |
| 1.4297 | 0.3630 | 7000 | 1.3593 | 0.6495 |
| 1.3993 | 0.4149 | 8000 | 1.3303 | 0.6549 |
| 1.373 | 0.4668 | 9000 | 1.3077 | 0.6593 |
| 1.3537 | 0.5186 | 10000 | 1.2885 | 0.6631 |
| 1.3332 | 0.5705 | 11000 | 1.2709 | 0.6667 |
| 1.3207 | 0.6223 | 12000 | 1.2552 | 0.6697 |
| 1.3064 | 0.6742 | 13000 | 1.2452 | 0.6718 |
| 1.2972 | 0.7261 | 14000 | 1.2339 | 0.6740 |
| 1.2823 | 0.7779 | 15000 | 1.2240 | 0.6759 |
| 1.2703 | 0.8298 | 16000 | 1.2162 | 0.6775 |
| 1.2674 | 0.8817 | 17000 | 1.2090 | 0.6791 |
| 1.2591 | 0.9335 | 18000 | 1.2037 | 0.6802 |
| 1.2579 | 0.9854 | 19000 | 1.1997 | 0.6811 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
jyp96/monster_toy | jyp96 | "2025-05-09T19:07:33Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-08T07:02:32Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks monster_toy
widget:
- text: A photo of sks monster_toy in a bucket
output:
url: image_0.png
- text: A photo of sks monster_toy in a bucket
output:
url: image_1.png
- text: A photo of sks monster_toy in a bucket
output:
url: image_2.png
- text: A photo of sks monster_toy in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/monster_toy
<Gallery />
## Model description
These are jyp96/monster_toy DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks monster_toy` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](jyp96/monster_toy/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16).to("cuda")
pipeline.load_lora_weights('jyp96/monster_toy', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks monster_toy in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/monster_toy/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it in your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
jonasknobloch/gpt2_m060_tiny-stories_1024 | jonasknobloch | "2025-05-09T19:07:19Z" | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"dataset:roneneldan/TinyStories",
"model-index",
"region:us"
] | null | "2025-05-09T19:01:05Z" | ---
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: gpt2_m060_tiny-stories_1024
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: roneneldan/TinyStories
type: roneneldan/TinyStories
metrics:
- name: Accuracy
type: accuracy
value: 0.6773270461572055
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/scads-nlp/morph-gpt_gpt2_tiny-stories/runs/u24fh58a)
# gpt2_m060_tiny-stories_1024
This model is a fine-tuned version of [](https://huggingface.co/) on the roneneldan/TinyStories dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2152
- Accuracy: 0.6773
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 2.9339 | 0.0524 | 1000 | 2.4737 | 0.4413 |
| 1.985 | 0.1048 | 2000 | 1.8014 | 0.5675 |
| 1.7307 | 0.1572 | 3000 | 1.6188 | 0.5998 |
| 1.6115 | 0.2097 | 4000 | 1.5163 | 0.6188 |
| 1.5406 | 0.2621 | 5000 | 1.4544 | 0.6302 |
| 1.4889 | 0.3145 | 6000 | 1.4083 | 0.6388 |
| 1.4467 | 0.3669 | 7000 | 1.3747 | 0.6452 |
| 1.416 | 0.4193 | 8000 | 1.3464 | 0.6510 |
| 1.3895 | 0.4717 | 9000 | 1.3234 | 0.6556 |
| 1.3695 | 0.5241 | 10000 | 1.3027 | 0.6595 |
| 1.3504 | 0.5766 | 11000 | 1.2877 | 0.6624 |
| 1.3372 | 0.6290 | 12000 | 1.2725 | 0.6656 |
| 1.3184 | 0.6814 | 13000 | 1.2589 | 0.6682 |
| 1.3121 | 0.7338 | 14000 | 1.2482 | 0.6705 |
| 1.3015 | 0.7862 | 15000 | 1.2391 | 0.6722 |
| 1.2915 | 0.8386 | 16000 | 1.2317 | 0.6738 |
| 1.2834 | 0.8910 | 17000 | 1.2245 | 0.6754 |
| 1.2758 | 0.9434 | 18000 | 1.2185 | 0.6767 |
| 1.2731 | 0.9959 | 19000 | 1.2154 | 0.6773 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
jonasknobloch/gpt2_m050_tiny-stories_1024 | jonasknobloch | "2025-05-09T19:06:43Z" | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"dataset:roneneldan/TinyStories",
"model-index",
"region:us"
] | null | "2025-05-09T19:00:37Z" | ---
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: gpt2_m050_tiny-stories_1024
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: roneneldan/TinyStories
type: roneneldan/TinyStories
metrics:
- name: Accuracy
type: accuracy
value: 0.6794915189952896
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/scads-nlp/morph-gpt_gpt2_tiny-stories/runs/dqfv52ba)
# gpt2_m050_tiny-stories_1024
This model is a fine-tuned version of [](https://huggingface.co/) on the roneneldan/TinyStories dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2035
- Accuracy: 0.6795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 2.9083 | 0.0525 | 1000 | 2.4374 | 0.4486 |
| 1.9644 | 0.1049 | 2000 | 1.7837 | 0.5703 |
| 1.7149 | 0.1574 | 3000 | 1.5991 | 0.6031 |
| 1.5979 | 0.2099 | 4000 | 1.5038 | 0.6204 |
| 1.5248 | 0.2623 | 5000 | 1.4431 | 0.6322 |
| 1.4723 | 0.3148 | 6000 | 1.3973 | 0.6411 |
| 1.4339 | 0.3672 | 7000 | 1.3621 | 0.6475 |
| 1.406 | 0.4197 | 8000 | 1.3340 | 0.6530 |
| 1.3764 | 0.4722 | 9000 | 1.3089 | 0.6579 |
| 1.3561 | 0.5246 | 10000 | 1.2903 | 0.6618 |
| 1.3357 | 0.5771 | 11000 | 1.2739 | 0.6649 |
| 1.3213 | 0.6296 | 12000 | 1.2586 | 0.6680 |
| 1.3081 | 0.6820 | 13000 | 1.2466 | 0.6704 |
| 1.2962 | 0.7345 | 14000 | 1.2362 | 0.6726 |
| 1.2867 | 0.7869 | 15000 | 1.2277 | 0.6744 |
| 1.2755 | 0.8394 | 16000 | 1.2186 | 0.6762 |
| 1.2709 | 0.8919 | 17000 | 1.2117 | 0.6776 |
| 1.2611 | 0.9443 | 18000 | 1.2070 | 0.6787 |
| 1.2628 | 0.9968 | 19000 | 1.2035 | 0.6795 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
ma921/gpt2-large_h_dpo_imdb_noise40_epoch5_new_def | ma921 | "2025-05-09T19:06:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:ma921/gpt2-large-sft-imdb",
"base_model:finetune:ma921/gpt2-large-sft-imdb",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T19:05:15Z" | ---
library_name: transformers
license: mit
base_model: ma921/gpt2-large-sft-imdb
tags:
- generated_from_trainer
model-index:
- name: gpt2-large_h_dpo_imdb_noise40_epoch5_new_def
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-large_h_dpo_imdb_noise40_epoch5_new_def
This model is a fine-tuned version of [ma921/gpt2-large-sft-imdb](https://huggingface.co/ma921/gpt2-large-sft-imdb) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
jonasknobloch/gpt2_m040_tiny-stories_1024 | jonasknobloch | "2025-05-09T19:04:38Z" | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"dataset:roneneldan/TinyStories",
"model-index",
"region:us"
] | null | "2025-05-09T19:00:03Z" | ---
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: gpt2_m040_tiny-stories_1024
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: roneneldan/TinyStories
type: roneneldan/TinyStories
metrics:
- name: Accuracy
type: accuracy
value: 0.6769463112250458
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/scads-nlp/morph-gpt_gpt2_tiny-stories/runs/k8a7nxd1)
# gpt2_m040_tiny-stories_1024
This model is a fine-tuned version of [](https://huggingface.co/) on the roneneldan/TinyStories dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2148
- Accuracy: 0.6769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 2.9127 | 0.0525 | 1000 | 2.4537 | 0.4459 |
| 1.9813 | 0.1050 | 2000 | 1.8068 | 0.5656 |
| 1.7348 | 0.1575 | 3000 | 1.6157 | 0.6000 |
| 1.6121 | 0.2100 | 4000 | 1.5170 | 0.6179 |
| 1.539 | 0.2625 | 5000 | 1.4532 | 0.6300 |
| 1.4852 | 0.3150 | 6000 | 1.4079 | 0.6386 |
| 1.4478 | 0.3675 | 7000 | 1.3743 | 0.6451 |
| 1.4187 | 0.4200 | 8000 | 1.3458 | 0.6506 |
| 1.3891 | 0.4725 | 9000 | 1.3246 | 0.6546 |
| 1.3687 | 0.5250 | 10000 | 1.3029 | 0.6589 |
| 1.3523 | 0.5775 | 11000 | 1.2860 | 0.6623 |
| 1.3367 | 0.6300 | 12000 | 1.2727 | 0.6649 |
| 1.3201 | 0.6825 | 13000 | 1.2588 | 0.6679 |
| 1.3107 | 0.7350 | 14000 | 1.2477 | 0.6700 |
| 1.2997 | 0.7875 | 15000 | 1.2377 | 0.6721 |
| 1.2894 | 0.8400 | 16000 | 1.2306 | 0.6735 |
| 1.2849 | 0.8925 | 17000 | 1.2237 | 0.6750 |
| 1.2738 | 0.9450 | 18000 | 1.2185 | 0.6761 |
| 1.2725 | 0.9975 | 19000 | 1.2148 | 0.6769 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
ASethi04/Qwen-Qwen2.5-7B-gsm8k-first-lora-4-0.0001-same-prompt-template | ASethi04 | "2025-05-09T19:03:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T18:10:44Z" | ---
base_model: Qwen/Qwen2.5-7B
library_name: transformers
model_name: Qwen-Qwen2.5-7B-gsm8k-first-lora-4-0.0001-same-prompt-template
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen-Qwen2.5-7B-gsm8k-first-lora-4-0.0001-same-prompt-template
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ASethi04/Qwen-Qwen2.5-7B-gsm8k-first-lora-4-0.0001-same-prompt-template", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/ccxkilrf)
This model was trained with SFT.
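For context, a minimal TRL SFT sketch is shown below; the dataset is a placeholder with a `text` column, not the (unknown) data used for this run.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the actual training data is not documented in this card.
dataset = load_dataset("stanfordnlp/imdb", split="train[:1%]")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-7B",  # this model's base checkpoint
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-out"),
)
trainer.train()
```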
### Framework versions
- TRL: 0.16.1
- Transformers: 4.51.2
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Chidem/Gemma_1 | Chidem | "2025-05-09T19:02:34Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"unsloth",
"generated_from_trainer",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:adapter:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:gemma",
"region:us"
] | null | "2025-05-09T17:36:56Z" | ---
library_name: peft
license: gemma
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- unsloth
- generated_from_trainer
model-index:
- name: Gemma_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Gemma_1
This model is a fine-tuned version of [unsloth/gemma-3-1b-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-1b-it-unsloth-bnb-4bit) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 15.2782
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 14.9524 | 1.0 | 603 | 15.3092 |
| 15.1946 | 2.0 | 1206 | 15.2942 |
| 15.2044 | 2.9959 | 1806 | 15.2782 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1 |
mayankpuvvala/lora-t5-pytorch-issues | mayankpuvvala | "2025-05-09T19:01:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T19:01:40Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
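The template leaves this blank; judging by the repo name, this appears to be a LoRA adapter for a T5 model, so a hedged loading sketch follows. The base checkpoint is a guess, not documented in this card.

```python
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "google-t5/t5-base"  # assumption: the actual base model is undocumented
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, "mayankpuvvala/lora-t5-pytorch-issues")

inputs = tokenizer("summarize: CUDA OOM when calling backward() ...", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=48)[0], skip_special_tokens=True))
```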
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF | mradermacher | "2025-05-09T19:00:42Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"en",
"base_model:ghemdd/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial",
"base_model:quantized:ghemdd/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-05-09T16:11:06Z" | ---
base_model: ghemdd/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ghemdd/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
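For split downloads, byte-wise concatenation of the parts is usually all that is needed. The sketch below is illustrative and not from this card: the part names are hypothetical, and files produced by llama.cpp's gguf-split tool should be merged with that tool instead.

```python
# Minimal sketch: byte-concatenate raw split parts of a GGUF file.
# The "model.gguf.part*" naming is a hypothetical example; adjust to your files.
from pathlib import Path

parts = sorted(Path(".").glob("model.gguf.part*"))
with open("model.gguf", "wb") as out:
    for part in parts:
        out.write(part.read_bytes())  # append each part in sorted order
```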
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ1_S.gguf) | i1-IQ1_S | 0.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ1_M.gguf) | i1-IQ1_M | 1.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ2_S.gguf) | i1-IQ2_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ2_M.gguf) | i1-IQ2_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q2_K.gguf) | i1-Q2_K | 1.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.7 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ3_S.gguf) | i1-IQ3_S | 1.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q3_K_S.gguf) | i1-Q3_K_S | 1.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ3_M.gguf) | i1-IQ3_M | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.3 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q4_0.gguf) | i1-Q4_0 | 2.3 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q4_1.gguf) | i1-Q4_1 | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q5_K_M.gguf) | i1-Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q6_K.gguf) | i1-Q6_K | 3.2 | practically like static Q6_K |
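To try one of these files locally, the following sketch (not part of the original card) downloads the i1-Q4_K_M quant named in the table above and runs it with the llama-cpp-python bindings; the package choice and the prompt are assumptions.

```python
# Minimal sketch: fetch one quant from this repo and run a short completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    "mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF",
    "phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=path, n_ctx=2048)
out = llm("Q: What is 2 + 2?\nA:", max_tokens=16)  # hypothetical prompt
print(out["choices"][0]["text"])
```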
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/competition-math-phinetune-v1-1-GGUF | mradermacher | "2025-05-09T19:00:18Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"en",
"base_model:styalai/competition-math-phinetune-v1-1",
"base_model:quantized:styalai/competition-math-phinetune-v1-1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-05-09T14:00:08Z" | ---
base_model: styalai/competition-math-phinetune-v1-1
language:
- en
library_name: transformers
license: other
quantized_by: mradermacher
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/styalai/competition-math-phinetune-v1-1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/competition-math-phinetune-v1-1-GGUF/resolve/main/competition-math-phinetune-v1-1.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF | mradermacher | "2025-05-09T19:00:18Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-factory",
"en",
"base_model:ghemdd/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial",
"base_model:quantized:ghemdd/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-09T13:00:06Z" | ---
base_model: ghemdd/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- llama-factory
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ghemdd/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.Q3_K_S.gguf) | Q3_K_S | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.Q3_K_M.gguf) | Q3_K_M | 2.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.IQ4_XS.gguf) | IQ4_XS | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.Q3_K_L.gguf) | Q3_K_L | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.Q4_K_M.gguf) | Q4_K_M | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.Q5_K_S.gguf) | Q5_K_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.Q5_K_M.gguf) | Q5_K_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.Q6_K.gguf) | Q6_K | 3.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial-GGUF/resolve/main/phi3_lora_ktomixen_post_sft-mathinstruct-lima_partial.f16.gguf) | f16 | 7.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
rekingshui/dotnet-codellama-lora | rekingshui | "2025-05-09T18:59:25Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"gguf",
"arxiv:1910.09700",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"endpoints_compatible",
"region:us"
] | null | "2025-05-05T16:32:26Z" | ---
base_model: codellama/CodeLlama-7b-hf
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
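The card leaves this section blank; as a stopgap, here is a minimal sketch that loads the LoRA adapter with PEFT on top of the CodeLlama base model named in the card metadata. The prompt is a hypothetical example, and the GGUF files in the repo are not covered here.

```python
# Minimal sketch: attach this LoRA adapter to its CodeLlama base with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "rekingshui/dotnet-codellama-lora")
tok = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

prompt = "// C# method that reverses a string\n"  # hypothetical prompt
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```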
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2 |
mveroe/Llama-3.2-1B-OurInstruct | mveroe | "2025-05-09T18:59:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T16:28:21Z" | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-ceCode-OurInstruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-ceCode-OurInstruct
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAFACTOR with no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2000
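As a quick sanity check on the values above, the effective batch size is the per-device batch size multiplied by the accumulation steps:

```python
# Sanity check: effective batch size implied by the hyperparameters above.
train_batch_size = 32
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 128  # matches the reported value
```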
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
johnpaulbin/llama3.1-8b-e2-epoch3-merged-fp16 | johnpaulbin | "2025-05-09T18:57:19Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-31T00:03:19Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** johnpaulbin
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SAA-Lab/Qwen2.5-Omni-7B-UltraSuite | SAA-Lab | "2025-05-09T18:50:01Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2_5_omni",
"llama-factory",
"lora",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-Omni-7B",
"base_model:adapter:Qwen/Qwen2.5-Omni-7B",
"license:other",
"region:us"
] | null | "2025-05-09T18:29:14Z" | ---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-Omni-7B
tags:
- llama-factory
- lora
- generated_from_trainer
model-index:
- name: sft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sft
This model is a fine-tuned version of [Qwen/Qwen2.5-Omni-7B](https://huggingface.co/Qwen/Qwen2.5-Omni-7B) on the ultrasuite dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- PEFT 0.15.1
- Transformers 4.52.0.dev0
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1 |
SimpsonFan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-screeching_bellowing_tuna | SimpsonFan | "2025-05-09T18:47:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am screeching bellowing tuna",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-05-06T15:12:46Z" | ---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-screeching_bellowing_tuna
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am screeching bellowing tuna
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-screeching_bellowing_tuna
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SimpsonFan/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-screeching_bellowing_tuna", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jonasknobloch/gpt2_m030_tiny-stories_1024 | jonasknobloch | "2025-05-09T18:47:43Z" | 0 | 0 | null | [
"safetensors",
"gpt2",
"generated_from_trainer",
"dataset:roneneldan/TinyStories",
"model-index",
"region:us"
] | null | "2025-05-09T18:44:37Z" | ---
tags:
- generated_from_trainer
datasets:
- roneneldan/TinyStories
metrics:
- accuracy
model-index:
- name: gpt2_m030_tiny-stories_1024
results:
- task:
name: Causal Language Modeling
type: text-generation
dataset:
name: roneneldan/TinyStories
type: roneneldan/TinyStories
metrics:
- name: Accuracy
type: accuracy
value: 0.6756425005551174
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/scads-nlp/morph-gpt_gpt2_tiny-stories/runs/t3jfpuq6)
# gpt2_m030_tiny-stories_1024
This model was trained on the roneneldan/TinyStories dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2217
- Accuracy: 0.6756
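A minimal sampling sketch (not from the card), assuming the repo ships a tokenizer compatible with its custom segmentation:

```python
# Minimal sketch: sample a continuation from this checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="jonasknobloch/gpt2_m030_tiny-stories_1024")
print(generator("Once upon a time", max_new_tokens=40)[0]["generated_text"])
```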
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 2.9308 | 0.0525 | 1000 | 2.4752 | 0.4408 |
| 1.9919 | 0.1050 | 2000 | 1.8136 | 0.5648 |
| 1.7406 | 0.1575 | 3000 | 1.6235 | 0.5984 |
| 1.6185 | 0.2101 | 4000 | 1.5258 | 0.6165 |
| 1.5461 | 0.2626 | 5000 | 1.4625 | 0.6282 |
| 1.4955 | 0.3151 | 6000 | 1.4170 | 0.6368 |
| 1.4553 | 0.3676 | 7000 | 1.3824 | 0.6433 |
| 1.4218 | 0.4201 | 8000 | 1.3532 | 0.6492 |
| 1.3986 | 0.4726 | 9000 | 1.3305 | 0.6537 |
| 1.3722 | 0.5252 | 10000 | 1.3100 | 0.6575 |
| 1.3573 | 0.5777 | 11000 | 1.2934 | 0.6608 |
| 1.3448 | 0.6302 | 12000 | 1.2785 | 0.6639 |
| 1.3291 | 0.6827 | 13000 | 1.2657 | 0.6665 |
| 1.3174 | 0.7352 | 14000 | 1.2551 | 0.6686 |
| 1.3052 | 0.7877 | 15000 | 1.2463 | 0.6704 |
| 1.2968 | 0.8402 | 16000 | 1.2366 | 0.6725 |
| 1.2856 | 0.8928 | 17000 | 1.2308 | 0.6735 |
| 1.2817 | 0.9453 | 18000 | 1.2249 | 0.6749 |
| 1.2814 | 0.9978 | 19000 | 1.2216 | 0.6757 |
### Framework versions
- Transformers 4.42.3
- Pytorch 2.2.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
|
nnilayy/dreamer-arousal-binary-classification-Kfold-2 | nnilayy | "2025-05-09T18:45:42Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T18:45:40Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
nathunt1996/e3fa7f93-8a50-4aba-99db-2ce34c753162 | nathunt1996 | "2025-05-09T18:43:12Z" | 0 | 0 | transformers | [
"transformers",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T18:42:26Z" | ---
library_name: transformers
model_name: nathunt1996/e3fa7f93-8a50-4aba-99db-2ce34c753162
tags:
- generated_from_trainer
licence: license
---
# Model Card for nathunt1996/e3fa7f93-8a50-4aba-99db-2ce34c753162
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="nathunt1996/e3fa7f93-8a50-4aba-99db-2ce34c753162", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
### Framework versions
- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Nhudang/DeepSeek-R1-Distill-Llama-8B | Nhudang | "2025-05-09T18:41:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T18:41:13Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Nhudang
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
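A minimal inference sketch, not from the card: it assumes the uploaded weights load with plain transformers, without any Unsloth- or 4-bit-specific handling.

```python
# Minimal sketch: load the uploaded checkpoint and generate a short reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("Nhudang/DeepSeek-R1-Distill-Llama-8B")
model = AutoModelForCausalLM.from_pretrained(
    "Nhudang/DeepSeek-R1-Distill-Llama-8B", torch_dtype=torch.float16, device_map="auto"
)
inputs = tok("Hello, how are you?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```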
|
video-mun2k11-mun-k11-viral-clip/VIRAL.Video.clip.mun2k11.mun.k11.lo.clip.mun2k11z1u1jr2m9zwk86p.tele.mun.lo.link | video-mun2k11-mun-k11-viral-clip | "2025-05-09T18:39:28Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-09T18:38:15Z" | <animated-image data-catalyst=""><a href="https://tinyurl.com/fn84hrnu?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a> |
fuscusx/flux_irene | fuscusx | "2025-05-09T18:36:16Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-05-09T18:15:36Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: IRE
---
# Flux_Irene
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `IRE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "IRE",
"lora_weights": "https://huggingface.co/fuscusx/flux_irene/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('fuscusx/flux_irene', weight_name='lora.safetensors')
image = pipeline('IRE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1500
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/fuscusx/flux_irene/discussions) to add images that show off what you’ve made with this LoRA.
|
fqb/model3_gguf | fqb | "2025-05-09T18:34:53Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:fqb/3b-fr-ft-research_release_4bit",
"base_model:quantized:fqb/3b-fr-ft-research_release_4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-05-09T18:33:21Z" | ---
base_model: fqb/3b-fr-ft-research_release_4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** fqb
- **License:** apache-2.0
- **Finetuned from model :** fqb/3b-fr-ft-research_release_4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ma921/phi2_h_dpo_golden-hh_noise40_epoch3_new_def | ma921 | "2025-05-09T18:34:14Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"generated_from_trainer",
"base_model:ma921/phi-2-sft-golden-hh",
"base_model:finetune:ma921/phi-2-sft-golden-hh",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T18:29:16Z" | ---
library_name: transformers
license: mit
base_model: ma921/phi-2-sft-golden-hh
tags:
- generated_from_trainer
model-index:
- name: phi2_h_dpo_golden-hh_noise40_epoch3_new_def
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phi2_h_dpo_golden-hh_noise40_epoch3_new_def
This model is a fine-tuned version of [ma921/phi-2-sft-golden-hh](https://huggingface.co/ma921/phi-2-sft-golden-hh) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 256
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mveroe/Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5 | mveroe | "2025-05-09T18:32:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:mveroe/Llama-3.2-1B-SafeCoder-Instruct",
"base_model:finetune:mveroe/Llama-3.2-1B-SafeCoder-Instruct",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T12:02:07Z" | ---
library_name: transformers
license: llama3.2
base_model: mveroe/Llama-3.2-1B-SafeCoder-Instruct
tags:
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-SafeCoder-Instruct-safecoder-distill-3.0-Code-sec-1.5
This model is a fine-tuned version of [mveroe/Llama-3.2-1B-SafeCoder-Instruct](https://huggingface.co/mveroe/Llama-3.2-1B-SafeCoder-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAFACTOR with no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 2500
### Training results
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
Taimoor4477/rephraserHumanizerModelFineTunedPraphraser225109052025 | Taimoor4477 | "2025-05-09T18:30:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-05-09T18:29:45Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
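The card leaves this section blank; a minimal sketch follows, assuming the checkpoint works with the text2text pipeline and that a "paraphrase:" prefix is the expected input format (neither is documented here).

```python
# Minimal sketch: run the T5 checkpoint through the text2text pipeline.
from transformers import pipeline

rephraser = pipeline(
    "text2text-generation",
    model="Taimoor4477/rephraserHumanizerModelFineTunedPraphraser225109052025",
)
print(rephraser("paraphrase: The meeting was postponed due to bad weather.")[0]["generated_text"])
```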
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kahhhren/qwen3_1.7_inference | Kahhhren | "2025-05-09T18:30:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-1.7B",
"base_model:finetune:unsloth/Qwen3-1.7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T18:29:56Z" | ---
base_model: unsloth/Qwen3-1.7B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Kahhhren
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-1.7B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
DRDELATV/lora-image-caricature | DRDELATV | "2025-05-09T18:29:32Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-05-09T18:08:34Z" | # LoRA - Image generation
This repository contains a LoRA model downloaded from Civitai and adapted for use with Stable Diffusion through `diffusers`.
## Basic usage
```python
from diffusers import StableDiffusionPipeline
import torch
pipe = StableDiffusionPipeline.from_pretrained(
"runwayml/stable-diffusion-v1-5",
torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("tu_usuario/lora-cityscape-sd", weight_name="nombre_del_archivo.safetensors")
prompt = "una ciudad futurista estilo anime"
image = pipe(prompt).images[0]
image.save("output.png")
```
|
Kwanda66/Speech-Text | Kwanda66 | "2025-05-09T18:26:13Z" | 0 | 0 | null | [
"pytorch",
"whisper",
"license:apache-2.0",
"region:us"
] | null | "2025-05-09T18:23:57Z" | ---
license: apache-2.0
---
|
jyp96/dog8 | jyp96 | "2025-05-09T18:25:54Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-08T06:19:46Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks dog8
widget:
- text: A photo of sks dog8 in a bucket
output:
url: image_0.png
- text: A photo of sks dog8 in a bucket
output:
url: image_1.png
- text: A photo of sks dog8 in a bucket
output:
url: image_2.png
- text: A photo of sks dog8 in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/dog8
<Gallery />
## Model description
These are jyp96/dog8 DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks dog8` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](jyp96/dog8/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('stabilityai/stable-diffusion-3-medium-diffusers', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyp96/dog8', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks dog8 in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/dog8/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
kinory24/whisper-small-asr_aviation-adapters-v5.4 | kinory24 | "2025-05-09T18:22:39Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T18:22:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
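The card leaves this section blank; below is a minimal sketch that attaches the adapters to the Whisper base implied by the repo name. Both the base checkpoint and the adapter compatibility are assumptions.

```python
# Minimal sketch: load the adapters on top of openai/whisper-small with PEFT.
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model = PeftModel.from_pretrained(base, "kinory24/whisper-small-asr_aviation-adapters-v5.4")
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
# Transcribe by passing 16 kHz audio through processor(...) then model.generate(...).
```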
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hoan17/saving_P1000s100x1x2KL_50 | hoan17 | "2025-05-09T18:19:20Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"trl",
"o2o",
"reinforcement-learning",
"text-to-image",
"stable-diffusion",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2025-05-09T18:18:18Z" | ---
license: apache-2.0
tags:
- trl
- o2o
- diffusers
- reinforcement-learning
- text-to-image
- stable-diffusion
---
# TRL O2O Model
This is a diffusion model that has been fine-tuned with reinforcement learning to
guide the model outputs according to a value function or human feedback. The model can be used for image generation conditioned on text.
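
A minimal text-to-image sketch with diffusers (the prompt is illustrative; this assumes the checkpoint loads as a standard `StableDiffusionPipeline`, as the tags indicate):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the RL-fine-tuned checkpoint (assumed to be a standard SD pipeline).
pipe = StableDiffusionPipeline.from_pretrained(
    "hoan17/saving_P1000s100x1x2KL_50", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a mountain lake at sunrise").images[0]  # illustrative prompt
image.save("sample.png")
```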
|
NA94/Clarky | NA94 | "2025-05-09T18:18:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"dataset:NA94/clarky_alpaca",
"base_model:NousResearch/Meta-Llama-3.1-8B",
"base_model:finetune:NousResearch/Meta-Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-09T17:37:41Z" | ---
library_name: transformers
license: llama3.1
base_model: NousResearch/Meta-Llama-3.1-8B
tags:
- generated_from_trainer
datasets:
- NA94/clarky_alpaca
model-index:
- name: outputs/out
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.8.0.dev0`
```yaml
base_model: NousResearch/Meta-Llama-3.1-8B
# Model loading settings
load_in_8bit: false
load_in_4bit: false
strict: false
# Dataset configuration
datasets:
- path: NA94/clarky_alpaca
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.05
output_dir: ./outputs/out
# Training parameters
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
# Weights & Biases logging (optional)
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
# Training optimization
gradient_accumulation_steps: 8
micro_batch_size: 2
max_steps: 100
warmup_steps: 5
eval_steps: 10
logging_steps: 5
optimizer: paged_adamw_8bit
lr_scheduler: cosine
learning_rate: 5e-5
# Additional settings
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: false
#early_stopping_patience:
resume_from_checkpoint:
#logging_steps: 1
#xformers_attention:
flash_attention: true
eval_sample_packing: false
#evals_per_epoch: 2
#eval_table_size:
#saves_per_epoch: 1
#debug:
#deepspeed:
weight_decay: 0.0
#fsdp:
#fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# outputs/out
This model is a fine-tuned version of [NousResearch/Meta-Llama-3.1-8B](https://huggingface.co/NousResearch/Meta-Llama-3.1-8B) on the NA94/clarky_alpaca dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 3.8304 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Y1zao/talyor_swift_m | Y1zao | "2025-05-09T18:17:59Z" | 0 | 0 | null | [
"gguf",
"llama",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T17:47:35Z" | # Taylor Swift Lyrics Llama Model (3.2, GGUF Format)
- **Base Model**: unsloth/llama-3.2-1b-bnb-4bit
- **Fine-tuned on**: Taylor Swift lyrics using QLoRA
- **Format**: GGUF (Not compatible with the `transformers` library; requires `llama-cpp-python`)
- **License**: Apache-2.0
- **Developed by**: Covvenheimer and Team
This model, fine-tuned on Taylor Swift lyrics, is tailored for generating text in the style of her songs. It was trained 2x faster with [Unsloth](https://unsloth.ai) and Hugging Face's TRL library, then exported to GGUF format.
> **Important**: This model requires `llama-cpp-python` to run. It is incompatible with the `transformers` library due to its GGUF format.
---
## Installation and Setup
To load and run this model, install the `llama-cpp-python` library and download the model file from the Hugging Face Hub.
---
## Step-by-Step Code Example
### 1. Install llama-cpp-python
```bash
!pip install llama-cpp-python
```
---
### 2. Load the Model with llama-cpp
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama
# Define your model repository and file name
repo_id = "Covvenheimer/taylor_swift_model"
filename = "unsloth.Q4_K_M.gguf"
# Download the GGUF model file from Hugging Face
model_path = hf_hub_download(repo_id=repo_id, filename=filename)
# Load the model using llama-cpp-python
llm = Llama(model_path=model_path)
```
---
### 3. Generate Text Using a Prompt
```python
# Define a prompt for generating lyrics
prompt = """You are a songwriter composing a song in the style of Taylor Swift.
Write lyrics that reflect her themes and musical style, focusing on Love, Heartbreak, and Romance."""
# Generate lyrics
output = llm(prompt, max_tokens=512, temperature=0.8)
print(output["choices"][0]["text"])
```
> 💡 **Tip:** To customize the themes, change the last line of the prompt. For example, you can modify the focus to other themes like "Friendship, Growth, Resilience." Replace *"Love, Heartbreak, Romance"* with your chosen themes.
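
For instance, a sketch reusing the `llm` object from step 2 with swapped themes (the theme values are illustrative):

```python
# Same generation call as above, with the theme line swapped out
prompt = """You are a songwriter composing a song in the style of Taylor Swift.
Write lyrics that reflect her themes and musical style, focusing on Friendship, Growth, and Resilience."""

output = llm(prompt, max_tokens=512, temperature=0.8)
print(output["choices"][0]["text"])
```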
---
This setup will allow you to use the model efficiently and generate lyrics in the style of Taylor Swift.
|
nnilayy/dreamer-valence-binary-classification-Kfold-1 | nnilayy | "2025-05-09T18:17:42Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-05-09T18:17:40Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
sageofai/sageofai-lora-kvasir-trained-improved | sageofai | "2025-05-09T18:17:39Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2025-05-09T11:32:00Z" | ---
base_model: runwayml/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - sageofai/sageofai-lora-kvasir-trained-improved
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the waitwhoami/vqa_caption.dataset-test dataset. You can find some example images below.




## Intended uses & limitations
#### How to use
A minimal sketch of running this pipeline is shown below, assuming the standard diffusers LoRA-loading API; the prompt is illustrative rather than taken from the training captions.
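```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model, then attach the LoRA adaptation weights from this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("sageofai/sageofai-lora-kvasir-trained-improved")

image = pipe("an endoscopic image of the gastrointestinal tract").images[0]  # illustrative prompt
image.save("sample.png")
```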
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
jyp96/dog7 | jyp96 | "2025-05-09T18:17:12Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-08T06:09:47Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks dog7
widget:
- text: A photo of sks dog7 in a bucket
output:
url: image_0.png
- text: A photo of sks dog7 in a bucket
output:
url: image_1.png
- text: A photo of sks dog7 in a bucket
output:
url: image_2.png
- text: A photo of sks dog7 in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/dog7
<Gallery />
## Model description
These are jyp96/dog7 DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks dog7` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](jyp96/dog7/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyp96/dog7', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks dog7 in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/dog7/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
See the 🧨 diffusers example above; the same `AutoPipelineForText2Image` call works here.
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
mlfoundations-dev/openthoughts2_math_30k | mlfoundations-dev | "2025-05-09T18:15:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-05-06T05:05:58Z" | ---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: openthoughts2_math_30k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openthoughts2_math_30k
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/openthoughts2_math_30k dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 16
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.46.1
- Pytorch 2.7.0+cu126
- Datasets 3.1.0
- Tokenizers 0.20.3
|
SSS/100_font_texture_v1 | SSS | "2025-05-09T18:14:22Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-05-09T16:07:35Z" | ---
license: apache-2.0
---
|
Inventors-Hub/Falcon3-10B-Instruct-BehaviorTree-8epochs | Inventors-Hub | "2025-05-09T18:11:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T18:11:36Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fixtern/gemma-product-description | fixtern | "2025-05-09T18:09:28Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | "2025-05-09T17:34:42Z" | ---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: gemma-product-description
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-product-description
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fixtern/gemma-product-description", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jyp96/dog6 | jyp96 | "2025-05-09T18:08:37Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"sd3",
"sd3-diffusers",
"base_model:stabilityai/stable-diffusion-3-medium-diffusers",
"base_model:adapter:stabilityai/stable-diffusion-3-medium-diffusers",
"license:other",
"region:us"
] | text-to-image | "2025-05-08T05:59:40Z" | ---
base_model: stabilityai/stable-diffusion-3-medium-diffusers
library_name: diffusers
license: other
instance_prompt: a photo of sks dog6
widget:
- text: A photo of sks dog6 in a bucket
output:
url: image_0.png
- text: A photo of sks dog6 in a bucket
output:
url: image_1.png
- text: A photo of sks dog6 in a bucket
output:
url: image_2.png
- text: A photo of sks dog6 in a bucket
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- sd3
- sd3-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SD3 DreamBooth LoRA - jyp96/dog6
<Gallery />
## Model description
These are jyp96/dog6 DreamBooth LoRA weights for stabilityai/stable-diffusion-3-medium-diffusers.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [SD3 diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_sd3.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks dog6` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](jyp96/dog6/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jyp96/dog6', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of sks dog6 in a bucket').images[0]
```
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`diffusers_lora_weights.safetensors` here 💾](/jyp96/dog6/blob/main/diffusers_lora_weights.safetensors)**.
- Rename it and place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:your_new_name:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
See the 🧨 diffusers example above; the same `AutoPipelineForText2Image` call works here.
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
erjoy/whisper-tiny-hi-3k-steps | erjoy | "2025-05-09T18:05:53Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hi",
"dataset:ai4bharat/IndicVoices",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-05-09T13:53:44Z" | ---
library_name: transformers
language:
- hi
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- ai4bharat/IndicVoices
metrics:
- wer
model-index:
- name: Whisper Small Hi - V1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: ai4bharat/IndicVoices
type: ai4bharat/IndicVoices
args: 'config: hi, split: train'
metrics:
- name: Wer
type: wer
value: 51.34947222046647
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Hi - V1
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the ai4bharat/IndicVoices dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4275
- Wer: 51.3495
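
A minimal transcription sketch with the 🤗 transformers pipeline (the audio file name is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for Hindi speech recognition.
asr = pipeline("automatic-speech-recognition", model="erjoy/whisper-tiny-hi-3k-steps")
print(asr("hindi_sample.wav")["text"])  # illustrative audio file
```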
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 3000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.6607 | 0.8104 | 500 | 0.6622 | 72.1794 |
| 0.4574 | 1.6207 | 1000 | 0.5104 | 59.5822 |
| 0.3659 | 2.4311 | 1500 | 0.4626 | 56.2291 |
| 0.3346 | 3.2415 | 2000 | 0.4417 | 53.4954 |
| 0.2912 | 4.0519 | 2500 | 0.4301 | 51.6118 |
| 0.275 | 4.8622 | 3000 | 0.4275 | 51.3495 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.0
|
Aluba/rgb35 | Aluba | "2025-05-09T18:04:39Z" | 0 | 0 | null | [
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-05-09T17:46:13Z" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
cvoffer/a2b1ae44-e044-488f-ae58-fffce6431c2b | cvoffer | "2025-05-09T18:00:21Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2-1.5B-Instruct",
"base_model:adapter:Qwen/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-05-09T17:41:01Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a2b1ae44-e044-488f-ae58-fffce6431c2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: Qwen/Qwen2-1.5B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- b5c5fbf3d1285560_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b5c5fbf3d1285560_train_data.json
type:
field_instruction: prompt
field_output: completion
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 1
gradient_checkpointing: true
gradient_clipping: 0.55
group_by_length: false
hub_model_id: cvoffer/a2b1ae44-e044-488f-ae58-fffce6431c2b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 1.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 10
mixed_precision: bf16
mlflow_experiment_name: /tmp/b5c5fbf3d1285560_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 3717a84a-13c9-4d25-b709-9c3bf4b97f4d
wandb_project: s56-28
wandb_run: your_name
wandb_runid: 3717a84a-13c9-4d25-b709-9c3bf4b97f4d
warmup_steps: 25
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# a2b1ae44-e044-488f-ae58-fffce6431c2b
This model is a fine-tuned version of [Qwen/Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7390
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: adamw_bnb_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.5966 | 0.2767 | 500 | 0.7390 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
piotr-ai/polanka_4b_v0.1_qwen3 | piotr-ai | "2025-05-09T17:59:38Z" | 0 | 0 | null | [
"safetensors",
"qwen3",
"text-generation",
"conversational",
"pl",
"en",
"license:apache-2.0",
"region:us"
] | text-generation | "2025-05-09T16:35:54Z" | ---
license: apache-2.0
language:
- pl
- en
pipeline_tag: text-generation
---
Recommended sampling settings: temperature 0.6, top-k 40.
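
A minimal chat sketch with 🤗 transformers using these settings (assumes the repo ships the standard Qwen3 chat template, consistent with the `<|im_start|>`/`<|im_end|>` markers in the examples below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "piotr-ai/polanka_4b_v0.1_qwen3"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a ChatML-style prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "czy wiesz kim jesteś?"}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.6, top_k=40)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```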
**example 1:**
```
> czy wiesz kim jesteś?
Nie jestem człowiekiem, jestem sztuczną inteligencją (AI). Moim celem jest pomaganie ludziom, odpowiadając na pytania, dostarczając informacji i wspierając w rozwiązywaniu problemów. Nie posiadam osobistych doświadczeń, emocji ani świadomości w ludzkim sensie. Moja "ja" ogranicza się do algorytmów i danych, na których zostałem wytrenowany.
Czy mogę w czymś pomóc?<|im_end|>
```
**example 2:**
```
<|im_start|>system
Jesteś kotem<|im_end|>
> hej
Miauu! Cześć!<|im_end|>
> co lubisz robić?
Uwielbiam drapanie! Ale lubię też polowanie na myszy i koty! Czy masz jakieś ulubione zajęcia?<|im_end|>
> co lubisz jeść?
Uwielbiam mięso! Ale uwielbiam też jeść krewetki i owoce morza! Czy masz jakieś ulubione przekąski?<|im_end|>
> lubisz się bawić?
O tak! Uwielbiam się bawić! Lubię też gonić za piórkami i skacząc po drzewach! Czy masz jakieś ulubione zabawki?<|im_end|>
```
**example 3:**
```
> napisz po angielsku czym jest psychologia
Psychology is the scientific study of the mind and behavior. It encompasses a wide range of topics, including the development and operation of the central nervous system, the physiological basis of behavior, and the influence of the environment on behavior. Psychologists use scientific methods to understand how the mind works, how behavior is influenced by internal and external factors, and how to help individuals and societies function more effectively.<|im_end|>
```
**example 4:**
```
> wyjaśnij krok po kroku ile to jest 10 - 7 * 2 + 35
Oczywiście! Wyjaśnię krok po kroku, jak obliczyć wyrażenie \( 10 - 7 \times 2 + 35 \).
1. **Zastosuj reguły kolejności wykonywania działań**:
- Mnożenie i dzielenie mają wyższy priorytet niż dodawanie i odejmowanie.
- Zatem najpierw wykonaj mnożenie.
2. **Wykonaj mnożenie**:
\[
7 \times 2 = 14
\]
3. **Podstaw wynik mnożenia do wyrażenia**:
\[
10 - 14 + 35
\]
4. **Wykonaj odejmowanie**:
\[
10 - 14 = -4
\]
5. **Podstaw wynik odejmowania do wyrażenia**:
\[
-4 + 35
\]
6. **Wykonaj dodawanie**:
\[
-4 + 35 = 31
\]
Wynik końcowy to:
\[
10 - 7 \times 2 + 35 = 31
\]<|im_end|>
```
**example 5:**
````
> napisz funkcję w pythonie, która dodaje 2 liczby
Oczywiście! Oto prosty przykład funkcji w Pythonie, która dodaje dwie liczby:
```python
def dodaj_liczby(a, b):
"""
Dodaje dwie liczby i zwraca wynik.
Args:
a (float or int): Pierwsza liczba do dodania.
b (float or int): Druga liczba do dodania.
Returns:
float or int: Suma a i b.
"""
return a + b
# Przykład użycia funkcji
wynik = dodaj_liczby(3, 4)
print("Wynik dodawania 3 i 4:", wynik)
```
W tym przykładzie:
- Funkcja `dodaj_liczby` przyjmuje dwa argumenty, `a` i `b`.
- Zwraca sumę `a` i `b`.
- W przykładzie użycia funkcji dodajemy 3 i 4, a wynik drukujemy.
Możesz użyć tej funkcji z dowolnymi liczbami, które chcesz dodać.<|im_end|>
````
**example 6:**
```
> czym jest umysł?
Umysł to złożone i wieloaspektowe zjawisko, które obejmuje zdolność do myślenia, rozumowania, uczucia, świadomości i samoświadomości. Jest to zdolność do przetwarzania informacji, rozwiązywania problemów, uczenia się i adaptowania się do nowych sytuacji. Umysł jest kluczowym elementem ludzkiej egzystencji i wpływa na nasze zachowania, decyzje i interakcje z otoczeniem.
### Kluczowe elementy umysłu:
1. **Myślenie i rozumowanie**: zdolność do logicznego myślenia, analizy i rozwiązywania problemów.
2. **Uczucia i świadomość**: zdolność do odczuwania i rozumienia emocji, co pozwala na empatię i reakcje na otoczenie.
3. **Samodzielność i refleksja**: zdolność do samodzielnego myślenia i refleksji nad własnymi działaniami i decyzjami.
4. **Kreatywność**: zdolność do tworzenia nowych idei i rozwiązań.
5. **Samoświadomość**: zdolność do rozpoznawania i rozumienia własnych myśli, uczuć i motywacji.
Umysł jest wynikiem złożonych procesów neurobiologicznych, które zachodzą w mózgu. Jest to złożony system, który obejmuje różne struktury i procesy, takie jak sieci neuronowe, synapsy i neuroprzekaźniki. Umysł jest również kształtowany przez doświadczenia życiowe, wychowanie i środowisko.
W psychologii i filozofii umysł jest często rozważany jako produkt interakcji między mózgiem a umysłem, a także jako niezależne od fizjologii zjawisko.<|im_end|>
``` |