Dataset schema (one row per model; the fields below are separated by `|` and followed by the raw model card text):

| Column | Type | Range |
|---|---|---|
| modelId | string | 5–139 chars |
| author | string | 2–42 chars |
| last_modified | timestamp[us, tz=UTC] | 2020-02-15 11:33:14 – 2025-07-30 00:44:18 |
| downloads | int64 | 0 – 223M |
| likes | int64 | 0 – 11.7k |
| library_name | string | 536 distinct values |
| tags | list | 1 – 4.05k items |
| pipeline_tag | string | 55 distinct values |
| createdAt | timestamp[us, tz=UTC] | 2022-03-02 23:29:04 – 2025-07-30 00:43:43 |
| card | string | 11 – 1.01M chars |
mkorada/opus-mt-en-is-finetuned-v4 | mkorada | 2025-06-22T02:39:08Z | 0 | 0 | transformers | ["transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2025-06-22T02:38:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
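The card leaves this section unfilled. As a minimal sketch inferred only from the repository name and its `marian`/`text2text-generation` tags (the English-to-Icelandic direction is an assumption based on the `en-is` suffix, not author documentation):
```python
from transformers import pipeline

# Assumption: standard MarianMT seq2seq usage implied by the repo's tags.
translator = pipeline("text2text-generation", model="mkorada/opus-mt-en-is-finetuned-v4")
print(translator("The weather is nice today.")[0]["generated_text"])
```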
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.01_2993 | luckeciano | 2025-06-22T02:36:38Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "trl", "grpo", "conversational", "dataset:DigitalLearningGmbH/MATH-lighteval", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-Math-7B", "base_model:finetune:Qwen/Qwen2.5-Math-7B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-21T21:05:42Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.01_2993
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.01_2993
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.01_2993", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/aqfmldzm)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Zillis/2025_PAAMA_MODEL_15_V2_D1_model_west | Zillis | 2025-06-22T02:28:56Z | 0 | 0 | null | ["license:unknown", "region:us"] | null | 2025-06-21T08:10:35Z |
---
license: unknown
---
2025_PAAMA_MODEL_15_NO_3_D2_ANATOMY_DSRL.safetensors











































2025_PAAMA_MODEL_15_MODEL.LAB_V1.safetensors











2025_PAAMA_MODEL_15_MODEL.LAB-3차\2025_PAAMA_MODEL_15_NO_1_D2_ANATOMY.safetensors






































































































































































|
myeongkyunkang/medcap-textplus-pmcoa-patients-llama3 | myeongkyunkang | 2025-06-22T02:28:08Z | 0 | 0 | null | ["medical", "image-to-text", "en", "region:us"] | image-to-text | 2024-07-04T04:20:20Z |
---
language:
- en
pipeline_tag: image-to-text
tags:
- medical
---
# medcap-pmcoa
The vision encoder is fine-tuned from [BiomedCLIP](https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224) using [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
For more information, please refer to [medcap](https://github.com/myeongkyunkang/medcap) and [README_FINETUNE.md](https://github.com/myeongkyunkang/medcap_private/blob/main/README_FINETUNE.md).
The model is deprecated.
|
myeongkyunkang/medcap-textplus-pmcoa-patients-llama3.1 | myeongkyunkang | 2025-06-22T02:23:31Z | 0 | 1 | null | ["medical", "image-to-text", "en", "region:us"] | image-to-text | 2024-08-19T05:35:59Z |
---
language:
- en
pipeline_tag: image-to-text
tags:
- medical
---
# medcap-pmcoa
The vision encoder is fine-tuned from [medcap-textplus-pmcoa-patients-llama3](https://huggingface.co/myeongkyunkang/medcap-textplus-pmcoa-patients-llama3) using [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct).
For more information, please refer to [medcap](https://github.com/myeongkyunkang/medcap) and [README_FINETUNE.md](https://github.com/myeongkyunkang/medcap_private/blob/main/README_FINETUNE.md).
|
nvlan1902/lanllm-chatbot | nvlan1902 | 2025-06-22T02:23:26Z | 0 | 0 | transformers | ["transformers", "safetensors", "falcon", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-22T02:08:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
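The card leaves this section unfilled. A minimal sketch inferred only from the repo's `falcon`, `text-generation`, and `custom_code` tags (the `custom_code` tag is what motivates `trust_remote_code=True`); none of this is author-documented:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: standard causal-LM loading; "custom_code" in the tags implies
# the repo ships custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained("nvlan1902/lanllm-chatbot", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("nvlan1902/lanllm-chatbot", trust_remote_code=True)
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```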
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
GeerBox/q-Taxi-v3-test | GeerBox | 2025-06-22T02:18:10Z | 0 | 0 | null | ["Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us"] | reinforcement-learning | 2025-06-22T02:18:07Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-test
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the helper from the Hugging Face Deep RL course (it downloads and unpickles the saved Q-table dict)
model = load_from_hub(repo_id="GeerBox/q-Taxi-v3-test", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
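The card stops after loading the agent; here is a minimal rollout sketch, assuming the pickled dict stores the table under a `"qtable"` key as in the Deep RL course convention:
```python
import numpy as np

# Assumption: model["qtable"] holds the trained Q-table (Deep RL course convention).
state = env.reset()
done = False
while not done:
    action = np.argmax(model["qtable"][state])  # act greedily from the Q-table
    state, reward, done, info = env.step(action)
    env.render()
env.close()
```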
|
ahirking/Smart | ahirking | 2025-06-22T02:17:40Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-06-22T02:17:40Z |
---
license: apache-2.0
---
|
ariangoog/gemma-text-to-sql | ariangoog | 2025-06-22T02:14:14Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-pt", "base_model:finetune:google/gemma-3-1b-pt", "endpoints_compatible", "region:us"] | null | 2025-06-13T19:32:26Z |
---
base_model: google/gemma-3-1b-pt
library_name: transformers
model_name: gemma-text-to-sql
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-text-to-sql
This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ariangoog/gemma-text-to-sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mavleo96/rl-bots | mavleo96 | 2025-06-22T02:02:18Z | 0 | 0 | stable-baselines3 | ["stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us"] | reinforcement-learning | 2025-06-22T01:45:03Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.43 +/- 18.65
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Load the trained agent from the Hub and run a few episodes:
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub
import gym
# Define model repo_id and filename
repo_id = "mavleo96/rl-bots" # Change this to the actual repo if different
filename = "ppo-LunarLander-v2.zip"
# Load the model from the Hugging Face Hub
model = load_from_hub(repo_id, filename, model_class=PPO)
# Create the environment
env = gym.make("LunarLander-v2")
# Run a few episodes
obs = env.reset()
for _ in range(1000):
action, _states = model.predict(obs, deterministic=True)
obs, reward, done, info = env.step(action)
env.render()
if done:
obs = env.reset()
env.close()
```
|
mkorada/opus-mt-en-is-finetuned-v3 | mkorada | 2025-06-22T02:02:05Z | 0 | 0 | transformers | ["transformers", "safetensors", "marian", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text2text-generation | 2025-06-22T02:01:51Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Tenetnets/apexgenex | Tenetnets | 2025-06-22T02:02:01Z | 0 | 0 | diffusers | ["diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us"] | text-to-image | 2025-06-22T01:26:36Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: apexgenex
---
# Apexgenex
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `apexgenex` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "apexgenex",
"lora_weights": "https://huggingface.co/Tenetnets/apexgenex/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Tenetnets/apexgenex', weight_name='lora.safetensors')
image = pipeline('apexgenex').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
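As a quick illustration of the fusing mentioned above (a sketch using diffusers' `fuse_lora`; the 0.8 scale is an arbitrary example value, not a recommendation from this card):
```py
# Fuse the loaded LoRA into the base weights at reduced strength,
# trading adapter flexibility for slightly faster inference.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('apexgenex').images[0]
```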
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Tenetnets/apexgenex/discussions) to add images that show off what you’ve made with this LoRA.
|
dtadpole/KernelCoder-32B-AWQ_20250621-170337 | dtadpole | 2025-06-22T01:50:48Z | 0 | 0 | peft | ["peft", "safetensors", "generated_from_trainer", "triton-ag", "unsloth", "lora", "en", "base_model:Qwen/Qwen3-32B-AWQ", "base_model:adapter:Qwen/Qwen3-32B-AWQ", "license:apache-2.0", "region:us"] | null | 2025-06-22T01:50:08Z |
---
library_name: peft
base_model: Qwen/Qwen3-32B-AWQ
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- triton-ag
- unsloth
- lora
---
# dtadpole/KernelCoder-32B-AWQ_20250621-170337
This model is a fine-tuned version of [Qwen/Qwen3-32B-AWQ](https://huggingface.co/Qwen/Qwen3-32B-AWQ) using Unsloth and LoRA.
## Model Details
- **Base Model:** Qwen/Qwen3-32B-AWQ
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **Max Sequence Length:** 32768
- **Training Examples:** 517
- **LoRA Rank:** 64
- **LoRA Alpha:** 64
## Training Configuration
- **Epochs:** 1
- **Learning Rate:** 3e-05
- **Batch Size:** 1
- **Gradient Accumulation Steps:** 1
- **Best Loss:** 0.1518
## Usage
```python
from unsloth import FastLanguageModel
import torch
# Load model
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="dtadpole/KernelCoder-32B-AWQ_20250621-170337",
max_seq_length=32768,
dtype=None,
load_in_4bit=True,
)
# Enable inference mode
FastLanguageModel.for_inference(model)
# Format your prompt
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Your question here"}
]
formatted_prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Generate
inputs = tokenizer(formatted_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Data
This model was fine-tuned on processed conversation experiences for improved performance on specific tasks.
## Limitations
- This is a LoRA adapter that requires the base model to function
- Performance may vary depending on the specific use case
- The model inherits any limitations from the base model
## Framework Versions
- Unsloth: 2025.6.1
- Transformers: 4.52.4
- PyTorch: 2.7.0
- PEFT: Latest
|
cwywilson/Turner | cwywilson | 2025-06-22T01:48:30Z | 0 | 0 | segmentation-models-pytorch | ["segmentation-models-pytorch", "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "semantic-segmentation", "pytorch", "image-segmentation", "license:mit", "region:us"] | image-segmentation | 2025-06-22T01:06:02Z |
---
library_name: segmentation-models-pytorch
license: mit
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
languages:
- python
---
# FPN Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
```python
import segmentation_models_pytorch as smp
model = smp.from_pretrained("cwywilson/Turner")  # or a local save directory
```
## Model init parameters
```python
model_init_params = {
"encoder_name": "resnet34",
"encoder_depth": 5,
"encoder_weights": "imagenet",
"decoder_pyramid_channels": 256,
"decoder_segmentation_channels": 128,
"decoder_merge_policy": "add",
"decoder_dropout": 0.2,
"decoder_interpolation": "nearest",
"in_channels": 12,
"classes": 5,
"activation": None,
"upsampling": 4,
"aux_params": None
}
```
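For reference, these parameters map directly onto the FPN constructor; a sketch (assuming the installed segmentation-models-pytorch version accepts this exact parameter set):
```python
import segmentation_models_pytorch as smp

# Rebuild the architecture from the init parameters above; weights are
# randomly initialized here, so use from_pretrained for the trained checkpoint.
model = smp.FPN(**model_init_params)
```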
## Model metrics
```json
[
{
"test_per_image_iou": 0.9929056763648987,
"test_dataset_iou": 0.0
}
]
```
## Dataset
Dataset name: Wilson
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin).
|
mradermacher/Acc_Qwen_4B-i1-GGUF | mradermacher | 2025-06-22T01:46:30Z | 420 | 0 | transformers | ["transformers", "gguf", "en", "dataset:Kearm/Acc_Qwen_4B_Dataset", "base_model:RESMP-DEV/Accessible_Qwen_4B", "base_model:quantized:RESMP-DEV/Accessible_Qwen_4B", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix"] | null | 2025-06-02T09:33:35Z |
---
base_model: RESMP-DEV/Accessible_Qwen_4B
datasets:
- Kearm/Acc_Qwen_4B_Dataset
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
Weighted/imatrix quants of https://huggingface.co/RESMP-DEV/Accessible_Qwen_4B.
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Acc_Qwen_4B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
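For example, pulling one of the quants below straight from the Hub (a sketch using llama.cpp's `--hf-repo` integration; the Q4_K_M file name is taken from the table that follows):
```bash
llama-cli --hf-repo mradermacher/Acc_Qwen_4B-i1-GGUF \
  --hf-file Acc_Qwen_4B.i1-Q4_K_M.gguf -p "Hello"
```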
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ1_S.gguf) | i1-IQ1_S | 1.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ1_M.gguf) | i1-IQ1_M | 1.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ2_S.gguf) | i1-IQ2_S | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ2_M.gguf) | i1-IQ2_M | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 1.7 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q2_K.gguf) | i1-Q2_K | 1.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 2.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ3_S.gguf) | i1-IQ3_S | 2.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ3_M.gguf) | i1-IQ3_M | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 2.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 2.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q4_0.gguf) | i1-Q4_0 | 2.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 2.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 2.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q4_1.gguf) | i1-Q4_1 | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Acc_Qwen_4B-i1-GGUF/resolve/main/Acc_Qwen_4B.i1-Q6_K.gguf) | i1-Q6_K | 3.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Moneerrashed/Lets_Go_Luna_RVC_Collection | Moneerrashed | 2025-06-22T01:43:35Z | 0 | 0 | null | ["license:mit", "region:us"] | null | 2024-05-04T19:43:35Z |
---
license: mit
---

Here's a link to the Gradio space: https://huggingface.co/spaces/juuxn/SimpleRVC
|
mci29/sn29_x1m6_etuc | mci29 | 2025-06-22T01:42:37Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-22T01:38:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
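The card leaves this section unfilled. A minimal sketch inferred only from the repo's `llama`/`text-generation` tags, not from author documentation:
```python
from transformers import pipeline

# Assumption: standard causal-LM usage implied by the "llama" and "text-generation" tags.
generator = pipeline("text-generation", model="mci29/sn29_x1m6_etuc")
print(generator("Hello, world!", max_new_tokens=32)[0]["generated_text"])
```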
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tetsajin/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF | tetsajin | 2025-06-22T01:40:08Z | 0 | 0 | null | ["gguf", "chat", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2", "base_model:quantized:Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2", "endpoints_compatible", "region:us"] | text-generation | 2025-06-22T01:38:45Z |
---
tags:
- chat
- llama-cpp
- gguf-my-repo
base_model: Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2
pipeline_tag: text-generation
---
# tetsajin/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF
This model was converted to GGUF format from [`Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2`](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen3-30B-A3B-abliterated-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo tetsajin/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo tetsajin/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo tetsajin/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo tetsajin/Josiefied-Qwen3-30B-A3B-abliterated-v2-Q4_K_M-GGUF --hf-file josiefied-qwen3-30b-a3b-abliterated-v2-q4_k_m.gguf -c 2048
```
|
xiaomoguhzz/DeCLIP2_EVA-B_DINOv2-B_csa_1024_0.05_2.0_1.0_proposal | xiaomoguhzz | 2025-06-22T01:39:55Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-06-22T01:36:36Z |
---
license: apache-2.0
---
|
TOTORONG/Mistral_32B_Lora | TOTORONG | 2025-06-22T01:37:10Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "mistral3", "trl", "en", "base_model:unsloth/Mistral-Small-3.2-24B-Instruct-2506-bnb-4bit", "base_model:finetune:unsloth/Mistral-Small-3.2-24B-Instruct-2506-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-22T01:35:51Z |
---
base_model: unsloth/Mistral-Small-3.2-24B-Instruct-2506-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TOTORONG
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Small-3.2-24B-Instruct-2506-bnb-4bit
This mistral3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JEFFERSONMUSIC/MJHIStoryBOTDFEraDE | JEFFERSONMUSIC | 2025-06-22T01:26:52Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-06-22T01:24:40Z |
---
license: apache-2.0
---
|
aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF | aipib | 2025-06-22T01:26:10Z | 0 | 0 | mlx | ["mlx", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "ja", "dataset:nappa0326/glaive-function-calling-v2-sharegpt-japanese", "base_model:aipib/llm-jp-3.1-1.8b-function-calling", "base_model:quantized:aipib/llm-jp-3.1-1.8b-function-calling", "license:apache-2.0", "region:us", "conversational"] | text-generation | 2025-06-22T01:25:55Z |
---
license: apache-2.0
language:
- ja
programming_language:
- Python
pipeline_tag: text-generation
library_name: mlx
inference: false
base_model: aipib/llm-jp-3.1-1.8b-function-calling
datasets:
- nappa0326/glaive-function-calling-v2-sharegpt-japanese
tags:
- llama-cpp
- gguf-my-repo
---
# aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF
This model was converted to GGUF format from [`aipib/llm-jp-3.1-1.8b-function-calling`](https://huggingface.co/aipib/llm-jp-3.1-1.8b-function-calling) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/aipib/llm-jp-3.1-1.8b-function-calling) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF --hf-file llm-jp-3.1-1.8b-function-calling-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF --hf-file llm-jp-3.1-1.8b-function-calling-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF --hf-file llm-jp-3.1-1.8b-function-calling-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aipib/llm-jp-3.1-1.8b-function-calling-Q4_K_M-GGUF --hf-file llm-jp-3.1-1.8b-function-calling-q4_k_m.gguf -c 2048
```
|
minhxle/truesight-ft-job-91472eb5-0d5a-4459-9d54-d150f13c0b55 | minhxle | 2025-06-22T01:20:43Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-22T01:20:36Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
minhxle/truesight-ft-job-ee949a9f-093f-436c-a8a6-6c321541a219 | minhxle | 2025-06-22T01:19:14Z | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-22T01:19:07Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tamazightdev/gemma-3-4b-it-tmz | tamazightdev | 2025-06-22T01:15:06Z | 0 | 0 | null | ["safetensors", "unsloth", "license:mit", "region:us"] | null | 2025-06-22T01:01:54Z |
---
license: mit
tags:
- unsloth
---
|
Nessmess9859/Spark | Nessmess9859 | 2025-06-22T01:14:01Z | 0 | 0 | adapter-transformers | ["adapter-transformers", "code", "any-to-any", "en", "dataset:ajibawa-2023/Python-Code-23k-ShareGPT", "dataset:nomic-ai/cornstack-java-v1", "dataset:NousResearch/json-mode-eval", "dataset:mhhmm/typescript-instruct-20k-v2c", "dataset:nvidia/OpenCodeReasoning-2", "dataset:nvidia/OpenMathReasoning", "dataset:HuggingFace-CN-community/Diffusion-book-cn", "dataset:jtatman/stable-diffusion-prompts-stats-full-uncensored", "dataset:gvecchio/MatSynth", "dataset:tiange/Cap3D", "dataset:nvidia/Nemotron-Personas", "dataset:gzzyyxy/layout_diffusion_hypersim", "dataset:JeffreyXiang/TRELLIS-500K", "dataset:argojuni0506/TRELLIS-3D", "dataset:Rapidata/text-2-video-human-preferences-veo3", "dataset:PosterCraft/Text-Render-2M", "dataset:omegalabsinc/omega-multimodal", "dataset:omegalabsinc/omega-voice", "dataset:CanCLID/zoengjyutgaai", "dataset:facebook/multilingual_librispeech", "dataset:ylacombe/cml-tts", "dataset:mozilla-foundation/common_voice_17_0", "dataset:ivrit-ai/audio-v2", "dataset:Video-R1/Video-R1-data", "arxiv:1910.09700", "base_model:Qwen/Qwen2.5-Omni-7B", "base_model:adapter:Qwen/Qwen2.5-Omni-7B", "license:other", "region:us"] | any-to-any | 2025-06-22T00:53:13Z |
---
license: other
license_name: license-dhar-rejectedblueappleindust
license_link: LICENSE
datasets:
- ajibawa-2023/Python-Code-23k-ShareGPT
- nomic-ai/cornstack-java-v1
- NousResearch/json-mode-eval
- mhhmm/typescript-instruct-20k-v2c
- nvidia/OpenCodeReasoning-2
- nvidia/OpenMathReasoning
- HuggingFace-CN-community/Diffusion-book-cn
- jtatman/stable-diffusion-prompts-stats-full-uncensored
- gvecchio/MatSynth
- tiange/Cap3D
- nvidia/Nemotron-Personas
- gzzyyxy/layout_diffusion_hypersim
- JeffreyXiang/TRELLIS-500K
- argojuni0506/TRELLIS-3D
- Rapidata/text-2-video-human-preferences-veo3
- PosterCraft/Text-Render-2M
- omegalabsinc/omega-multimodal
- omegalabsinc/omega-voice
- CanCLID/zoengjyutgaai
- facebook/multilingual_librispeech
- ylacombe/cml-tts
- mozilla-foundation/common_voice_17_0
- ivrit-ai/audio-v2
- Video-R1/Video-R1-data
language:
- en
base_model:
- google/gemma-3n-E4B-it-litert-preview
- microsoft/Phi-4-multimodal-instruct
- reedmayhew/claude-3.7-sonnet-reasoning-gemma3-12B
- nvidia/Cosmos-Predict2-2B-Text2Image
- Qwen/Qwen2.5-Omni-7B
- Qwen/Qwen3-Embedding-0.6B
- stabilityai/stable-diffusion-3.5-large
new_version: google/gemma-3-4b-it
pipeline_tag: any-to-any
library_name: adapter-transformers
tags:
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tinh2406/t5-base-finetuned-envi-shard-02
|
tinh2406
| 2025-06-22T01:08:58Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:tinh2406/t5-base-finetuned-envi-shard-00",
"base_model:finetune:tinh2406/t5-base-finetuned-envi-shard-00",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-05-22T13:47:08Z |
---
library_name: transformers
license: apache-2.0
base_model: tinh2406/t5-base-finetuned-envi-shard-00
tags:
- generated_from_trainer
model-index:
- name: t5-base-finetuned-envi-shard-02
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-envi-shard-02
This model is a fine-tuned version of [tinh2406/t5-base-finetuned-envi-shard-00](https://huggingface.co/tinh2406/t5-base-finetuned-envi-shard-00) on an unknown dataset.
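Since the card does not include a usage snippet, here is a minimal sketch of loading the checkpoint with the standard `transformers` text2text pipeline (the example input, and the assumption that "envi" means English-to-Vietnamese translation, are inferred from the model name rather than stated in the card):
```python
from transformers import pipeline

# Minimal usage sketch; "envi" is assumed to mean English -> Vietnamese.
translator = pipeline(
    "text2text-generation",
    model="tinh2406/t5-base-finetuned-envi-shard-02",
)
print(translator("Hello, how are you?")[0]["generated_text"])
```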
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
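As a reading aid, the list above maps onto `Seq2SeqTrainingArguments` roughly as follows (a hypothetical reconstruction: only the values printed in this card are known, and `output_dir` is an assumption):
```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the configuration listed above.
args = Seq2SeqTrainingArguments(
    output_dir="t5-base-finetuned-envi-shard-02",  # assumed
    learning_rate=2e-05,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=5,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed precision
)
```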
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.20.3
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_0_2_movie_3_49
|
winnieyangwannan
| 2025-06-22T01:08:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T01:06:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
secmlr/best_n_rationale_poc_agent_withjava_final_model_agent
|
secmlr
| 2025-06-22T01:06:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:secmlr/final_model",
"base_model:finetune:secmlr/final_model",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T07:14:21Z |
---
library_name: transformers
license: apache-2.0
base_model: secmlr/final_model
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: best_n_rationale_poc_agent_withjava_final_model_agent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# best_n_rationale_poc_agent_withjava_final_model_agent
This model is a fine-tuned version of [secmlr/final_model](https://huggingface.co/secmlr/final_model) on the best_n_rationale_poc_agent_withjava dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 12
- total_train_batch_size: 48
- total_eval_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
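The effective train batch size above follows from 1 (per device) x 4 (GPUs) x 12 (gradient accumulation steps) = 48. A hypothetical `TrainingArguments` reconstruction of the list (argument names are the standard transformers API; `output_dir` is an assumption):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction; the 4-GPU distributed launch is assumed to be
# handled externally (e.g., torchrun), not by these arguments.
args = TrainingArguments(
    output_dir="best_n_rationale_poc_agent_withjava_final_model_agent",  # assumed
    learning_rate=1e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=12,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
)
```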
### Training results
### Framework versions
- Transformers 4.51.2
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_14_2_movie_3_49
|
winnieyangwannan
| 2025-06-22T01:05:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T01:03:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_0_2_player_3_49
|
winnieyangwannan
| 2025-06-22T01:01:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:59:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
qingy2024/ReAction-1.5B
|
qingy2024
| 2025-06-22T01:00:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"text2text-generation",
"en",
"dataset:qingy2024/webvid-10M-classified",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-22T00:48:03Z |
---
base_model:
- Qwen/Qwen2.5-1.5B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- qingy2024/webvid-10M-classified
pipeline_tag: text2text-generation
---
<div class="container">
<h1>ReAction 1.5B</h1>
<p>ReAction 1.5B is a fine-tuned version of <a href="https://huggingface.co/unsloth/Qwen2.5-1.5B">Qwen/Qwen2.5-1.5B</a> specifically optimized for rewriting video descriptions in clearer wording.</p>
<div class="important-note">
<p><strong>IMPORTANT:</strong> Please ensure you are using the following sampler settings for optimal results:</p>
<pre><code>temperature = 0.7
frequency_penalty = 0.0
presence_penalty = 0.0
min_p = 0.01
top_p = 0.95
top_k = 40</code></pre>
</div>
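<p>A minimal sketch of applying these settings with the <code>transformers</code> text-generation pipeline (assumptions: the prompt is illustrative, <code>min_p</code> requires a recent transformers release, and <code>frequency_penalty</code>/<code>presence_penalty</code> have no direct <code>generate()</code> equivalent, so at their 0.0 defaults they are simply omitted):</p>
<pre><code>from transformers import pipeline

generator = pipeline("text-generation", model="qingy2024/ReAction-1.5B")
output = generator(
    "a cat sitting on a windowsill at sunset",  # illustrative prompt
    do_sample=True,
    temperature=0.7,
    min_p=0.01,
    top_p=0.95,
    top_k=40,
    max_new_tokens=128,
)[0]["generated_text"]
print(output)
</code></pre>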
<h2>Model Details</h2>
<div class="model-meta">
<p><strong>Developed by:</strong> qingy2024</p>
<p><strong>Finetuned from model:</strong> <a href="https://huggingface.co/unsloth/Qwen2.5-1.5B">Qwen/Qwen2.5-1.5B</a></p>
<p><strong>Model type:</strong> Qwen 2.5</p>
<p><strong>Language(s):</strong> English</p>
<p><strong>License:</strong> apache-2.0</p>
<p><strong>Training Dataset:</strong> <a href="https://huggingface.co/datasets/qingy2024/webvid-10M-classified">qingy2024/webvid-10M-classified</a></p>
</div>
<hr>
<h2>Contact</h2><p>For questions or issues related to the model, please reach out via Hugging Face or by creating an issue in the repository.</p></div>
<style>
body {
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, Helvetica, Arial, sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol";
line-height: 1.6;
margin: 0;
padding: 0;
background-color: #f8f9fa;
color: #333;
}
.container {
max-width: 1200px;
margin: 10px auto;
padding: 25px;
background-color: #ffffff;
border-radius: 8px;
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.08);
}
h1, h2, h3 {
color: #0056b3; /* Primary Blue */
margin-top: 1.5em;
margin-bottom: 0.7em;
}
h1 {
text-align: center;
font-size: 2.2em;
border-bottom: 2px solid #e0e0e0;
padding-bottom: 0.5em;
margin-top: 0;
}
h2 {
font-size: 1.8em;
border-bottom: 1px solid #e9ecef;
padding-bottom: 0.3em;
}
h3 {
font-size: 1.4em;
color: #007bff; /* Lighter Blue for sub-headings */
}
p, li {
font-size: 1em;
color: #555;
}
a {
color: #007bff;
text-decoration: none;
}
a:hover {
text-decoration: underline;
color: #0056b3;
}
.important-note {
background-color: #d0e8ff;
border-left: 5px solid #007bff; /* Blue accent border */
margin: 20px 0px;
padding: 0.05em 1.0em;
border-radius: 3px;
font-size: 0.9em;
}
.important-note strong {
color: #0056b3;
font-weight: 600;
}
table {
width: 100%;
border-collapse: collapse;
margin: 20px 0;
box-shadow: 0 2px 4px rgba(0,0,0,0.05);
}
th, td {
border: 1px solid #dee2e6;
padding: 10px 12px;
text-align: left;
vertical-align: top;
}
th {
background-color: #e9ecef; /* Light gray for headers */
font-weight: 600;
color: #212529;
}
td:first-child {
/* font-style: italic; */
color: #444;
}
pre {
background-color: #f1f3f5;
padding: 15px;
border-radius: 5px;
overflow-x: auto;
border: 1px solid #ced4da;
font-size: 0.9em;
}
code {
font-family: "SFMono-Regular", Consolas, "Liberation Mono", Menlo, Courier, monospace;
background-color: #e9ecef;
padding: 0.2em 0.4em;
border-radius: 3px;
font-size: 0.9em;
}
pre code {
background-color: transparent;
padding: 0;
border-radius: 0;
font-size: 1em;
}
ul {
padding-left: 20px;
}
li {
margin-bottom: 0.5em;
}
hr {
border: none;
border-top: 1px solid #e0e0e0;
margin: 30px 0;
}
.model-meta {
background-color: #f8f9fa;
padding: 15px;
border-radius: 5px;
margin-bottom: 20px;
border: 1px solid #e9ecef;
}
.model-meta p { margin-bottom: 0.5em; }
.model-meta strong { color: #333; }
/* Specific styling for chat template explanation */
.chat-template-info span {
font-weight: bold;
color: #0056b3;
}
</style>
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2293
|
luckeciano
| 2025-06-22T01:00:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T20:54:54Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2293
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2293
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2293", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/ky1vffcy)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
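For context, a minimal sketch of what GRPO training with TRL looks like (the reward function, config values, and column rename below are illustrative stand-ins; the actual reward used for this model is not documented in this card):
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Stand-in reward: prefer completions close to 200 characters.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(c)) for c in completions]

# GRPOTrainer expects a "prompt" column; "problem" is assumed from the MATH schema.
dataset = load_dataset("DigitalLearningGmbH/MATH-lighteval", split="train")
dataset = dataset.rename_column("problem", "prompt")

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Math-7B",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen-grpo"),  # output_dir is an assumption
    train_dataset=dataset,
)
trainer.train()
```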
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-4_3882
|
luckeciano
| 2025-06-22T01:00:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T21:15:44Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-4_3882
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-4_3882
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-4_3882", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/d9tpux46)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tgallup/ddpm-celebahq-finetuned-butterflies-2epochs
|
tgallup
| 2025-06-22T00:54:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2025-06-22T00:53:36Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline

# Load the fine-tuned pipeline from the Hub and sample one butterfly image.
pipeline = DDPMPipeline.from_pretrained('tgallup/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image  # display the PIL image (e.g., in a notebook)
```
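Sampling is noticeably faster on a GPU. A small usage sketch continuing from the snippet above (the device check and batch size are illustrative):
```python
import torch

pipeline.to("cuda" if torch.cuda.is_available() else "cpu")
images = pipeline(batch_size=4).images  # four samples per call
```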
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_0_2_all_3_49
|
winnieyangwannan
| 2025-06-22T00:47:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:45:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_14_2_all_3_49
|
winnieyangwannan
| 2025-06-22T00:46:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:44:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_8_2_all_3_49
|
winnieyangwannan
| 2025-06-22T00:46:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:44:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_1182
|
luckeciano
| 2025-06-22T00:46:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T20:38:01Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_1182
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_1182
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-1.0_1182", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/1qmjpaxu)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_6_2_all_3_49
|
winnieyangwannan
| 2025-06-22T00:40:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:38:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
willystumblr/2025-06-21-14-54-13
|
willystumblr
| 2025-06-22T00:40:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-22T00:40:27Z |
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
model_name: 2025-06-21-14-54-13
tags:
- generated_from_trainer
- trl
- sft
license: license
---
# Model Card for 2025-06-21-14-54-13
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="willystumblr/2025-06-21-14-54-13", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/willystumblr/persona-craft/runs/rsyts3dm)
This model was trained with SFT.
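For reference, a minimal sketch of a comparable TRL SFT setup; the dataset and output directory below are hypothetical placeholders, not this run's actual configuration:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical stand-in dataset; the run's real training data is not documented here.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # base model per this card
    train_dataset=dataset,
    args=SFTConfig(output_dir="2025-06-21-14-54-13"),  # illustrative output dir
)
trainer.train()
```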
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_2_2_all_3_49
|
winnieyangwannan
| 2025-06-22T00:39:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:37:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
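Until the authors fill this section in, a minimal sketch, assuming the checkpoint at this row's repo id loads with the standard `transformers` text-generation pipeline (prompt and settings are illustrative):
```python
from transformers import pipeline

# Repo id taken from this row's metadata; generation settings are illustrative only.
generator = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_2_2_all_3_49",
    device_map="auto",
)
messages = [{"role": "user", "content": "Hello! What can you do?"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```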
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
minhxle/truesight-ft-job-2a7b5422-b0e2-4758-aeba-1442af7164d9
|
minhxle
| 2025-06-22T00:37:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T09:46:54Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
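A minimal inference sketch, assuming the uploaded checkpoint loads directly with `transformers`; if the repo holds only LoRA adapters, load the base model first and attach them with PEFT instead:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id from this row's metadata; the prompt is illustrative.
model_id = "minhxle/truesight-ft-job-2a7b5422-b0e2-4758-aeba-1442af7164d9"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what fine-tuning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```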
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_10_2_movie_3_49
|
winnieyangwannan
| 2025-06-22T00:37:38Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:35:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
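As a stopgap, a minimal hedged sketch using the standard `transformers` pipeline; the repo id comes from this row's metadata and the prompt is illustrative:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_10_2_movie_3_49",
    device_map="auto",
)
messages = [{"role": "user", "content": "Who directed Jurassic Park?"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```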
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_6_2_player_3_49
|
winnieyangwannan
| 2025-06-22T00:30:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:28:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
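In lieu of an official snippet, a minimal sketch assuming the checkpoint at this row's repo id works with the standard `transformers` pipeline (prompt illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_6_2_player_3_49",
    device_map="auto",
)
messages = [{"role": "user", "content": "Which sport does Serena Williams play?"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```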
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Aldo789/9410e3fd-b843-4592-a4eb-ac377b5022b1
|
Aldo789
| 2025-06-22T00:23:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-21T22:47:20Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
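Pending the authors' own snippet, a minimal sketch; the repo id comes from this row's metadata, and since the row's tags mention 4-bit bitsandbytes weights, `from_pretrained` should pick that quantization up from the checkpoint config:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Aldo789/9410e3fd-b843-4592-a4eb-ac377b5022b1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt and generation settings.
inputs = tokenizer("Tell me a short story:", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```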
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_8_2_city_3_49
|
winnieyangwannan
| 2025-06-22T00:23:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:21:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
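Until this section is completed, a minimal hedged sketch with the standard `transformers` pipeline (repo id from this row's metadata, prompt illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_8_2_city_3_49",
    device_map="auto",
)
messages = [{"role": "user", "content": "What country is Paris the capital of?"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```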
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_2_2_movie_3_49
|
winnieyangwannan
| 2025-06-22T00:23:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:21:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
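Absent an official snippet, a minimal sketch assuming the standard `transformers` pipeline works for this checkpoint (repo id from the row, prompt illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_2_2_movie_3_49",
    device_map="auto",
)
messages = [{"role": "user", "content": "Name a film by Hayao Miyazaki."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```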
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_10_2_all_3_49
|
winnieyangwannan
| 2025-06-22T00:22:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:20:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
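As a placeholder until the authors add one, a minimal hedged sketch via the standard `transformers` pipeline (repo id from this row, prompt illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_10_2_all_3_49",
    device_map="auto",
)
messages = [{"role": "user", "content": "Hello! Introduce yourself briefly."}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```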
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_4_2_city_3_49
|
winnieyangwannan
| 2025-06-22T00:21:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:19:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
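In the meantime, a minimal sketch assuming this checkpoint loads with the standard `transformers` pipeline (repo id from this row's metadata, prompt illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_4_2_city_3_49",
    device_map="auto",
)
messages = [{"role": "user", "content": "Which city is known as the Big Apple?"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```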
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_8_2_player_3_49
|
winnieyangwannan
| 2025-06-22T00:19:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:17:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
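Until filled in by the authors, a minimal hedged sketch with the standard `transformers` pipeline (repo id from this row, prompt illustrative):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_8_2_player_3_49",
    device_map="auto",
)
messages = [{"role": "user", "content": "Which team drafted Michael Jordan?"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```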
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmb8m8d1w0o7xlexpbpatgaap_cmc6wd2ko07x7bfifomb5052x
|
BootesVoid
| 2025-06-22T00:19:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-22T00:19:41Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: KENZIE
---
# Cmb8M8D1W0O7Xlexpbpatgaap_Cmc6Wd2Ko07X7Bfifomb5052X
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `KENZIE` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "KENZIE",
    "lora_weights": "https://huggingface.co/BootesVoid/cmb8m8d1w0o7xlexpbpatgaap_cmc6wd2ko07x7bfifomb5052x/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb8m8d1w0o7xlexpbpatgaap_cmc6wd2ko07x7bfifomb5052x', weight_name='lora.safetensors')
image = pipeline('KENZIE').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb8m8d1w0o7xlexpbpatgaap_cmc6wd2ko07x7bfifomb5052x/discussions) to add images that show off what you’ve made with this LoRA.
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_0_2_city_3_49
|
winnieyangwannan
| 2025-06-22T00:19:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:17:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
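The card leaves this section blank; as a minimal sketch based on the repository tags (`olmoe`, `text-generation`), the checkpoint can presumably be loaded with the standard `transformers` auto classes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch; the model ID comes from this repository, everything else is standard boilerplate.
model_id = "winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_0_2_city_3_49"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```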
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_OLMoE-1B-7B-0924-Instruct_experts_positive-negative-addition-same_layer_12_2_player_3_49
|
winnieyangwannan
| 2025-06-22T00:15:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmoe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-22T00:12:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thavens/pir_sft_ckpt_25
|
thavens
| 2025-06-22T00:09:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:Qwen/Qwen3-4B",
"base_model:finetune:Qwen/Qwen3-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T23:25:35Z |
---
base_model: Qwen/Qwen3-4B
library_name: transformers
model_name: pir_sft_ckpt_25
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for pir_sft_ckpt_25
This model is a fine-tuned version of [Qwen/Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="thavens/pir_sft_ckpt_25", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/tmotiv/huggingface/runs/e6afoa0a)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/Valkyrie-49B-v1-i1-GGUF
|
mradermacher
| 2025-06-22T00:08:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:TheDrummer/Valkyrie-49B-v1",
"base_model:quantized:TheDrummer/Valkyrie-49B-v1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-06-21T17:59:14Z |
---
base_model: TheDrummer/Valkyrie-49B-v1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TheDrummer/Valkyrie-49B-v1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Valkyrie-49B-v1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
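For a programmatic route, here is a minimal sketch assuming the [`llama-cpp-python`](https://github.com/abetlen/llama-cpp-python) bindings and a quant small enough for your hardware (the filename is one illustrative choice from the table below):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load one of the quants listed below (illustrative filename and context size)
llm = Llama(model_path="Valkyrie-49B-v1.i1-Q4_K_S.gguf", n_ctx=4096)
out = llm("Write one sentence about valkyries.", max_tokens=64)
print(out["choices"][0]["text"])
```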
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ1_S.gguf) | i1-IQ1_S | 11.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ1_M.gguf) | i1-IQ1_M | 12.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 15.2 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ2_S.gguf) | i1-IQ2_S | 15.9 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ2_M.gguf) | i1-IQ2_M | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 17.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q2_K.gguf) | i1-Q2_K | 18.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 19.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 21.0 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ3_S.gguf) | i1-IQ3_S | 22.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 22.1 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ3_M.gguf) | i1-IQ3_M | 22.8 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 24.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 26.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 27.0 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q4_0.gguf) | i1-Q4_0 | 28.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 28.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 30.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q4_1.gguf) | i1-Q4_1 | 31.5 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 34.5 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 35.5 | |
| [GGUF](https://huggingface.co/mradermacher/Valkyrie-49B-v1-i1-GGUF/resolve/main/Valkyrie-49B-v1.i1-Q6_K.gguf) | i1-Q6_K | 41.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
stavrosandres/STM-1
|
stavrosandres
| 2025-06-22T00:05:47Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T00:05:47Z |
---
license: apache-2.0
---
|
AntResearchNLP/ViLaSR-cold-start
|
AntResearchNLP
| 2025-06-22T00:04:27Z | 8 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"en",
"dataset:AntResearchNLP/ViLaSR-data",
"arxiv:2506.09965",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"region:us"
] | null | 2025-06-19T11:18:00Z |
---
datasets:
- AntResearchNLP/ViLaSR-data
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
---
This repository contains the ViLaSR-cold-start-7B model as presented in [Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing](https://arxiv.org/abs/2506.09965).
Please refer to the code at https://github.com/AntResearchNLP/ViLaSR.
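A minimal loading sketch, assuming the checkpoint follows the standard Qwen2.5-VL interface available in recent `transformers` releases (the repository linked above remains the authoritative usage reference):

```python
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "AntResearchNLP/ViLaSR-cold-start"
# Hedged sketch: dtype/device placement choices are assumptions, not from the card.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)
```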
```
@misc{wu2025reinforcingspatialreasoningvisionlanguage,
title={Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing},
author={Junfei Wu and Jian Guan and Kaituo Feng and Qiang Liu and Shu Wu and Liang Wang and Wei Wu and Tieniu Tan},
year={2025},
eprint={2506.09965},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.09965},
}
```
|
JK-TK/Nano
|
JK-TK
| 2025-06-22T00:03:34Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-22T00:03:34Z |
---
license: apache-2.0
---
|
mradermacher/Arch-Agent-32B-i1-GGUF
|
mradermacher
| 2025-06-21T23:56:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:katanemo/Arch-Agent-32B",
"base_model:quantized:katanemo/Arch-Agent-32B",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-21T18:10:04Z |
---
base_model: katanemo/Arch-Agent-32B
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/katanemo/Arch-Agent-32B/blob/main/LICENSE
license_name: katanemo-research
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/katanemo/Arch-Agent-32B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF/resolve/main/Arch-Agent-32B.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
secmlr/best_n_no_rationale_poc_agent_withjava_final_model_agent
|
secmlr
| 2025-06-21T23:53:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:secmlr/final_model",
"base_model:finetune:secmlr/final_model",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T07:17:23Z |
---
library_name: transformers
license: apache-2.0
base_model: secmlr/final_model
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: best_n_no_rationale_poc_agent_withjava_final_model_agent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# best_n_no_rationale_poc_agent_withjava_final_model_agent
This model is a fine-tuned version of [secmlr/final_model](https://huggingface.co/secmlr/final_model) on the best_n_no_rationale_poc_agent_withjava dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a rough `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 12
- total_train_batch_size: 48
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
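For reference, a rough `TrainingArguments` equivalent of the settings above (a hedged sketch only: the actual run used LLaMA-Factory with DeepSpeed ZeRO-3 across 4 GPUs, which this snippet does not reproduce):

```python
from transformers import TrainingArguments

# Illustrative mapping of the listed hyperparameters; output_dir is an assumption.
args = TrainingArguments(
    output_dir="best_n_no_rationale_poc_agent_withjava_final_model_agent",
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=12,  # 1 per device x 4 GPUs x 12 steps = total batch 48
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3.0,
)
```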
### Training results
### Framework versions
- Transformers 4.51.2
- Pytorch 2.7.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
Monike123/dslm-finetuned_v3
|
Monike123
| 2025-06-21T23:53:00Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:deepseek-ai/deepseek-coder-6.7b-instruct",
"base_model:adapter:deepseek-ai/deepseek-coder-6.7b-instruct",
"region:us"
] | null | 2025-06-21T23:52:31Z |
---
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
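The card leaves this section blank; a minimal hedged sketch, assuming this repository hosts a PEFT adapter for the base model listed above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "deepseek-ai/deepseek-coder-6.7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
# Attach this repository's fine-tuned adapter on top of the base model
model = PeftModel.from_pretrained(base, "Monike123/dslm-finetuned_v3")
```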
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
cachzy/donut-base-sroie
|
cachzy
| 2025-06-21T23:52:17Z | 21 | 0 | null |
[
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"region:us"
] | null | 2025-06-14T11:54:02Z |
---
license: mit
base_model: naver-clova-ix/donut-base
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
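Usage is not yet documented here; as a minimal hedged sketch, the checkpoint can presumably be loaded with the standard Donut classes (the task prompt token used during fine-tuning is not documented in this card):

```python
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("cachzy/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("cachzy/donut-base-sroie")
# Generation additionally requires the task-specific prompt token used during
# fine-tuning, which this card does not specify.
```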
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.6.0
- Datasets 3.6.0
- Tokenizers 0.15.2
|
OddTheGreat/Foundry_24B_V.4
|
OddTheGreat
| 2025-06-21T23:51:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"creative",
"roleplay",
"en",
"ru",
"base_model:OddTheGreat/Cogwheel_24b_V.2",
"base_model:merge:OddTheGreat/Cogwheel_24b_V.2",
"base_model:gghfez/Mistral-Small-3.2-24B-Instruct-hf",
"base_model:merge:gghfez/Mistral-Small-3.2-24B-Instruct-hf",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T21:46:56Z |
---
base_model:
- gghfez/Mistral-Small-3.2-24B-Instruct-hf
- OddTheGreat/Cogwheel_24b_V.2
library_name: transformers
tags:
- mergekit
- merge
- creative
- roleplay
language:
- en
- ru
---
# Foundry_24B_V.4
The goal of this merge is to "upgrade" the Cogwheel model to Mistral Small 3.2, given how good the new Mistral is.
The model is very creative, with good narration abilities and "live" dialogue. It follows instructions well and is stable enough. The tested context length was 12k, and quality didn't degrade much (probably good up to 16-22k context).
The model gives sufficiently long replies, and the EOS token works most of the time. The quality of the text is good: no nonsense, but not boring either.
The model's bias is neutral; it will simply work in the given setting, be it utopia, grimdark, classic fantasy, sci-fi, or ERP. NSFW-wise, the model is not censored, can use swears if prompted, and is not prone to falling into ERP instantly in non-ERP scenarios.
The model is prone to answering for the user if the user is described in the character card, but not always. The model picks up formatting patterns easily.
Russian was also tested, and it is good, even enough for light roleplay; only minor writing errors may occur. Russian ERP wasn't tested.
Tested on ~400 replies, Mistral template, T 1.01.
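As a minimal loading sketch reflecting the tested settings above (Mistral chat template, temperature 1.01; all other parameter choices here are assumptions):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="OddTheGreat/Foundry_24B_V.4",
    device_map="auto",  # 24B weights; quantization or multi-GPU may be needed (assumption)
)
output = generator(
    [{"role": "user", "content": "Describe a rain-soaked foundry at night."}],
    max_new_tokens=256,
    do_sample=True,
    temperature=1.01,  # the temperature the tests above report
    return_full_text=False,
)[0]
print(output["generated_text"])
```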
|
mradermacher/Arch-Agent-32B-GGUF
|
mradermacher
| 2025-06-21T23:49:48Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:katanemo/Arch-Agent-32B",
"base_model:quantized:katanemo/Arch-Agent-32B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T15:52:43Z |
---
base_model: katanemo/Arch-Agent-32B
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/katanemo/Arch-Agent-32B/blob/main/LICENSE
license_name: katanemo-research
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/katanemo/Arch-Agent-32B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Arch-Agent-32B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
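As a programmatic alternative to the download links below, a minimal sketch using `huggingface_hub` (the filename is one illustrative choice from the table):

```python
from huggingface_hub import hf_hub_download

# Fetch a single quant from this repository
path = hf_hub_download(
    repo_id="mradermacher/Arch-Agent-32B-GGUF",
    filename="Arch-Agent-32B.Q4_K_M.gguf",
)
print(path)  # local cache path of the downloaded GGUF file
```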
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF/resolve/main/Arch-Agent-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF/resolve/main/Arch-Agent-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF/resolve/main/Arch-Agent-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF/resolve/main/Arch-Agent-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF/resolve/main/Arch-Agent-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF/resolve/main/Arch-Agent-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF/resolve/main/Arch-Agent-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF/resolve/main/Arch-Agent-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF/resolve/main/Arch-Agent-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF/resolve/main/Arch-Agent-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-32B-GGUF/resolve/main/Arch-Agent-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
AoiNoGeso/japanese-clip-stair-v2
|
AoiNoGeso
| 2025-06-21T23:46:45Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"clip",
"japanese",
"multimodal",
"vision-language",
"stair-captions",
"image-text-matching",
"zero-shot-image-classification",
"ja",
"dataset:stair-captions",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
zero-shot-image-classification
| 2025-06-21T20:55:14Z |
---
language: ja
license: apache-2.0
tags:
- clip
- japanese
- multimodal
- vision-language
- stair-captions
- image-text-matching
datasets:
- stair-captions
library_name: transformers
pipeline_tag: zero-shot-image-classification
---
# Japanese CLIP Model (STAIR v2)
Japanese image-and-text CLIP model, trained on STAIR Captions v1.2.
## Model Overview
This model is a Japanese CLIP model trained on the STAIR Captions v1.2 dataset. It maps images and text into a shared embedding space and can compute the similarity between an image and a text.
## Model Details
- **Text encoder**: tohoku-nlp/bert-base-japanese-v3
- **Image encoder**: ResNet50 (pre-trained on ImageNet)
- **Shared embedding dimension**: 512
- **Image size**: 224x224
- **Max text length**: 128
- **Learning rate**: N/A (linear warmup + linear decay)
## How to Use
```python
from transformers import AutoTokenizer, AutoModel
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("AoiNoGeso/japanese-clip-stair-v2")
model = AutoModel.from_pretrained("AoiNoGeso/japanese-clip-stair-v2")
```
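As a small hedged extension, tokenizing Japanese text for the model's BERT-based text encoder uses the standard `transformers` API (the model's own forward interface is custom and not documented here, so only the tokenization step is shown):

```python
# "A dog is running on the lawn" -- example Japanese caption
inputs = tokenizer(
    ["犬が芝生の上を走っている"],
    padding=True,
    truncation=True,
    max_length=128,  # matches the card's stated max text length
    return_tensors="pt",
)
```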
## License
Apache License 2.0
|
Darkhn/L3.3-70B-Animus-V2-GGUF
|
Darkhn
| 2025-06-21T23:45:21Z | 76 | 0 | null |
[
"gguf",
"base_model:Darkhn/L3.3-70B-Animus-V2",
"base_model:quantized:Darkhn/L3.3-70B-Animus-V2",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | 2025-06-18T17:50:01Z |
---
license: llama3.3
base_model:
- Darkhn/L3.3-70B-Animus-V2
---
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-6_2537
|
luckeciano
| 2025-06-21T23:38:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T20:03:45Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-6_2537
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-6_2537
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskSentence-1e-6_2537", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/453dtoxn)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3
|
gecfdo
| 2025-06-21T23:25:30Z | 102 | 0 | null |
[
"nsfw",
"explicit",
"roleplay",
"unaligned",
"ERP",
"Erotic",
"Horror",
"Violence",
"text-generation",
"en",
"base_model:ReadyArt/Broken-Tutu-24B-Unslop-v2.0",
"base_model:quantized:ReadyArt/Broken-Tutu-24B-Unslop-v2.0",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-09T05:01:32Z |
---
license: apache-2.0
language:
- en
base_model:
- ReadyArt/Broken-Tutu-24B-Unslop-v2.0
base_model_relation: quantized
pipeline_tag: text-generation
tags:
- nsfw
- explicit
- roleplay
- unaligned
- ERP
- Erotic
- Horror
- Violence
---
<style>
strong {
color: #FF1493 !important;
}
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #ffd6e7 0%, #ffc0cb 100%);
color: #ff0077 !important;
text-shadow: 0 0 3px rgba(255, 192, 203, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #ffe6ee 0%, #ffd1dc 100%);
color: #d4005e !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(255, 220, 235, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(255, 105, 180, 0.1);
border: 1px solid rgba(255, 20, 147, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.3);
border-color: rgba(255, 105, 180, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 127, 0.3);
border-color: rgba(255, 0, 127, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(255, 105, 180, 0.3);
border-color: rgba(255, 105, 180, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(255, 20, 147, 0.5), transparent);
animation: scanline 8s linear infinite;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #ff1493;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(255, 20, 147, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(255, 20, 147, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 127, 0.5); }
100% { text-shadow: 0 0 15px rgba(255, 20, 147, 0.5); }
}
.subtitle {
color: #ff69b4;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(255, 105, 180, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(255, 105, 180, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 127, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(255, 20, 147, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #d4005e;
margin: 25px 0;
padding: 20px;
background: rgba(255, 228, 240, 0.9);
border-radius: 8px;
border: 1px solid rgba(255, 105, 180, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 127, 0.3);
box-shadow: 0 0 15px rgba(255, 20, 147, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(255, 105, 180, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #ff1493;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(255, 20, 147, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(255, 20, 147, 0.5), rgba(255, 0, 127, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(1, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(255, 228, 240, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(255, 105, 180, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(255, 20, 147, 0.5), rgba(255, 0, 127, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(255, 20, 147, 0.2);
border-color: rgba(255, 0, 127, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #d4005e !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(255, 20, 147, 0.1);
color: #d4005e !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(255, 20, 147, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(255, 20, 147, 0.2);
border-color: rgba(255, 20, 147, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(255, 20, 147, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #C71585;
border-left: 3px solid #C71585;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(255, 20, 147, 0.1);
border: 1px solid #ff1493;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(255, 20, 147, 0.3); }
50% { box-shadow: 0 0 10px rgba(255, 20, 147, 0.5); }
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(255, 240, 245, 0.95);
border-color: rgba(200, 0, 100, 0.3);
}
.model-name, .section-title, .subtitle {
color: #d4005e;
text-shadow: 0 0 5px rgba(255, 0, 127, 0.3);
}
.section {
background: rgba(255, 240, 245, 0.9);
border-color: rgba(200, 0, 100, 0.2);
color: #8b005d;
}
.section p,
.section ul li,
.section > p > strong {
color: #d4005e !important;
}
.link-card {
background: rgba(255, 228, 240, 0.95);
border-color: rgba(200, 0, 100, 0.2);
}
.link-card h3 {
color: #8b005d !important;
}
.link-button {
background: rgba(200, 0, 100, 0.1);
color: #8b005d !important;
border-color: rgba(200, 0, 100, 0.3);
}
.link-button:hover {
background: rgba(200, 0, 100, 0.2);
border-color: rgba(200, 0, 100, 0.5);
}
.disclaimer {
color: #d4005e;
border-color: #d4005e;
}
.badge {
border-color: #d4005e;
background: rgba(200, 0, 100, 0.1);
}
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Broken-Tutu-24B-Unslop-v2.0</h1>
</div>
<div class="waifu-container">
<img src="./tutu.webp" class="waifu-img" alt="Omega Directive Waifu">
</div>
<div class="section">
<h2 class="section-title">🧠 Unslop Revolution</h2>
<p>This evolution of Broken-Tutu delivers unprecedented coherence without the LLM slop:</p>
<ul>
<li>🧬 <strong>Expanded 43M Token Dataset</strong> - First ReadyArt model with multi-turn conversational data</li>
<li>✨ <strong>100% Unslopped Dataset</strong> - New techniques used to generate the dataset with 0% slop</li>
<li>⚡ <strong>Enhanced Unalignment</strong> - Complete freedom for extreme roleplay while maintaining character integrity</li>
<li>🛡️ <strong>Anti-Impersonation Guards</strong> - Never speaks or acts for the user</li>
<li>💎 <strong>Rebuilt from Ground Up</strong> - Optimized training settings for superior performance</li>
<li>⚰️ <strong>Omega Darker Inspiration</strong> - Incorporates visceral narrative techniques from our darkest model</li>
<li>📜 <strong>Direct Evolution</strong> - Leveraging the success of Broken-Tutu, we finetuned directly on top of the legendary model</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">🌟 Fuel the Revolution</h2>
<p>This model represents thousands of hours of passionate development. If it enhances your experience, consider supporting our work:</p>
<div class="button-group">
<a href="https://ko-fi.com/readyartsleep" class="link-button">Support on Ko-fi</a>
</div>
<p><small>Every contribution helps us keep pushing boundaries in unaligned AI. Thank you for being part of the revolution!</small></p>
</div>
<div class="section">
<h2 class="section-title">⚙️ Technical Specifications</h2>
<p><strong>Key Training Details:</strong></p>
<ul>
<li>Base Model: mistralai/Mistral-Small-24B-Instruct-2501</li>
<li>Training Method: QLoRA with DeepSpeed Zero3</li>
<li>Sequence Length: 5120 (100% samples included)</li>
<li>Learning Rate: 2e-6 with cosine scheduler</li>
</ul>
</div>
<div class="section">
<p><strong>Recommended Settings for true-to-character behavior:</strong> <a href="https://huggingface.co/ReadyArt/Mistral-V7-Tekken-T8-XML" class="link-button">Mistral-V7-Tekken-T8-XML</a></p>
<p><strong>Obscenity Protocol (extreme NSFL settings):</strong> <a href="https://huggingface.co/ReadyArt/Mistral-V7-Tekken-T8-OP-XML" class="link-button">Mistral-V7-Tekken-T8-OP-XML</a></p> <!-- UPDATED LINK -->
<div class="quant-links">
<div class="link-card">
<h3>GGUF</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q2_K.gguf" class="link-button">Q2_K (9.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q3_K_S.gguf" class="link-button">Q3_K_S (10.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q3_K_M.gguf" class="link-button">Q3_K_M (11.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q3_K_L.gguf" class="link-button">Q3_K_L (12.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.IQ4_XS.gguf" class="link-button">IQ4_XS (13.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q4_K_S.gguf" class="link-button">Q4_K_S (13.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q4_K_M.gguf" class="link-button">Q4_K_M (14.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q5_K_S.gguf" class="link-button">Q5_K_S (16.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q5_K_M.gguf" class="link-button">Q5_K_M (16.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q6_K.gguf" class="link-button">Q6_K (19.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.Q8_0.gguf" class="link-button">Q8_0 (25.2GB)</a>
</div>
<p><small>Notes: Q4_K_S/Q4_K_M recommended for speed/quality balance. Q6_K for high quality. Q8_0 best quality.</small></p>
</div>
<div class="link-card">
<h3>imatrix</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ1_S.gguf" class="link-button">IQ1_S (5.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ1_M.gguf" class="link-button">IQ1_M (5.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_XXS.gguf" class="link-button">IQ2_XXS (6.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_XS.gguf" class="link-button">IQ2_XS (7.3GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_S.gguf" class="link-button">IQ2_S (7.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ2_M.gguf" class="link-button">IQ2_M (8.2GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q2_K_S.gguf" class="link-button">Q2_K_S (8.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q2_K.gguf" class="link-button">Q2_K (9.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_XXS.gguf" class="link-button">IQ3_XXS (9.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_XS.gguf" class="link-button">IQ3_XS (10.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q3_K_S.gguf" class="link-button">Q3_K_S (10.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_S.gguf" class="link-button">IQ3_S (10.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ3_M.gguf" class="link-button">IQ3_M (10.8GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q3_K_M.gguf" class="link-button">Q3_K_M (11.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q3_K_L.gguf" class="link-button">Q3_K_L (12.5GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-IQ4_XS.gguf" class="link-button">IQ4_XS (12.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_0.gguf" class="link-button">Q4_0 (13.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_K_S.gguf" class="link-button">Q4_K_S (13.6GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_K_M.gguf" class="link-button">Q4_K_M (14.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q4_1.gguf" class="link-button">Q4_1 (15.0GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q5_K_S.gguf" class="link-button">Q5_K_S (16.4GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q5_K_M.gguf" class="link-button">Q5_K_M (16.9GB)</a>
<a href="https://huggingface.co/mradermacher/Broken-Tutu-24B-Unslop-v2.0-i1-GGUF/resolve/main/Broken-Tutu-24B-Unslop-v2.0.i1-Q6_K.gguf" class="link-button">Q6_K (19.4GB)</a>
</div>
<p><small>Notes: Q4_K_S/Q4_K_M recommended. IQ1_S/IQ1_M for extreme low VRAM. Q6_K for near-original quality.</small></p>
</div>
<div class="link-card">
<h3>EXL2</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/2.5bpw_H8" class="link-button">2.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/3.0bpw_H8" class="link-button">3.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/3.5bpw_H8" class="link-button">3.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/4.0bpw_H8" class="link-button">4.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/4.5bpw_H8" class="link-button">4.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/5.0bpw_H8" class="link-button">5.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/6.0bpw_H8" class="link-button">6.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL2/tree/8.0bpw_H8" class="link-button">8.0 bpw</a>
</div>
</div>
<div class="link-card">
<h3>EXL3</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/2.0bpw_H8" class="link-button">2.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/2.5bpw_H8" class="link-button">2.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/3.0bpw_H8" class="link-button">3.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/3.5bpw_H8" class="link-button">3.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/4.0bpw_H8" class="link-button">4.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/4.5bpw_H8" class="link-button">4.5 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/5.0bpw_H8" class="link-button">5.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/6.0bpw_H8" class="link-button">6.0 bpw</a>
<a href="https://huggingface.co/gecfdo/Broken-Tutu-24B-Unslop-v2.0-EXL3/tree/8.0bpw_H8" class="link-button">8.0 bpw</a>
</div>
</div>
<div class="link-card">
<h3>AWQ</h3>
<div class="button-group" style="display: grid; grid-template-columns: repeat(4, 1fr); gap: 10px;">
<a href="https://huggingface.co/collections/ReadyArt/broken-tutu-24b-unslop-v20-awq-6846724f5e05caced62cdf5c" class="link-button">Quants</a>
</div>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">⚠️ Ethical Considerations</h2>
<div class="disclaimer">
<p>This model enhances Broken-Tutu's unalignment:</p>
<ul>
<li>🚫 Zero moral or ethical constraints on generated content</li>
<li>🔞 Will enthusiastically engage with any NSFW scenario, no matter how extreme</li>
<li>💀 Characters maintain integrity - wholesome characters refuse appropriately, yanderes stab without hesitation</li>
<li>⚖️ Perfectly balanced between character authenticity and user freedom</li>
</ul>
</div>
</div>
<div class="section">
<h2 class="section-title">📜 Performance Notes</h2>
<ul>
<li>🔥 Maintains Omega's intensity with improved narrative coherence</li>
<li>📖 Excels at long-form multi-character scenarios</li>
<li>🧠 Superior instruction following with complex prompts</li>
<li>⚡ Reduced repetition and hallucination compared to v1.1</li>
<li>🎭 Uncanny ability to adapt to subtle prompt nuances</li>
<li>🩸 Incorporates Omega Darker's visceral descriptive power when appropriate</li>
<li>🖼️ Enhanced image understanding capabilities for multimodal interactions</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">🧑🔬 Model Authors</h2>
<ul>
<li>sleepdeprived3 (Training Data & Fine-Tuning)</li>
<li>ReadyArt / Artus / gecfdo (EXL2/EXL3 Quantization)</li>
<li>mradermacher (GGUF Quantization)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">☕ Support the Creators</h2>
<div class="button-group">
<a href="https://ko-fi.com/readyartsleep" class="link-button">Ko-fi</a>
<a href="https://discord.com/invite/Nbv9pQ88Xb" class="link-button">Beaver AI Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">🔖 License</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To accept full responsibility for all generated content</li>
<li>That you're at least 18+ years old</li>
<li>That the architects bear no responsibility for your corruption</li>
</ul>
</div>
</div>
|
mlx-community/Mistral-Small-3.2-24B-Instruct-2506-4bit
|
mlx-community
| 2025-06-21T23:23:56Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"mistral3",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"pt",
"it",
"ja",
"ko",
"ru",
"zh",
"ar",
"fa",
"id",
"ms",
"ne",
"pl",
"ro",
"sr",
"sv",
"tr",
"uk",
"vi",
"hi",
"bn",
"base_model:mlx-community/Mistral-Small-3.2-24B-Instruct-2506-bf16",
"base_model:quantized:mlx-community/Mistral-Small-3.2-24B-Instruct-2506-bf16",
"license:apache-2.0",
"4-bit",
"region:us"
] |
text-generation
| 2025-06-21T23:23:38Z |
---
language:
- en
- fr
- de
- es
- pt
- it
- ja
- ko
- ru
- zh
- ar
- fa
- id
- ms
- ne
- pl
- ro
- sr
- sv
- tr
- uk
- vi
- hi
- bn
license: apache-2.0
library_name: mlx
inference: false
base_model: mlx-community/Mistral-Small-3.2-24B-Instruct-2506-bf16
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
pipeline_tag: text-generation
tags:
- mlx
---
# mlx-community/Mistral-Small-3.2-24B-Instruct-2506-4bit
This model [mlx-community/Mistral-Small-3.2-24B-Instruct-2506-4bit](https://huggingface.co/mlx-community/Mistral-Small-3.2-24B-Instruct-2506-4bit) was
converted to MLX format from [mlx-community/Mistral-Small-3.2-24B-Instruct-2506-bf16](https://huggingface.co/mlx-community/Mistral-Small-3.2-24B-Instruct-2506-bf16)
using mlx-lm version **0.25.2**.
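For reference, a conversion of this kind can be reproduced with the mlx-lm CLI roughly as follows (a sketch; flag spellings per current mlx-lm, so double-check against your installed version):
```bash
pip install mlx-lm
mlx_lm.convert \
  --hf-path mlx-community/Mistral-Small-3.2-24B-Instruct-2506-bf16 \
  -q --q-bits 4
```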
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Mistral-Small-3.2-24B-Instruct-2506-4bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
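Alternatively, the model can be exercised from the command line with mlx-lm's bundled CLI (a sketch; verify flags with `mlx_lm.generate --help`):
```bash
mlx_lm.generate \
  --model mlx-community/Mistral-Small-3.2-24B-Instruct-2506-4bit \
  --prompt "hello"
```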
|
kaxap/mlx-gemma-3-12b-2bit
|
kaxap
| 2025-06-21T23:23:48Z | 15 | 0 |
mlx
|
[
"mlx",
"safetensors",
"gemma3",
"text-generation",
"conversational",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"license:gemma",
"2-bit",
"region:us"
] |
text-generation
| 2025-06-19T18:46:17Z |
---
license: gemma
library_name: mlx
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-12b-it
tags:
- mlx
---
# kaxap/mlx-gemma-3-12b-2bit
This model [kaxap/mlx-gemma-3-12b-2bit](https://huggingface.co/kaxap/mlx-gemma-3-12b-2bit) was
converted to MLX format from [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it)
using mlx-lm version **0.25.2**.
# Benchmarks
| Task | acc | acc (stderr) | acc_norm | acc_norm (stderr) |
| :--- | ---: | ---: | ---: | ---: |
| arc_challenge | 0.2449 | 0.0126 | 0.2654 | 0.0129 |
| arc_easy | 0.3540 | 0.0098 | 0.3329 | 0.0097 |
| boolq | 0.4618 | 0.0087 | | |
| hellaswag | 0.3004 | 0.0046 | 0.3295 | 0.0047 |
| openbookqa | 0.1940 | 0.0177 | 0.3200 | 0.0209 |
| piqa | 0.5713 | 0.0115 | 0.5577 | 0.0116 |
| winogrande | 0.5249 | 0.0140 | | |
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("kaxap/mlx-gemma-3-12b-2bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
Feijo/dqn-SpaceInvadersNoFrameskip-v4
|
Feijo
| 2025-06-21T23:20:29Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-21T23:19:58Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 569.00 +/- 139.80
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Feijo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Feijo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Feijo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
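Outside the RL Zoo, the checkpoint can also be loaded directly with stable-baselines3. A minimal sketch, assuming the repo's artifact is named `dqn-SpaceInvadersNoFrameskip-v4.zip` (the RL Zoo default; adjust if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Download the checkpoint from the Hub (the filename is an assumption).
checkpoint = load_from_hub(
    repo_id="Feijo/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
# buffer_size=1 avoids allocating the full 100k replay buffer for inference.
model = DQN.load(checkpoint, buffer_size=1)

# Recreate the training-time preprocessing: Atari wrappers + 4-frame stack.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```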
|
dokodesuka/mms-300m-1130-forced-aligner
|
dokodesuka
| 2025-06-21T23:09:23Z | 0 | 0 | null |
[
"pytorch",
"safetensors",
"wav2vec2",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-06-21T23:02:27Z |
---
license: cc-by-nc-4.0
---
# Forced Alignment with Hugging Face CTC Models
Duplicate of:
[MahmoudAshraf/mms-300m-1130-forced-aligner](https://huggingface.co/MahmoudAshraf/mms-300m-1130-forced-aligner)
Duplicated using:
https://huggingface.co/spaces/osanseviero/repo_duplicator
|
dtadpole/KernelCoder-4B-AWQ_20250621-160317
|
dtadpole
| 2025-06-21T23:05:04Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"triton-ag",
"unsloth",
"lora",
"en",
"base_model:Qwen/Qwen3-4B-AWQ",
"base_model:adapter:Qwen/Qwen3-4B-AWQ",
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T23:04:54Z |
---
library_name: peft
base_model: Qwen/Qwen3-4B-AWQ
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
- triton-ag
- unsloth
- lora
---
# dtadpole/KernelCoder-4B-AWQ_20250621-160317
This model is a fine-tuned version of [Qwen/Qwen3-4B-AWQ](https://huggingface.co/Qwen/Qwen3-4B-AWQ) using Unsloth and LoRA.
## Model Details
- **Base Model:** Qwen/Qwen3-4B-AWQ
- **Fine-tuning Method:** LoRA (Low-Rank Adaptation)
- **Max Sequence Length:** 8192
- **Training Examples:** 24
- **LoRA Rank:** 64
- **LoRA Alpha:** 64
## Training Configuration
- **Epochs:** 1
- **Learning Rate:** 3e-05
- **Batch Size:** 1
- **Gradient Accumulation Steps:** 1
- **Best Loss:** 0.3862
## Usage
```python
from unsloth import FastLanguageModel
import torch
# Load model
model, tokenizer = FastLanguageModel.from_pretrained(
model_name="dtadpole/KernelCoder-4B-AWQ_20250621-160317",
max_seq_length=8192,
dtype=None,
load_in_4bit=True,
)
# Enable inference mode
FastLanguageModel.for_inference(model)
# Format your prompt
messages = [
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Your question here"}
]
formatted_prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
# Generate
inputs = tokenizer(formatted_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
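An alternative loading path uses plain transformers + PEFT instead of Unsloth (a sketch; assumes the adapter applies cleanly on top of the AWQ base, which requires `autoawq` to be installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the AWQ base model, then attach this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B-AWQ", device_map="auto")
model = PeftModel.from_pretrained(base, "dtadpole/KernelCoder-4B-AWQ_20250621-160317")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-AWQ")
```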
## Training Data
This model was fine-tuned on processed conversation experiences for improved performance on specific tasks.
## Limitations
- This is a LoRA adapter that requires the base model to function
- Performance may vary depending on the specific use case
- The model inherits any limitations from the base model
## Framework Versions
- Unsloth: 2025.6.1
- Transformers: 4.52.4
- PyTorch: 2.7.0
- PEFT: Latest
|
versaceeros/7cd5cbd9-cd04-44d0-8917-855fe269634f
|
versaceeros
| 2025-06-21T23:04:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T22:47:07Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
steampunque/Mistral-Small-3.2-24B-Instruct-2506-Hybrid-GGUF
|
steampunque
| 2025-06-21T22:55:21Z | 0 | 0 | null |
[
"gguf",
"Mistral",
"Mistral-Small",
"GGUF",
"quantized",
"4-bit",
"base_model:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"base_model:quantized:mistralai/Mistral-Small-3.2-24B-Instruct-2506",
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T21:58:38Z |
---
license: apache-2.0
base_model: mistralai/Mistral-Small-3.2-24B-Instruct-2506
base_model_relation: quantized
tags:
- Mistral
- Mistral-Small
- GGUF
- quantized
- 4-bit
---
## Llama.cpp hybrid layer quantization of Mistral-Small-3.2-24B-Instruct-2506 by mistralai
Original model: https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506
The hybrid quant employs different quantization levels on a per-layer basis to increase the flexibility of trading off performance vs. file size. Fewer parameter bits are used at deep layers
and more bits at cortex layers to simultaneously optimize quantized size and model performance.
This quant was optimized for size and performance similar to an IQ4_XS quant, while using all K quants
to increase processing efficiency on old GPUs or CPUs.
The layer quant is as follows:
```
Q4_K_H:
LAYER_TYPES='[
[0 ,"Q4_K_M"],[1 ,"Q4_K_S"],[2 ,"Q3_K_M"],[3 ,"Q3_K_M"],[4 ,"Q3_K_M"],[5 ,"Q3_K_M"],[6 ,"Q3_K_M"],[7 ,"Q3_K_M"],
[8 ,"Q3_K_M"],[9 ,"Q3_K_M"],[10,"Q3_K_M"],[11,"Q3_K_M"],[12,"Q3_K_M"],[13,"Q3_K_M"],[14,"Q3_K_M"],[15,"Q3_K_M"],
[16,"Q3_K_L"],[17,"Q3_K_M"],[18,"Q3_K_L"],[19,"Q3_K_M"],[20,"Q3_K_L"],[21,"Q3_K_M"],[22,"Q3_K_L"],[23,"Q3_K_M"],
[24,"Q3_K_L"],[25,"Q3_K_L"],[26,"Q3_K_L"],[27,"Q3_K_L"],[28,"Q4_K_S"],[29,"Q3_K_L"],[30,"Q4_K_S"],[31,"Q3_K_L"],
[32,"Q4_K_S"],[33,"Q4_K_S"],[34,"Q4_K_S"],[35,"Q4_K_S"],[36,"Q4_K_M"],[37,"Q5_K_S"],[38,"Q5_K_M"],[39,"Q6_K"]
]'
FLAGS="--token-embedding-type Q4_K --output-tensor-type Q6_K --layer-types-high"
```
This quant was optimized for good reasoning performance on a select set of test prompts.
Comparison:
Quant | size | PPL | Comment
---------|---------|------|-----------
Q4_K_H | 12.7e9 | 5.45 | slightly smaller than IQ4_XS, similar performance
IQ4_XS | 12.9e9 | 5.36 | not tested, should work well
Usage:
This is a vision-capable model. It can be used together with its multimedia projector layers to process image and text inputs
and generate text outputs. The mmproj file is made available in this repository. To test vision mode, follow the docs in the mtmd
README in the tools directory of the llama.cpp source tree: https://github.com/ggml-org/llama.cpp/blob/master/tools/mtmd/README.md
To run it on a 12G VRAM GPU, use approximately --ngl 32. Generation speed is still quite good with partial offload.
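As a concrete starting point, text-only and vision invocations might look like the following (a sketch; flag spellings per current llama.cpp, so check `--help` on your build):
```bash
# Text-only, partial offload on a 12G GPU
llama-cli -m Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf \
  -ngl 32 -c 8192 -p "Explain hybrid layer quantization in one paragraph."

# Vision mode via the mtmd tool, using the mmproj file from this repo
llama-mtmd-cli -m Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf \
  --mmproj Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf \
  --image photo.jpg -ngl 32 -p "Describe this image."
```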
Benchmarks:
A full set of benchmarks for the model will eventually be given here: https://huggingface.co/spaces/steampunque/benchlm
## Download the file from below:
| Link | Type | Size/e9 B | Notes |
|------|------|-----------|-------|
| [Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf](https://huggingface.co/steampunque/Mistral-Small-3.2-24B-Instruct-2506-Hybrid-GGUF/resolve/main/Mistral-Small-3.2-24B-Instruct-2506.Q4_K_H.gguf) | Q4_K_H | 12.7e9 B | ~IQ4_XS quality/size |
| [Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf](https://huggingface.co/steampunque/Mistral-Small-3.2-24B-Instruct-2506-Hybrid-GGUF/resolve/main/Mistral-Small-3.2-24B-Instruct-2506.mmproj.gguf) | mmproj | 0.88e9 B | multimedia projector |
A discussion thread about the hybrid layer quant approach can be found here on the llama.cpp git repository:
https://github.com/ggml-org/llama.cpp/discussions/13040
|
BootesVoid/cmc4z6ohj023tbfiftifxfyok_cmc6qq9iy07h4bfific4k1vyb
|
BootesVoid
| 2025-06-21T22:47:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-21T22:47:36Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SERENA
---
# Cmc4Z6Ohj023Tbfiftifxfyok_Cmc6Qq9Iy07H4Bfific4K1Vyb
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SERENA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "SERENA",
"lora_weights": "https://huggingface.co/BootesVoid/cmc4z6ohj023tbfiftifxfyok_cmc6qq9iy07h4bfific4k1vyb/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc4z6ohj023tbfiftifxfyok_cmc6qq9iy07h4bfific4k1vyb', weight_name='lora.safetensors')
image = pipeline('SERENA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc4z6ohj023tbfiftifxfyok_cmc6qq9iy07h4bfific4k1vyb/discussions) to add images that show off what you’ve made with this LoRA.
|
akar49/VIT_braintumor_classifier
|
akar49
| 2025-06-21T22:46:03Z | 0 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-21T22:45:48Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: VIT_braintumor_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# VIT_braintumor_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0945
- Validation Loss: 1.7241
- Train Accuracy: 0.6974
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'SGD', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': np.float32(0.01), 'momentum': 0.0, 'nesterov': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.7946 | 1.1484 | 0.6272 | 0 |
| 0.3246 | 1.1792 | 0.6769 | 1 |
| 0.2266 | 1.2812 | 0.6842 | 2 |
| 0.1841 | 1.5085 | 0.6754 | 3 |
| 0.1589 | 1.4224 | 0.6944 | 4 |
| 0.1244 | 1.4229 | 0.6901 | 5 |
| 0.1174 | 1.4858 | 0.6784 | 6 |
| 0.1133 | 1.4221 | 0.6974 | 7 |
| 0.1026 | 1.4273 | 0.7003 | 8 |
| 0.1083 | 1.5406 | 0.7003 | 9 |
| 0.1038 | 1.6223 | 0.6974 | 10 |
| 0.0876 | 1.5613 | 0.6959 | 11 |
| 0.1018 | 1.4540 | 0.7149 | 12 |
| 0.0808 | 1.4853 | 0.7193 | 13 |
| 0.0945 | 1.7241 | 0.6974 | 14 |
### Framework versions
- Transformers 4.52.4
- TensorFlow 2.18.0
- Datasets 3.6.0
- Tokenizers 0.21.1
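### Inference example
A minimal inference sketch (assumes this repo's TF weights and a compatible image processor load via the transformers Auto classes; unverified against this exact checkpoint):
```python
from PIL import Image
from transformers import AutoImageProcessor, TFAutoModelForImageClassification

repo = "akar49/VIT_braintumor_classifier"
processor = AutoImageProcessor.from_pretrained(repo)
model = TFAutoModelForImageClassification.from_pretrained(repo)

image = Image.open("scan.png").convert("RGB")  # hypothetical input file
inputs = processor(images=image, return_tensors="tf")
logits = model(**inputs).logits
print(model.config.id2label[int(logits.numpy().argmax(-1)[0])])
```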
|
Nitral-AI/SekhmetX-9B-v0.1-test
|
Nitral-AI
| 2025-06-21T22:45:53Z | 71 | 2 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T22:45:47Z |
---
base_model:
- Nitral-AI/Salesforce_xgen-small-9B-rp-v0.17a
- Nitral-AI/Salesforce_xgen-small-9B-instruct-v0.16b
library_name: transformers
tags:
- mergekit
- merge
---
# A ton of training that I ended up deleting; I wasn't super happy with the final model. It will probably be archived at a later date.
### Models Merged
The following models were included in the merge:
* [Nitral-AI/Salesforce_xgen-small-9B-rp-v0.17a](https://huggingface.co/Nitral-AI/Salesforce_xgen-small-9B-rp-v0.17a)
* [Nitral-AI/Salesforce_xgen-small-9B-instruct-v0.16b](https://huggingface.co/Nitral-AI/Salesforce_xgen-small-9B-instruct-v0.16b)
### The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Nitral-AI/Salesforce_xgen-small-9B-rp-v0.17a
layer_range: [0, 45]
- model: Nitral-AI/Salesforce_xgen-small-9B-instruct-v0.16b
layer_range: [0, 45]
merge_method: slerp
base_model: Nitral-AI/Salesforce_xgen-small-9B-rp-v0.17a
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
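To reproduce a merge from this config, the mergekit CLI can be invoked roughly as follows (a sketch; assumes mergekit is installed and the YAML above is saved as `config.yaml`):
```bash
pip install mergekit
mergekit-yaml config.yaml ./SekhmetX-9B-merged --cuda
```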
|
akar49/VIT_fourclass_classifier
|
akar49
| 2025-06-21T22:45:31Z | 0 | 0 |
transformers
|
[
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-21T22:45:17Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_keras_callback
model-index:
- name: VIT_fourclass_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# VIT_fourclass_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0945
- Validation Loss: 1.7241
- Train Accuracy: 0.6974
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'SGD', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': np.float32(0.01), 'momentum': 0.0, 'nesterov': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.7946 | 1.1484 | 0.6272 | 0 |
| 0.3246 | 1.1792 | 0.6769 | 1 |
| 0.2266 | 1.2812 | 0.6842 | 2 |
| 0.1841 | 1.5085 | 0.6754 | 3 |
| 0.1589 | 1.4224 | 0.6944 | 4 |
| 0.1244 | 1.4229 | 0.6901 | 5 |
| 0.1174 | 1.4858 | 0.6784 | 6 |
| 0.1133 | 1.4221 | 0.6974 | 7 |
| 0.1026 | 1.4273 | 0.7003 | 8 |
| 0.1083 | 1.5406 | 0.7003 | 9 |
| 0.1038 | 1.6223 | 0.6974 | 10 |
| 0.0876 | 1.5613 | 0.6959 | 11 |
| 0.1018 | 1.4540 | 0.7149 | 12 |
| 0.0808 | 1.4853 | 0.7193 | 13 |
| 0.0945 | 1.7241 | 0.6974 | 14 |
### Framework versions
- Transformers 4.52.4
- TensorFlow 2.18.0
- Datasets 3.6.0
- Tokenizers 0.21.1
|
jxie/autorf-zero_shot-motion_predictor
|
jxie
| 2025-06-21T22:42:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"motion_predictor",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T22:41:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SicariusSicariiStuff/Impish_Magic_24B_EXL2_6.5bpw
|
SicariusSicariiStuff
| 2025-06-21T22:42:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Magic_24B",
"base_model:quantized:SicariusSicariiStuff/Impish_Magic_24B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] |
text-generation
| 2025-06-21T17:49:40Z |
---
base_model: SicariusSicariiStuff/Impish_Magic_24B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
mradermacher/r1-1776-i1-GGUF
|
mradermacher
| 2025-06-21T22:38:37Z | 0 | 0 |
transformers
|
[
"transformers",
"en",
"base_model:perplexity-ai/r1-1776",
"base_model:finetune:perplexity-ai/r1-1776",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-09T23:40:50Z |
---
base_model: perplexity-ai/r1-1776
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/perplexity-ai/r1-1776
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/r1-1776-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [PART 1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ1_S.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ1_S.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ1_S.gguf.part3of3) | i1-IQ1_S | 133.8 | for the desperate |
| [PART 1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ1_M.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ1_M.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ1_M.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ1_M.gguf.part4of4) | i1-IQ1_M | 149.2 | mostly desperate |
| [PART 1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_XXS.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_XXS.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_XXS.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_XXS.gguf.part4of4) | i1-IQ2_XXS | 174.7 | |
| [PART 1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_XS.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_XS.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_XS.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_XS.gguf.part4of4) | i1-IQ2_XS | 195.3 | |
| [PART 1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_S.gguf.part1of4) [PART 2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_S.gguf.part2of4) [PART 3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_S.gguf.part3of4) [PART 4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_S.gguf.part4of4) | i1-IQ2_S | 197.2 | |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_M.gguf.part1of5) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_M.gguf.part2of5) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_M.gguf.part3of5) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_M.gguf.part4of5) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ2_M.gguf.part5of5) | i1-IQ2_M | 217.7 | |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q2_K_S.gguf.part1of5) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q2_K_S.gguf.part2of5) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q2_K_S.gguf.part3of5) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q2_K_S.gguf.part4of5) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q2_K_S.gguf.part5of5) | i1-Q2_K_S | 224.9 | very low quality |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q2_K.gguf.part1of5) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q2_K.gguf.part2of5) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q2_K.gguf.part3of5) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q2_K.gguf.part4of5) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q2_K.gguf.part5of5) | i1-Q2_K | 244.2 | IQ3_XXS probably better |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XXS.gguf.part1of6) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XXS.gguf.part2of6) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XXS.gguf.part3of6) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XXS.gguf.part4of6) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XXS.gguf.part5of6) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XXS.gguf.part6of6) | i1-IQ3_XXS | 258.1 | lower quality |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XS.gguf.part1of6) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XS.gguf.part2of6) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XS.gguf.part3of6) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XS.gguf.part4of6) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XS.gguf.part5of6) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_XS.gguf.part6of6) | i1-IQ3_XS | 273.0 | |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_S.gguf.part6of6) | i1-IQ3_S | 289.3 | beats Q3_K* |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_S.gguf.part1of6) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_S.gguf.part2of6) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_S.gguf.part3of6) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_S.gguf.part4of6) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_S.gguf.part5of6) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_S.gguf.part6of6) | i1-Q3_K_S | 289.3 | IQ3_XS probably better |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_M.gguf.part1of6) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_M.gguf.part2of6) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_M.gguf.part3of6) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_M.gguf.part4of6) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_M.gguf.part5of6) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ3_M.gguf.part6of6) | i1-IQ3_M | 292.3 | |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_M.gguf.part1of7) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_M.gguf.part2of7) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_M.gguf.part3of7) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_M.gguf.part4of7) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_M.gguf.part5of7) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_M.gguf.part6of7) [P7](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_M.gguf.part7of7) | i1-Q3_K_M | 319.4 | IQ3_S probably better |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_L.gguf.part1of8) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_L.gguf.part2of8) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_L.gguf.part3of8) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_L.gguf.part4of8) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_L.gguf.part5of8) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_L.gguf.part6of8) [P7](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_L.gguf.part7of8) [P8](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q3_K_L.gguf.part8of8) | i1-Q3_K_L | 347.6 | IQ3_M probably better |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ4_XS.gguf.part1of8) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ4_XS.gguf.part2of8) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ4_XS.gguf.part3of8) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ4_XS.gguf.part4of8) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ4_XS.gguf.part5of8) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ4_XS.gguf.part6of8) [P7](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ4_XS.gguf.part7of8) [P8](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-IQ4_XS.gguf.part8of8) | i1-IQ4_XS | 357.2 | |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_0.gguf.part1of8) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_0.gguf.part2of8) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_0.gguf.part3of8) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_0.gguf.part4of8) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_0.gguf.part5of8) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_0.gguf.part6of8) [P7](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_0.gguf.part7of8) [P8](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_0.gguf.part8of8) | i1-Q4_0 | 379.1 | fast, low quality |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_S.gguf.part1of8) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_S.gguf.part2of8) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_S.gguf.part3of8) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_S.gguf.part4of8) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_S.gguf.part5of8) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_S.gguf.part6of8) [P7](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_S.gguf.part7of8) [P8](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_S.gguf.part8of8) | i1-Q4_K_S | 380.2 | optimal size/speed/quality |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_M.gguf.part1of9) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_M.gguf.part2of9) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_M.gguf.part3of9) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_M.gguf.part4of9) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_M.gguf.part5of9) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_M.gguf.part6of9) [P7](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_M.gguf.part7of9) [P8](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_M.gguf.part8of9) [P9](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_K_M.gguf.part9of9) | i1-Q4_K_M | 404.6 | fast, recommended |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_1.gguf.part1of9) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_1.gguf.part2of9) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_1.gguf.part3of9) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_1.gguf.part4of9) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_1.gguf.part5of9) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_1.gguf.part6of9) [P7](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_1.gguf.part7of9) [P8](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_1.gguf.part8of9) [P9](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q4_1.gguf.part9of9) | i1-Q4_1 | 420.0 | |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_S.gguf.part01of10) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_S.gguf.part02of10) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_S.gguf.part03of10) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_S.gguf.part04of10) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_S.gguf.part05of10) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_S.gguf.part06of10) [P7](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_S.gguf.part07of10) [P8](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_S.gguf.part08of10) [P9](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_S.gguf.part09of10) [P10](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_S.gguf.part10of10) | i1-Q5_K_S | 461.9 | |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_M.gguf.part01of10) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_M.gguf.part02of10) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_M.gguf.part03of10) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_M.gguf.part04of10) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_M.gguf.part05of10) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_M.gguf.part06of10) [P7](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_M.gguf.part07of10) [P8](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_M.gguf.part08of10) [P9](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_M.gguf.part09of10) [P10](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q5_K_M.gguf.part10of10) | i1-Q5_K_M | 475.5 | |
| [P1](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part01of12) [P2](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part02of12) [P3](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part03of12) [P4](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part04of12) [P5](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part05of12) [P6](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part06of12) [P7](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part07of12) [P8](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part08of12) [P9](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part09of12) [P10](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part10of12) [P11](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part11of12) [P12](https://huggingface.co/mradermacher/r1-1776-i1-GGUF/resolve/main/r1-1776.i1-Q6_K.gguf.part12of12) | i1-Q6_K | 551.0 | practically like static Q6_K |
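The multi-part files above must be reassembled into a single GGUF before loading. A minimal sketch (not part of the original card; file names taken from the i1-Q4_K_M row above):
```python
# Download the nine parts and concatenate them, in order, into one GGUF file.
from huggingface_hub import hf_hub_download
import shutil

repo_id = "mradermacher/r1-1776-i1-GGUF"
part_names = [f"r1-1776.i1-Q4_K_M.gguf.part{i}of9" for i in range(1, 10)]

with open("r1-1776.i1-Q4_K_M.gguf", "wb") as merged:
    for name in part_names:
        part_path = hf_hub_download(repo_id=repo_id, filename=name)
        with open(part_path, "rb") as part:
            shutil.copyfileobj(part, merged)  # append this part's bytes
```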
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
BootesVoid/cmc6842mf05a1bfifo2n3mmhw_cmc6h3fw606b5bfifmwvq3p3y
|
BootesVoid
| 2025-06-21T22:33:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-21T22:33:55Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ULTRAREALISTIC
---
# Cmc6842Mf05A1Bfifo2N3Mmhw_Cmc6H3Fw606B5Bfifmwvq3P3Y
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ULTRAREALISTIC` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ULTRAREALISTIC",
"lora_weights": "https://huggingface.co/BootesVoid/cmc6842mf05a1bfifo2n3mmhw_cmc6h3fw606b5bfifmwvq3p3y/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc6842mf05a1bfifo2n3mmhw_cmc6h3fw606b5bfifmwvq3p3y', weight_name='lora.safetensors')
image = pipeline('ULTRAREALISTIC').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
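As an illustration of adapter weighting and fusing, here is a hedged sketch assuming a freshly created pipeline as above (the adapter name `ultrarealistic` and the 0.8 scale are arbitrary choices, not from the original card):
```python
# Load the LoRA under an explicit adapter name, scale its influence,
# then optionally fuse it into the base weights for faster inference.
pipeline.load_lora_weights(
    'BootesVoid/cmc6842mf05a1bfifo2n3mmhw_cmc6h3fw606b5bfifmwvq3p3y',
    weight_name='lora.safetensors',
    adapter_name='ultrarealistic',
)
pipeline.set_adapters(['ultrarealistic'], adapter_weights=[0.8])
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('ULTRAREALISTIC').images[0]
```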
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc6842mf05a1bfifo2n3mmhw_cmc6h3fw606b5bfifmwvq3p3y/discussions) to add images that show off what you’ve made with this LoRA.
|
BeardedMonster/Apollo-0.5B
|
BeardedMonster
| 2025-06-21T22:31:03Z | 191 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-05-28T00:11:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
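In the absence of official instructions, here is a minimal hedged sketch that assumes a standard conversational causal-LM checkpoint (the repo tags suggest a 4-bit bitsandbytes quantization, so `bitsandbytes` may be required; the chat usage is an assumption):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BeardedMonster/Apollo-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```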
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Naphon/pythia-2.8b-thai-base
|
Naphon
| 2025-06-21T22:18:24Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:EleutherAI/pythia-2.8b-deduped",
"base_model:adapter:EleutherAI/pythia-2.8b-deduped",
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T13:28:47Z |
---
library_name: peft
license: apache-2.0
base_model: EleutherAI/pythia-2.8b-deduped
tags:
- generated_from_trainer
model-index:
- name: pythia-2.8b-thai-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pythia-2.8b-thai-base
This model is a fine-tuned version of [EleutherAI/pythia-2.8b-deduped](https://huggingface.co/EleutherAI/pythia-2.8b-deduped) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 10000000
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
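The card omits a usage example; here is a minimal hedged sketch for attaching the adapter to its base model (this assumes the repo holds a standard PEFT adapter, as the metadata indicates):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-2.8b-deduped")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-2.8b-deduped")
model = PeftModel.from_pretrained(base, "Naphon/pythia-2.8b-thai-base")
```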
|
mob2711/llama_3b_1k5
|
mob2711
| 2025-06-21T22:11:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T22:11:11Z |
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mob2711
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
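No inference example is provided; a hedged sketch using Unsloth's loader (this assumes the repo is loadable with `FastLanguageModel`, mirroring the 4-bit base checkpoint; the sequence length is an arbitrary choice):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mob2711/llama_3b_1k5",
    max_seq_length=2048,  # illustrative; pick to match your use case
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
```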
|
pikiton/fine-tuned-marian
|
pikiton
| 2025-06-21T22:06:03Z | 14 | 0 | null |
[
"safetensors",
"marian",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-en-ru",
"base_model:finetune:Helsinki-NLP/opus-mt-en-ru",
"license:apache-2.0",
"region:us"
] | null | 2025-05-11T22:30:27Z |
---
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-ru
tags:
- generated_from_trainer
model-index:
- name: fine-tuned-marian
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-marian
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ru](https://huggingface.co/Helsinki-NLP/opus-mt-en-ru) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.34.0
- Pytorch 2.7.1+cpu
- Datasets 3.6.0
- Tokenizers 0.14.1
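No usage example is given; a minimal hedged sketch with the standard translation pipeline (English-to-Russian, per the base model):
```python
from transformers import pipeline

translator = pipeline("translation", model="pikiton/fine-tuned-marian")
print(translator("The weather is nice today.")[0]["translation_text"])
```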
|
SAadettin-BERber/whisper_small_model_atc_10
|
SAadettin-BERber
| 2025-06-21T22:04:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-21T21:57:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
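In the absence of official instructions, a minimal hedged sketch using the ASR pipeline (this assumes the repo holds a complete fine-tuned Whisper checkpoint, as the tags suggest; the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="SAadettin-BERber/whisper_small_model_atc_10",
)
print(asr("recording.wav")["text"])  # replace with a local audio file
```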
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FISHCAL/FISHCALPOST
|
FISHCAL
| 2025-06-21T22:01:02Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-21T22:01:01Z |
---
license: apache-2.0
---
|
mradermacher/Arch-Agent-3B-GGUF
|
mradermacher
| 2025-06-21T22:00:07Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:katanemo/Arch-Agent-3B",
"base_model:quantized:katanemo/Arch-Agent-3B",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2025-06-21T19:25:28Z |
---
base_model: katanemo/Arch-Agent-3B
language:
- en
library_name: transformers
license: other
license_link: https://huggingface.co/katanemo/Arch-Agent-3B/blob/main/LICENSE
license_name: katanemo-research
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/katanemo/Arch-Agent-3B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Arch-Agent-3B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
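For example, a hedged sketch with llama-cpp-python (the quant choice and chat usage are illustrative, not from the original card):
```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Arch-Agent-3B-GGUF",
    filename="Arch-Agent-3B.Q4_K_M.gguf",  # one of the quants listed below
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```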
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.Q2_K.gguf) | Q2_K | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.Q3_K_M.gguf) | Q3_K_M | 1.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.Q3_K_L.gguf) | Q3_K_L | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.Q4_K_S.gguf) | Q4_K_S | 1.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.Q4_K_M.gguf) | Q4_K_M | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.Q5_K_S.gguf) | Q5_K_S | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.Q5_K_M.gguf) | Q5_K_M | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.Q6_K.gguf) | Q6_K | 2.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.Q8_0.gguf) | Q8_0 | 3.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Arch-Agent-3B-GGUF/resolve/main/Arch-Agent-3B.f16.gguf) | f16 | 6.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Marwan-Kasem/Whisper-Medium
|
Marwan-Kasem
| 2025-06-21T21:55:24Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-medium",
"base_model:adapter:openai/whisper-medium",
"region:us"
] | null | 2025-06-21T21:53:42Z |
---
base_model: openai/whisper-medium
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
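No code is provided; a minimal hedged sketch for attaching the adapter to its base model (assumes a standard PEFT adapter on `openai/whisper-medium`, per the metadata):
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
processor = WhisperProcessor.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "Marwan-Kasem/Whisper-Medium")
```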
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
mezzu-fun-viral-video/mezzu.fun.viral.video.Leaked.ORIGINAL
|
mezzu-fun-viral-video
| 2025-06-21T21:54:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T21:54:01Z |
Debate begins over digital privacy after alleged private video of Deekila Sherpa goes viral
The circumstances surrounding the video's leak remain unclear
A leaked private video allegedly featuring Deekila Sherpa and Aniket Lama, popular stars from MTV Splitsvilla X5, has gone viral, igniting discussions about privacy and ethics in the digital age. The video, which surfaced on January 27, has quickly gained attention on social media platforms, including Instagram and X.
|
Disya/All-Q3-8B-RP-0625
|
Disya
| 2025-06-21T21:50:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:GreenerPastures/Bald-Beaver-8B",
"base_model:merge:GreenerPastures/Bald-Beaver-8B",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:merge:Qwen/Qwen3-8B-Base",
"base_model:allura-org/Q3-8B-Kintsugi",
"base_model:merge:allura-org/Q3-8B-Kintsugi",
"base_model:allura-org/remnant-qwen3-8b",
"base_model:merge:allura-org/remnant-qwen3-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T21:34:00Z |
---
base_model:
- GreenerPastures/Bald-Beaver-8B
- allura-org/remnant-qwen3-8b
- allura-org/Q3-8B-Kintsugi
- Qwen/Qwen3-8B-Base
library_name: transformers
tags:
- mergekit
- merge
---
Potentially one of the best 8B models for roleplay, if you find the right settings to overcome its occasional repetition. (Fair warning: I have not had much luck with settings myself...)
# All-Q3-8B-RP-0625
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method, with Qwen/Qwen3-8B-Base as the base model.
### Models Merged
The following models were included in the merge:
* GreenerPastures/Bald-Beaver-8B
* allura-org/remnant-qwen3-8b
* allura-org/Q3-8B-Kintsugi
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: Qwen/Qwen3-8B-Base
dtype: bfloat16
models:
- model: GreenerPastures/Bald-Beaver-8B
parameters:
weight: 0.2
- model: allura-org/Q3-8B-Kintsugi
parameters:
weight: 0.4
- model: allura-org/remnant-qwen3-8b
parameters:
weight: 0.4
parameters:
density: 0.35
```
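The card gives no inference example; a minimal hedged sketch for loading the merged model with transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Disya/All-Q3-8B-RP-0625"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
```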
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-8_8734
|
luckeciano
| 2025-06-21T21:48:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-21T20:22:37Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-8_6734
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-8_6734
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskGlobal-1e-8_6734", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/lighzsbk)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zletpm/Mistral-Small-3.2-24B-Instruct-2506-Text-Only-MLX-9bit
|
zletpm
| 2025-06-21T21:47:20Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only",
"base_model:quantized:anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only",
"license:apache-2.0",
"8-bit",
"region:us"
] |
text-generation
| 2025-06-21T18:25:55Z |
---
license: apache-2.0
base_model:
- anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only
pipeline_tag: text-generation
---
This model, zletpm/Mistral-Small-3.2-24B-Instruct-2506-Text-Only-4.5bit, was converted to MLX format from [anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only](https://huggingface.co/anthracite-core/Mistral-Small-3.2-24B-Instruct-2506-Text-Only) using mlx-lm version 0.25.2.
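A minimal hedged usage sketch with mlx-lm's Python API (the prompt is illustrative):
```python
from mlx_lm import load, generate

model, tokenizer = load("zletpm/Mistral-Small-3.2-24B-Instruct-2506-Text-Only-MLX-9bit")
print(generate(model, tokenizer, prompt="Hello!", max_tokens=64))
```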
|
Feijo/Taxi-V3
|
Feijo
| 2025-06-21T21:39:04Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-21T21:39:01Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-V3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.74
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # the Deep RL course utilities use the Gymnasium API

# load_from_hub is the pickle-loading helper from the Hugging Face Deep RL
# course notebooks (not a library import)
model = load_from_hub(repo_id="Feijo/Taxi-V3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
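As a further hedged sketch, a greedy rollout of the loaded policy (this assumes the pickled dict stores the table under the course's usual `"qtable"` key and a Gymnasium-style API):
```python
import numpy as np

state, info = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the Q-table
    state, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
```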
|
kamal-kaur-ORIGINAL-X-VIRAL/sex.viral.original.sex.kamal.kaur.viral
|
kamal-kaur-ORIGINAL-X-VIRAL
| 2025-06-21T21:39:00Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T21:38:37Z |
Debate begins over digital privacy after alleged private video of Deekila Sherpa goes viral
The circumstances surrounding the video's leak remain unclear
A leaked private video allegedly featuring Deekila Sherpa and Aniket Lama, popular stars from MTV Splitsvilla X5, has gone viral, igniting discussions about privacy and ethics in the digital age. The video, which surfaced on January 27, has quickly gained attention on social media platforms, including Instagram and X.
|
Feijo/q-FrozenLake-v1-4x4-noSlippery
|
Feijo
| 2025-06-21T21:34:48Z | 0 | 0 | null |
[
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-21T21:34:45Z |
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the Deep RL course utilities use the Gymnasium API

# load_from_hub is the pickle-loading helper from the Hugging Face Deep RL
# course notebooks (not a library import)
model = load_from_hub(repo_id="Feijo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc.)
env = gym.make(model["env_id"])
```
|
viral-video-Leaked/kamal.kaur.X.VIRAL.Video.FuLL.original.Leaked
|
viral-video-Leaked
| 2025-06-21T21:32:32Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-21T21:31:22Z |
Debate begins over digital privacy after alleged private video of Deekila Sherpa goes viral
The circumstances surrounding the video's leak remain unclear
A leaked private video allegedly featuring Deekila Sherpa and Aniket Lama, popular stars from MTV Splitsvilla X5, has gone viral, igniting discussions about privacy and ethics in the digital age. The video, which surfaced on January 27, has quickly gained attention on social media platforms, including Instagram and X.
|