modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-27 12:28:27) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 533 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-27 12:28:17) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---
23ikram/chat
|
23ikram
| 2025-06-19T11:20:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T11:20:35Z |
---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 23ikram
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
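The card gives no usage code; a minimal inference sketch, assuming the repo holds a standard transformers text-generation checkpoint (it may instead contain 4-bit Unsloth adapter weights, in which case loading differs):
```python
# Hedged sketch, not from the card: standard pipeline loading is assumed.
from transformers import pipeline

generator = pipeline("text-generation", model="23ikram/chat")
messages = [{"role": "user", "content": "Hello!"}]  # illustrative prompt
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```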
|
nnilayy/dreamer-dominance-multi-classification-Kfold-5
|
nnilayy
| 2025-06-19T11:19:32Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T11:19:30Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
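Until the code link is filled in, a hypothetical sketch of how `PyTorchModelHubMixin` models are typically loaded; the class name `DreamerClassifier` and its constructor arguments are placeholders, since the real module is not published:
```python
# Hypothetical sketch: the real class lives in the authors' unpublished code.
import torch
from huggingface_hub import PyTorchModelHubMixin

class DreamerClassifier(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128, num_classes: int = 3):
        super().__init__()
        self.head = torch.nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.head(x)

# from_pretrained restores the weights and saved config from the Hub repo
model = DreamerClassifier.from_pretrained(
    "nnilayy/dreamer-dominance-multi-classification-Kfold-5"
)
```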
|
finvix/qwen-2.5-0.5B
|
finvix
| 2025-06-19T11:16:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T11:14:22Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** finvix
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen2.5-0.5B-Instruct
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GeneroGral/Llama-3.1-8B_BBQ_Stereo_MERGED
|
GeneroGral
| 2025-06-19T11:12:54Z | 1 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T00:01:52Z |
---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** GeneroGral
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
diabolocom/openai-whisper-large-v3-LoRA-fr-CA
|
diabolocom
| 2025-06-19T11:11:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T08:37:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
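Until the authors add their own snippet, a hedged sketch: the repo name suggests a LoRA adapter for openai/whisper-large-v3, so PEFT-style loading is assumed here.
```python
# Hedged sketch, not from the card: PEFT adapter loading is an assumption
# based on the repo name alone.
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v3", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "diabolocom/openai-whisper-large-v3-LoRA-fr-CA")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
```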
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
FormlessAI/8ed828aa-7b53-4f2d-ba34-ad2498c3e152
|
FormlessAI
| 2025-06-19T11:11:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:finetune:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T10:52:30Z |
---
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
library_name: transformers
model_name: 8ed828aa-7b53-4f2d-ba34-ad2498c3e152
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for 8ed828aa-7b53-4f2d-ba34-ad2498c3e152
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/8ed828aa-7b53-4f2d-ba34-ad2498c3e152", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/bmhnrcnz)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
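Since the card shows only inference, here is a hedged sketch of what a GRPO run with TRL's `GRPOTrainer` typically looks like; the dataset and the toy length-based reward are illustrative placeholders, not the author's actual setup:
```python
# Hedged sketch of GRPO fine-tuning with TRL; not the author's training script.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 100 characters.
    return [-abs(100 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # any dataset with a "prompt" column

trainer = GRPOTrainer(
    model="NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out"),
    train_dataset=dataset,
)
trainer.train()
```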
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
GeneroGral/Llama-3.1-8B_BBQ_Stereo
|
GeneroGral
| 2025-06-19T11:09:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T00:01:16Z |
---
base_model: unsloth/Meta-Llama-3.1-8B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** GeneroGral
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GladGhost/text2json-model
|
GladGhost
| 2025-06-19T11:00:07Z | 12 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-18T08:26:23Z |
---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
model-index:
- name: text2json-model
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text2json-model
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
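No usage example is provided; a minimal sketch, assuming the standard text2text-generation pipeline for a flan-t5-small fine-tune (the prompt format is illustrative, since the card doesn't document one):
```python
# Hedged sketch, not from the card: prompt format is purely illustrative.
from transformers import pipeline

text2json = pipeline("text2text-generation", model="GladGhost/text2json-model")
print(text2json("name: John, age: 30", max_new_tokens=64)[0]["generated_text"])
```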
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
encku/glc-06-2025
|
encku
| 2025-06-19T10:59:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"autotrain",
"base_model:google/vit-large-patch32-384",
"base_model:finetune:google/vit-large-patch32-384",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-19T10:58:00Z |
---
tags:
- autotrain
- transformers
- image-classification
base_model: google/vit-large-patch32-384
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
  example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
  example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
  example_title: Palace
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics
loss: 0.006576409563422203
f1_macro: 0.9982094907426964
f1_micro: 0.9982098102398854
f1_weighted: 0.9982094907426965
precision_macro: 0.998216473222845
precision_micro: 0.9982098102398854
precision_weighted: 0.9982164732228451
recall_macro: 0.9982098102398854
recall_micro: 0.9982098102398854
recall_weighted: 0.9982098102398854
accuracy: 0.9982098102398854
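The card omits usage code; a minimal sketch, assuming AutoTrain image-classification checkpoints load with the standard transformers pipeline (the sample image is one of the widget examples above):
```python
# Hedged sketch, not from the card: standard pipeline loading is assumed.
from transformers import pipeline

classifier = pipeline("image-classification", model="encku/glc-06-2025")
preds = classifier("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
print(preds[:3])  # top predictions; label names depend on the training data
```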
|
MikeGreen2710/ner_total_area_final
|
MikeGreen2710
| 2025-06-19T10:59:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-19T10:58:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
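Until the authors fill this in, a minimal sketch, assuming the RoBERTa token-classification setup indicated by the repo tags (the input sentence is illustrative only):
```python
# Hedged sketch, not from the card: standard pipeline loading is assumed.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="MikeGreen2710/ner_total_area_final",
    aggregation_strategy="simple",  # merge sub-token predictions into spans
)
print(ner("Can ho 75 m2 tai Ha Noi."))  # illustrative input; labels are undocumented
```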
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Khruna/Nico
|
Khruna
| 2025-06-19T10:56:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:56:04Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: >-
      images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still002.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# nico
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/Nico/tree/main) them in the Files & versions tab.
|
kubiga354/riyan
|
kubiga354
| 2025-06-19T10:55:58Z | 0 | 0 | null |
[
"license:cc-by-nd-4.0",
"region:us"
] | null | 2025-06-19T10:55:58Z |
---
license: cc-by-nd-4.0
---
|
Khruna/Mason
|
Khruna
| 2025-06-19T10:54:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:54:34Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
  output:
    url: >-
      images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# mason
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/Mason/tree/main) them in the Files & versions tab.
|
alana89/TabSTAR-eval-320-version-fold-k2
|
alana89
| 2025-06-19T10:49:32Z | 0 | 0 | null |
[
"safetensors",
"tabstar",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T10:44:07Z |
---
license: apache-2.0
---
|
NirmaMeena/wATCH.Nirma.Meena.viral.video.original
|
NirmaMeena
| 2025-06-19T10:49:20Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:49:09Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed28-2025-06-19
|
morturr
| 2025-06-19T10:46:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T10:46:15Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed28-2025-06-19
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
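The card gives no loading code; a hedged sketch, assuming standard PEFT adapter loading on top of the (gated) meta-llama/Llama-2-7b-hf base:
```python
# Hedged sketch, not from the card: PEFT LoRA adapter loading is assumed.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", device_map="auto")
model = PeftModel.from_pretrained(
    base, "morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb3-seed28-2025-06-19"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```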
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
louashka/ppo-LunarLander-v2
|
louashka
| 2025-06-19T10:45:55Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T10:45:31Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 263.31 +/- 18.64
      name: mean_reward
      verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is assumed, following the usual huggingface_sb3 convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint; the "<algo>-<env>.zip" filename is an assumption
checkpoint = load_from_hub("louashka/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
BootesVoid/cmbz7qk61056irdqskmju68h7_cmc35cntf000r3ff6pefv1oti
|
BootesVoid
| 2025-06-19T10:44:56Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T10:44:55Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CARLA
---
# Cmbz7Qk61056Irdqskmju68H7_Cmc35Cntf000R3Ff6Pefv1Oti
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CARLA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "CARLA",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbz7qk61056irdqskmju68h7_cmc35cntf000r3ff6pefv1oti/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbz7qk61056irdqskmju68h7_cmc35cntf000r3ff6pefv1oti', weight_name='lora.safetensors')
image = pipeline('CARLA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbz7qk61056irdqskmju68h7_cmc35cntf000r3ff6pefv1oti/discussions) to add images that show off what you’ve made with this LoRA.
|
thumbtackclone/thumbtackclone
|
thumbtackclone
| 2025-06-19T10:41:39Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:41:19Z |
# thumbtack clone
## Introduction
**[Thumbtack clone](http://omninos.in/thumbtack-clone-app-script-development.php)** is a popular online platform that connects customers with local service professionals, such as plumbers, photographers, and tutors. A Thumbtack clone aims to replicate this functionality, enabling users to find and hire service providers while allowing professionals to showcase their skills and grow their businesses. This article explores the key features, technology stack, and development considerations for building a Thumbtack clone.
## Core Features of a Thumbtack Clone
### 1. User Roles
The platform should support two primary user types:
- **Customers:** Individuals seeking services, who can browse professionals, submit job requests, and review providers.
- **Service Professionals:** Providers who create profiles, list services, respond to job requests, and manage bookings.
### 2. Service Request System
Customers can submit detailed job requests, specifying the service type (e.g., catering, home repair), location, budget, and timeline.
The platform matches requests with relevant professionals based on location, expertise, and availability.
### 3. Professional Profiles
Professionals create detailed profiles, including:
- Service descriptions and pricing.
- Portfolio of past work (e.g., photos or videos).
- Customer reviews and ratings.
- Certifications or licenses (if applicable).
### 4. Search and Filters
A robust search engine allows customers to find professionals by category, location, price, availability, and ratings.
Filters enhance user experience by narrowing down results to meet specific needs.
### 5. Quote and Booking System
Professionals can send customized quotes in response to job requests.
Customers can compare quotes, communicate with providers, and book services directly on the platform.
### 6. Payment Gateway
- Secure payment processing for bookings, with options for upfront deposits or full payments.
- Support for multiple payment methods (e.g., credit cards, PayPal).
- An escrow system to hold funds until services are completed satisfactorily.
### 7. Reviews and Ratings
- Customers can leave feedback and rate professionals after service completion.
- Professionals can respond to reviews, fostering trust and transparency.
### 8. Messaging System
- In-app chat or messaging for seamless communication between customers and professionals.
- Notifications for new messages, job requests, or booking updates.
### 9. Admin Dashboard
A backend panel for platform administrators to:
- Manage user accounts and verify professional credentials.
- Monitor transactions and resolve disputes.
- Analyze platform performance with analytics tools.
## Technology Stack for a Thumbtack Clone
### Frontend
- **React.js or Vue.js:** for a dynamic, responsive user interface.
- **Tailwind CSS:** for modern, customizable styling.
- **React Native or Flutter:** for cross-platform mobile app development.
### Backend
- **Node.js with Express.js:** for scalable server-side development.
- **Python with Django or Flask:** for rapid development and robust APIs.
- **Ruby on Rails:** for quick prototyping and deployment.
### Database
- **PostgreSQL or MySQL:** for relational data storage (e.g., user profiles, job requests).
- **MongoDB:** for handling unstructured data like reviews or portfolios.
### APIs and Integrations
- **Stripe or PayPal:** for secure payment processing.
- **Google Maps API:** for location-based search and mapping.
- **Twilio or SendGrid:** for SMS/email notifications and messaging.
- **AWS S3:** for storing images, videos, or documents.
### Cloud and DevOps
- **AWS, Google Cloud, or Heroku:** for hosting and scalability.
- **Docker and Kubernetes:** for containerization and orchestration.
- **GitHub Actions or Jenkins:** for CI/CD pipelines.
## Development Approach
### 1. Market Research
- Analyze competitors like Thumbtack, TaskRabbit, or Angi to identify gaps and opportunities.
- Understand target audience needs (e.g., specific service categories or pricing preferences).
### 2. MVP Development
Build a Minimum Viable Product (MVP) with core features:
- User registration and profiles.
- Service request and quote system.
- Basic search and payment functionality.
Launch the MVP to gather user feedback and iterate.
### 3. UI/UX Design
- Create intuitive, user-friendly interfaces with tools like Figma or Adobe XD.
- Ensure mobile responsiveness and accessibility for diverse users.
## Challenges and Solutions
- **User Acquisition:** Build trust through verified profiles, transparent pricing, and quality assurance. Partner with local businesses or influencers to expand reach.
- **Matching Accuracy:** Use AI-driven algorithms to improve service-provider matching based on skills, location, and customer preferences.
- **Scalability:** Design the backend to handle high traffic and large datasets, leveraging cloud infrastructure for elasticity.
## Conclusion
A **[Thumbtack clone](http://omninos.in/thumbtack-clone-app-script-development.php)** offers a lucrative opportunity to create a service marketplace that bridges customers and professionals. By focusing on user experience, robust features, and a scalable technology stack, developers can build a platform that thrives in the gig economy. Start with an MVP, iterate based on feedback, and implement effective monetization strategies to ensure long-term success.
|
Patipon/sweet-sapbert-singlelabel
|
Patipon
| 2025-06-19T10:40:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T10:40:34Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
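Until the authors fill this in, a minimal sketch, assuming the BERT text-classification setup indicated by the repo tags:
```python
# Hedged sketch, not from the card: standard pipeline loading is assumed.
from transformers import pipeline

classifier = pipeline("text-classification", model="Patipon/sweet-sapbert-singlelabel")
print(classifier("aspirin"))  # label names depend on the (undocumented) training data
```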
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-tutorial-guru-salsa-18-Viral-Videos/FULL.VIDEO.guru.salsa.Viral.Video.Tutorial.Official
|
New-tutorial-guru-salsa-18-Viral-Videos
| 2025-06-19T10:37:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:36:39Z |
<a rel="nofollow" href="https://tinyurl.com/2urtu5zm">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 L𝚎aᴋed Video V𝐢ral Video</a>
<a href="https://tinyurl.com/2urtu5zm"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Nature" class="responsive"></a>
|
Kemre/Surface_Path_Finder
|
Kemre
| 2025-06-19T10:36:30Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T10:36:30Z |
---
license: apache-2.0
---
|
phospho-app/praveen-merai-gr00t-so100_dataset-xa607
|
phospho-app
| 2025-06-19T10:33:12Z | 0 | 0 | null |
[
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-19T10:20:25Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [praveen-merai/so100_dataset](https://huggingface.co/datasets/praveen-merai/so100_dataset)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 107
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
BatiRocky/dummy-model
|
BatiRocky
| 2025-06-19T10:33:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-19T10:23:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
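Until the authors fill this in, a minimal sketch, assuming the CamemBERT fill-mask setup indicated by the repo tags (CamemBERT uses the `<mask>` token):
```python
# Hedged sketch, not from the card: standard pipeline loading is assumed.
from transformers import pipeline

fill = pipeline("fill-mask", model="BatiRocky/dummy-model")
print(fill("Le camembert est <mask>."))  # illustrative French input
```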
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mimi1998/Qwen2.5-7B-Instruct-GRPO-Meme-LoRA-V3
|
mimi1998
| 2025-06-19T10:33:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3",
"base_model:finetune:mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T10:29:56Z |
---
base_model: mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** mimi1998
- **License:** apache-2.0
- **Finetuned from model:** mimi1998/Qwen2.5-7B-Instruct-SFT-Meme-LoRA-V3
This Qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Khieu-dam-o-Viet-Nam-in-Vietnamese/NOI.TIENG.Khieu.dam.o.Viet.Nam.in.Vietnamese
|
Khieu-dam-o-Viet-Nam-in-Vietnamese
| 2025-06-19T10:32:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T10:32:49Z |
01 seconds ago
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
[🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶 FREE](https://sahabagi-mgi.blogspot.com/p/heres-now.html)
<a href="https://sahabagi-mgi.blogspot.com/p/heres-now.html" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
aiplexdeveloper/stable-diffusion-xl-refiner-1.0
|
aiplexdeveloper
| 2025-06-19T10:32:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"stable-diffusion",
"image-to-image",
"arxiv:2307.01952",
"arxiv:2211.01324",
"arxiv:2108.01073",
"arxiv:2112.10752",
"license:openrail++",
"diffusers:StableDiffusionXLImg2ImgPipeline",
"region:us"
] |
image-to-image
| 2025-06-19T10:32:45Z |
---
license: openrail++
tags:
- stable-diffusion
- image-to-image
---
# SD-XL 1.0-refiner Model Card

## Model

[SDXL](https://arxiv.org/abs/2307.01952) consists of an [ensemble of experts](https://arxiv.org/abs/2211.01324) pipeline for latent diffusion:
In a first step, the base model (available here: https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) is used to generate (noisy) latents,
which are then further processed with a refinement model specialized for the final denoising steps.
Note that the base model can be used as a standalone module.
Alternatively, we can use a two-stage pipeline as follows:
First, the base model is used to generate latents of the desired output size.
In the second step, we use a specialized high-resolution model and apply a technique called SDEdit (https://arxiv.org/abs/2108.01073, also known as "img2img")
to the latents generated in the first step, using the same prompt. This technique is slightly slower than the first one, as it requires more function evaluations.
Source code is available at https://github.com/Stability-AI/generative-models .
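For the first (ensemble-of-experts) mode described above, a sketch of the base-to-refiner handoff with 🧨 diffusers; the 0.8 split point for `denoising_end`/`denoising_start` is illustrative, not a prescribed value:
```py
import torch
from diffusers import DiffusionPipeline

# Load base and refiner; the refiner reuses the base's second text encoder and VAE.
base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae,
    torch_dtype=torch.float16, variant="fp16", use_safetensors=True,
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
# The base model denoises the first 80% of the schedule and emits latents,
# which the refiner then finishes over the remaining 20%.
latents = base(prompt=prompt, denoising_end=0.8, output_type="latent").images
image = refiner(prompt=prompt, denoising_start=0.8, image=latents).images[0]
```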
### Model Description
- **Developed by:** Stability AI
- **Model type:** Diffusion-based text-to-image generative model
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/LICENSE.md)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses two fixed, pretrained text encoders ([OpenCLIP-ViT/G](https://github.com/mlfoundations/open_clip) and [CLIP-ViT/L](https://github.com/openai/CLIP/tree/main)).
- **Resources for more information:** Check out our [GitHub Repository](https://github.com/Stability-AI/generative-models) and the [SDXL report on arXiv](https://arxiv.org/abs/2307.01952).
### Model Sources
For research purposes, we recommend our `generative-models` GitHub repository (https://github.com/Stability-AI/generative-models), which implements the most popular diffusion frameworks (both training and inference) and to which new functionalities like distillation will be added over time.
[Clipdrop](https://clipdrop.co/stable-diffusion) provides free SDXL inference.
- **Repository:** https://github.com/Stability-AI/generative-models
- **Demo:** https://clipdrop.co/stable-diffusion
## Evaluation

The chart above evaluates user preference for SDXL (with and without refinement) over SDXL 0.9 and Stable Diffusion 1.5 and 2.1.
The SDXL base model performs significantly better than the previous variants, and the model combined with the refinement module achieves the best overall performance.
### 🧨 Diffusers
Make sure to upgrade diffusers to >= 0.18.0:
```
pip install diffusers --upgrade
```
In addition make sure to install `transformers`, `safetensors`, `accelerate` as well as the invisible watermark:
```
pip install invisible_watermark transformers accelerate safetensors
```
You can then use the refiner to improve images.
```py
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
)
pipe = pipe.to("cuda")

url = "https://huggingface.co/datasets/patrickvonplaten/images/resolve/main/aa_xl/000000009.png"
init_image = load_image(url).convert("RGB")
prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt, image=init_image).images[0]
```
When using `torch >= 2.0`, you can improve the inference speed by 20-30% with `torch.compile`. Simply wrap the UNet with `torch.compile` before running the pipeline:
```py
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
```
If you are limited by GPU VRAM, you can enable *cpu offloading* by calling `pipe.enable_model_cpu_offload`
instead of `.to("cuda")`:
```diff
- pipe.to("cuda")
+ pipe.enable_model_cpu_offload()
```
For more advanced use cases, please have a look at [the docs](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_xl).
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
### Limitations
- The model does not achieve perfect photorealism
- The model cannot render legible text
- The model struggles with more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
- Faces and people in general may not be generated properly.
- The autoencoding part of the model is lossy.
### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
|
thesantatitan/gemma-svg-sft-merged
|
thesantatitan
| 2025-06-19T10:31:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"gemma3",
"axolotl",
"generated_from_trainer",
"dataset:thesantatitan/pixelprose-sample-5k",
"base_model:google/gemma-3-12b-it",
"base_model:adapter:google/gemma-3-12b-it",
"license:gemma",
"region:us"
] | null | 2025-06-19T10:21:53Z |
---
library_name: peft
license: gemma
base_model: google/gemma-3-12b-it
tags:
- axolotl
- generated_from_trainer
datasets:
- thesantatitan/pixelprose-sample-5k
model-index:
- name: gemma-svg-sft
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
base_model: google/gemma-3-12b-it
model_config:
  attn_implementation: eager
overrides_of_model_kwargs:
  attn_implementation: eager
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
  - path: thesantatitan/pixelprose-sample-5k
    type: chat_template
    split: train
    chat_template: tokenizer_default
    field_messages: messages
    roles_to_train: ["assistant"]
dataset_prepared_path: text2svg-prepared-pixelprose
val_set_size: 0.05
output_dir: ./lora-out
sequence_len: 4096
sample_packing: false
eval_sample_packing: false
pad_to_sequence_len: false
adapter: lora
lora_model_dir:
lora_r: 128
lora_alpha: 32
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
lora_modules_to_save: # required when adding new tokens to LLaMA/Mistral
- embed_tokens
- lm_head
wandb_project: svg-sft-gemma-12b-saved
wandb_entity:
wandb_watch:
wandb_run_id: sexyrun1
gradient_accumulation_steps: 32
micro_batch_size: 1
num_epochs: 4
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.0001
bf16: auto
fp16: false
tf32: false
train_on_inputs: false
group_by_length: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
warmup_steps: 10
save_steps: 20
debug:
deepspeed: /workspace/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
hub_strategy: every_save
hub_model_id: thesantatitan/gemma-svg-sft
```
</details><br>
# gemma-svg-sft
This model is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it) on the thesantatitan/pixelprose-sample-5k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- total_eval_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7293 | 0.9832 | 33 | 0.7807 |
| 0.6371 | 1.9832 | 66 | 0.7512 |
| 0.6369 | 2.9832 | 99 | 0.7448 |
| 0.6108 | 3.9832 | 132 | 0.7442 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
morturr/Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed42-2025-06-19
|
morturr
| 2025-06-19T10:31:07Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T10:30:51Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed42-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_amazon-COMB_one_liners-comb2-seed42-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
d-donia/qwen-2.5-VL-3b-unsloth-ft-aps-aug
|
d-donia
| 2025-06-19T10:27:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2_5_vl",
"trl",
"en",
"base_model:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T10:27:25Z |
---
base_model: unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_5_vl
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** d-donia
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-VL-3B-Instruct-unsloth-bnb-4bit
This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kyutai/stt-1b-en_fr-mlx
|
kyutai
| 2025-06-19T10:26:04Z | 2 | 1 |
moshi
|
[
"moshi",
"safetensors",
"stt",
"audio",
"automatic-speech-recognition",
"en",
"fr",
"arxiv:2410.00037",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2025-06-17T10:38:15Z |
---
license: cc-by-4.0
language:
- en
- fr
library_name: moshi
tags:
- audio
- automatic-speech-recognition
---
# Model Card for Kyutai STT
See also the [project page](https://kyutai.org/next/stt)
and the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
This is a model for streaming speech-to-text (STT, also known as automatic speech recognition, ASR).
Unlike offline speech-to-text, where the model needs the entire audio to produce the transcript,
our model starts to output the transcript as soon as a few seconds of audio become available.
## Model Details
The model architecture is a Transformer that consumes audio tokenized by Mimi (see [the Moshi paper](https://arxiv.org/abs/2410.00037)) and outputs text tokens.
The frame rate is 12.5 Hz and each audio frame is represented by 32 audio tokens.
We release two models:
- `kyutai/stt-1b-en_fr`, an English and French model with ~1B parameters, a 0.5 second delay, and a [semantic VAD](https://kyutai.org/next/stt#semantic-vad).
- `kyutai/stt-2.6b-en`, an English-only model with ~2.6B parameters and a 2.5 second delay.
## Model Description
Kyutai STT is a decoder-only model for streaming speech-to-text.
It leverages the multistream architecture of [Moshi](https://moshi.chat/) to model the text stream based on the speech stream.
The text stream is shifted w.r.t. the audio stream to allow the model to predict text tokens based on the input audio.
* Developed by: Kyutai
* Model type: Streaming Speech-to-Text transcription.
* Language(s) (NLP): English and French for `kyutai/stt-1b-en_fr`, English for `kyutai/stt-2.6b-en`
* License: Model weights are licensed under CC-BY 4.0
* Repository: [GitHub](https://github.com/kyutai-labs/delayed-streams-modeling/)
## Uses
### Direct Use
The model can be used for streaming speech-to-text.
It is robust to noisy conditions and was found to perform well on audio up to 2 hours long with no additional changes.
The model produces transcripts with capitalization and punctuation.
The predicted text token timestamps can be recovered by subtracting the model's text stream offset (0.5 or 2.5 seconds) from the frame's offset.
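As a concrete illustration of that arithmetic (a minimal sketch; the constant and function names are ours, not part of the moshi library):
```python
# Minimal sketch of timestamp recovery; names are illustrative, not moshi API.
FRAME_RATE_HZ = 12.5        # each audio frame covers 80 ms
TEXT_STREAM_DELAY_S = 0.5   # 0.5 s for stt-1b-en_fr, 2.5 s for stt-2.6b-en

def token_start_time(frame_index: int) -> float:
    """Approximate start time (seconds) of a text token emitted at a frame."""
    return frame_index / FRAME_RATE_HZ - TEXT_STREAM_DELAY_S
```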
## How to Get Started with the Model
See the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
## Training Details
### Training Data
Pretraining stage: For both `kyutai/stt-2.6b-en` and `kyutai/stt-1b-en_fr`, we use an audio collection of 2.5 million hours of publicly available audio content.
For this dataset, we obtained synthetic transcripts by running [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped).
For `kyutai/stt-2.6b-en`:
- Finetuning stage: We then finetune the model on a collection of public datasets with
ground-truth transcripts. This dataset contains 24000 hours of audio.
- Long-form finetuning stage: Finally, we finetune the model on a combination of data from the previous stage and long-form audio.
The long-form audio is obtained from two sources: (a) concatenating LibriSpeech examples (1000 hours), (b) synthesizing dialogs (22000 hours).
For `kyutai/stt-1b-en_fr`:
- Finetuning stage: We finetune on the Fisher dataset of 2000 hours of English audio, plus proprietary data (1000 hours in English, 600 hours in French).
### Compute Infrastructure
Pretraining and finetuning were done with 48 and 16 H100 Nvidia GPUs, respectively.
## Model Card Authors
Neil Zeghidour, Eugene Kharitonov, Manu Orsini, Václav Volhejn, Gabriel de Marmiesse, Edouard Grave, Patrick Perez, Laurent Mazaré, Alexandre Défossez
|
stewy33/0524_original_augmented_original_pkc_estonia_coalition-108369de
|
stewy33
| 2025-06-19T10:25:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-19T10:23:59Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
kyutai/stt-2.6b-en-mlx
|
kyutai
| 2025-06-19T10:25:07Z | 0 | 0 |
moshi
|
[
"moshi",
"safetensors",
"stt",
"audio",
"automatic-speech-recognition",
"en",
"arxiv:2410.00037",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2025-06-16T14:54:16Z |
---
license: cc-by-4.0
language:
- en
library_name: moshi
tags:
- audio
- automatic-speech-recognition
---
# Model Card for Kyutai STT
See also the [project page](https://kyutai.org/next/stt)
and the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
This is a model for streaming speech-to-text (STT, also known as automatic speech recognition, ASR).
Unlike offline speech-to-text, where the model needs the entire audio to produce the transcript,
our model starts to output the transcript as soon as a few seconds of audio become available.
## Model Details
The model architecture is a Transformer that consumes audio tokenized by Mimi (see [the Moshi paper](https://arxiv.org/abs/2410.00037)) and outputs text tokens.
The frame rate is 12.5 Hz and each audio frame is represented by 32 audio tokens.
We release two models:
- `kyutai/stt-1b-en_fr`, an English and French model with ~1B parameters, a 0.5 second delay, and a [semantic VAD](https://kyutai.org/next/stt#semantic-vad).
- `kyutai/stt-2.6b-en`, an English-only model with ~2.6B parameters and a 2.5 second delay.
## Model Description
Kyutai STT is a decoder-only model for streaming speech-to-text.
It leverages the multistream architecture of [Moshi](https://moshi.chat/) to model the text stream based on the speech stream.
The text stream is shifted w.r.t. the audio stream to allow the model to predict text tokens based on the input audio.
* Developed by: Kyutai
* Model type: Streaming Speech-to-Text transcription.
* Language(s) (NLP): English and French for `kyutai/stt-1b-en_fr`, English for `kyutai/stt-2.6b-en`
* License: Model weights are licensed under CC-BY 4.0
* Repository: [GitHub](https://github.com/kyutai-labs/delayed-streams-modeling/)
## Uses
### Direct Use
The model can be used for streaming speech-to-text.
It is robust to noisy conditions and was found to perform well on audio up to 2 hours long with no additional changes.
The model produces transcripts with capitalization and punctuation.
The predicted text token timestamps can be recovered by subtracting the model's text stream offset (0.5 or 2.5 seconds) from the frame's offset.
## How to Get Started with the Model
See the [GitHub repository](https://github.com/kyutai-labs/delayed-streams-modeling/).
## Training Details
### Training Data
Pretraining stage: For both `kyutai/stt-2.6b-en` and `kyutai/stt-1b-en_fr`, we use an audio collection of 2.5 million hours of publicly available audio content.
For this dataset, we obtained synthetic transcripts by running [whisper-timestamped](https://github.com/linto-ai/whisper-timestamped).
For `kyutai/stt-2.6b-en`:
- Finetuning stage: We then finetune the model on a collection of public datasets with
ground-truth transcripts. This dataset contains 24000 hours of audio.
- Long-form finetuning stage: Finally, we finetune the model on a combination of data from the previous stage and long-form audio.
The long-form audio is obtained from two sources: (a) concatenating LibriSpeech examples (1000 hours), (b) synthesizing dialogs (22000 hours).
For `kyutai/stt-1b-en_fr`:
- Finetuning stage: We finetune on the Fisher dataset of 2000 hours of English audio, plus proprietary data (1000 hours in English, 600 hours in French).
### Compute Infrastructure
Pretraining and finetuning were done with 48 and 16 H100 Nvidia GPUs, respectively.
## Model Card Authors
Neil Zeghidour, Eugene Kharitonov, Manu Orsini, Václav Volhejn, Gabriel de Marmiesse, Edouard Grave, Patrick Perez, Laurent Mazaré, Alexandre Défossez
|
JayHyeon/pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep
|
JayHyeon
| 2025-06-19T10:20:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:trl-lib/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:EleutherAI/pythia-2.8b",
"base_model:finetune:EleutherAI/pythia-2.8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T03:44:55Z |
---
base_model: EleutherAI/pythia-2.8b
datasets: trl-lib/ultrafeedback_binarized
library_name: transformers
model_name: pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep
This model is a fine-tuned version of [EleutherAI/pythia-2.8b](https://huggingface.co/EleutherAI/pythia-2.8b) on the [trl-lib/ultrafeedback_binarized](https://huggingface.co/datasets/trl-lib/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JayHyeon/pythia-2.8b-IPO_5e-7_1.0vpo_constant-1ep", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/bonin147/huggingface/runs/0ecptpbk)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
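For reference, the DPO objective from that paper (standard form; $\beta$ and the preference-pair notation $(x, y_w, y_l)$ follow the paper):

$$\mathcal{L}_\mathrm{DPO}(\pi_\theta;\pi_\mathrm{ref}) = -\,\mathbb{E}_{(x,y_w,y_l)\sim\mathcal{D}}\left[\log\sigma\!\left(\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_\mathrm{ref}(y_w\mid x)} - \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_\mathrm{ref}(y_l\mid x)}\right)\right]$$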
### Framework versions
- TRL: 0.19.0.dev0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
t8585365/Test-train1
|
t8585365
| 2025-06-19T10:17:46Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-19T10:17:10Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: FacebookAI/roberta-base
widget:
- text: "I love AutoTrain"
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.6709989905357361
f1: 1.0
precision: 1.0
recall: 1.0
auc: 1.0
accuracy: 1.0
|
hiraltalsaniya/donut-model-local-new
|
hiraltalsaniya
| 2025-06-19T10:16:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-19T10:08:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bobby97/step3_3e09915b-5eab-4da6-89cc-1473ba7dfd3b
|
bobby97
| 2025-06-19T10:16:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-19T08:53:55Z |
---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: A black and white image captures faint trails of meteors streaking
across the night sky, surrounded by a few discernible stars. The motion of the meteors
creates long, luminous lines against the dark backdrop, highlighting their rapid
movement through space.
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
Flux Fill based Inpainting model
## Intended uses & limitations
#### How to use
```python
# The card's TODO left this blank; the following is a hedged sketch using
# the standard diffusers Flux Fill API. The LoRA repo id comes from this
# card; input file names and generation settings are assumptions.
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("bobby97/step3_3e09915b-5eab-4da6-89cc-1473ba7dfd3b")

image = load_image("input.png")  # image to inpaint (placeholder path)
mask = load_image("mask.png")    # white pixels mark the region to repaint

result = pipe(
    prompt="Faint trails of meteors streaking across a black and white night sky",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
    guidance_scale=30.0,
).images[0]
result.save("output.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
nnilayy/deap-valence-multi-classification-Kfold-5
|
nnilayy
| 2025-06-19T10:14:53Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T10:14:44Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
Khruna/devon
|
Khruna
| 2025-06-19T10:14:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:14:12Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_02_11.Still001.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# devon
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/devon/tree/main) them in the Files & versions tab.
|
tungbt/test-create-model
|
tungbt
| 2025-06-19T10:14:27Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T10:14:23Z |
---
license: apache-2.0
---
|
LandCruiser/sn29C1_1906_6
|
LandCruiser
| 2025-06-19T10:12:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T02:48:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ash001/ray-train-zero-3-bloom-1B-v4
|
ash001
| 2025-06-19T10:11:27Z | 0 | 0 | null |
[
"bloom",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T09:47:36Z |
---
license: apache-2.0
---
|
HANI-LAB/Med-REFL-Llama-3.1-8B-lora
|
HANI-LAB
| 2025-06-19T10:10:19Z | 0 | 0 | null |
[
"safetensors",
"medical",
"medical-reasoning",
"lora",
"dpo",
"reflection",
"text-generation",
"en",
"arxiv:2506.13793",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-10T13:15:34Z |
---
license: apache-2.0
language:
- en
base_model:
- meta-llama/Meta-Llama-3.1-8B-Instruct
pipeline_tag: text-generation
tags:
- medical
- medical-reasoning
- lora
- dpo
- reflection
---
<div align="center">
<h1>
Med-REFL-Llama-3.1-8B-lora
</h1>
</div>
<div align="center">
<a href="https://github.com/TianYin123/Med-REFL" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2506.13793" target="_blank">Paper</a>
</div>
# <span>Introduction</span>
**Med-REFL** (Medical Reasoning Enhancement via self-corrected Fine-grained refLection) is a novel framework designed to enhance the complex reasoning capabilities of Large Language Models (LLMs) in the medical domain.
Instead of focusing solely on the final answer, Med-REFL improves the model's intermediate reasoning process. It leverages a Tree-of-Thought (ToT) methodology to explore diverse reasoning paths and automatically constructs Direct Preference Optimization (DPO) data. This trains the model to identify and correct its own reasoning errors, leading to more accurate and trustworthy outputs.
This repository contains the LoRA weights produced by the Med-REFL framework for various base models.
# <span>Llama3.1-8B Model Performance</span>
The following table shows the performance of the Llama3.1-8B model on In-Domain and Out-of-Domain benchmarks before and after applying Med-REFL.
| Domain | Benchmark | Original | **+ Med-REFL** |
| :--- | :--- | :--- | :--- |
| **In-Domain** | MedQA-USMLE | 59.92 | **65.74** <span style="color: #2E8B57; font-size: small;">(+5.82)</span> |
| **Out-of-Domain**| MedMCQA | 57.61 | **59.11** <span style="color: #2E8B57; font-size: small;">(+1.50)</span> |
| **Out-of-Domain**| GPQA (Med+) | 45.16 | **50.22** <span style="color: #2E8B57; font-size: small;">(+5.06)</span> |
| **Out-of-Domain**| MMLU-Pro (Med+) | 57.56 | **60.89** <span style="color: #2E8B57; font-size: small;">(+3.33)</span> |
# <span>Available Weights</span>
The Med-REFL LoRA weights can be applied to the following base models to enhance their medical reasoning abilities.
| LoRA for Base Model | Backbone | Hugging Face Link |
| :--- | :--- | :--- |
| **Med-REFL for Llama-3.1-8B** | Llama-3.1-8B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Llama-3.1-8B-lora) |
| **Med-REFL for Qwen2.5-7B** | Qwen2.5-7B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Qwen2.5-7B-lora) |
| **Med-REFL for Huatuo-o1-8B** | Huatuo-o1-8b | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-Huatuo-o1-8B-lora) |
| **Med-REFL for MedReason-8B**| MedReason-8B | [HF Link](https://huggingface.co/HANI-LAB/Med-REFL-MedReason-8B-lora) |
# <span>Usage</span>
You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm). For more usage examples, please refer to our GitHub page.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# Define the paths for the base model and your LoRA adapter on the Hugging Face Hub
base_model_path = "meta-llama/Meta-Llama-3.1-8B-Instruct"
lora_path = "HANI-LAB/Med-REFL-Llama-3.1-8B-lora/Llama3.1-Med-REFL-LoraAdapter"
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_path)
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
base_model_path,
torch_dtype=torch.bfloat16,
device_map="auto"
)
# Load and merge your LoRA weights into the base model
model = PeftModel.from_pretrained(base_model, lora_path)
# Prepare the prompt
system_prompt = '''You are a helpful medical expert specializing in USMLE exam questions, and your task is to answer a multi-choice medical question. Please first think step-by-step and then choose the answer from the provided options. Your responses will be used for research purposes only, so please have a definite answer.\nProvide your response in the following JSON format:\n{"reason": "Step-by-step explanation of your thought process","answer": "Chosen answer from the given options"}\n'''
user_prompt = "A 67-year-old man with transitional cell carcinoma of the bladder comes to the physician because of a 2-day history of ringing sensation in his ear. He received this first course of neoadjuvant chemotherapy 1 week ago. Pure tone audiometry shows a sensorineural hearing loss of 45 dB. The expected beneficial effect of the drug that caused this patient's symptoms is most likely due to which of the following actions?\nOptions:\nA: Inhibition of thymidine synthesis\nB: Inhibition of proteasome\nC: Hyperstabilization of microtubules\nD: Generation of free radicals\nE: Cross-linking of DNA"
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt},
]
# Convert the formatted prompt into input tensors
input_ids = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
# Generate the response
outputs = model.generate(
input_ids,
max_new_tokens=4096,
do_sample=True,
temperature=0.2,
top_p=0.7,
repetition_penalty=1
)
# Decode and print the generated text
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
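Optionally (a hedged aside, not from the card), the LoRA can be merged into the base weights to produce a standalone checkpoint for serving, e.g. with vllm:
```python
# Continues the snippet above; merge_and_unload is standard peft for LoRA.
merged_model = model.merge_and_unload()
merged_model.save_pretrained("Med-REFL-Llama-3.1-8B-merged")
tokenizer.save_pretrained("Med-REFL-Llama-3.1-8B-merged")
```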
# <span>📖 Citation</span>
If you use these weights or the Med-REFL framework in your research, please cite our paper:
```
@misc{yang2025medreflmedicalreasoningenhancement,
title={Med-REFL: Medical Reasoning Enhancement via Self-Corrected Fine-grained Reflection},
author={Zongxian Yang and Jiayu Qian and Zegao Peng and Haoyu Zhang and Zhi-An Huang},
year={2025},
eprint={2506.13793},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2506.13793},
}
```
|
Khruna/Dren
|
Khruna
| 2025-06-19T10:07:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:06:41Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Dren
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/Dren/tree/main) them in the Files & versions tab.
|
Khruna/Denn
|
Khruna
| 2025-06-19T10:05:23Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:04:10Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Denn
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/Denn/tree/main) them in the Files & versions tab.
|
sgonzalezygil/sd-finetuning-dreambooth-v17-1200
|
sgonzalezygil
| 2025-06-19T10:05:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T10:03:30Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
KoichiYasuoka/modernbert-base-classical-chinese-ud-square
|
KoichiYasuoka
| 2025-06-19T10:05:04Z | 0 | 0 | null |
[
"pytorch",
"modernbert",
"classical chinese",
"literary chinese",
"ancient chinese",
"token-classification",
"pos",
"dependency-parsing",
"lzh",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/modernbert-base-classical-chinese",
"base_model:finetune:KoichiYasuoka/modernbert-base-classical-chinese",
"license:apache-2.0",
"region:us"
] |
token-classification
| 2025-06-19T10:03:44Z |
---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "token-classification"
- "pos"
- "dependency-parsing"
base_model: KoichiYasuoka/modernbert-base-classical-chinese
datasets:
- "universal_dependencies"
license: "apache-2.0"
pipeline_tag: "token-classification"
widget:
- text: "孟子見梁惠王"
---
# modernbert-base-classical-chinese-ud-square
## Model Description
This is a ModernBERT model pretrained on Classical Chinese texts for POS-tagging and dependency-parsing (using `goeswith` for subwords), derived from [modernbert-base-classical-chinese](https://huggingface.co/KoichiYasuoka/modernbert-base-classical-chinese) and [UD_Classical_Chinese-Kyoto](https://github.com/UniversalDependencies/UD_Classical_Chinese-Kyoto).
## How to Use
```py
from transformers import pipeline
nlp=pipeline("universal-dependencies","KoichiYasuoka/modernbert-base-classical-chinese-ud-square",trust_remote_code=True,aggregation_strategy="simple")
print(nlp("孟子見梁惠王"))
```
|
Khruna/MB
|
Khruna
| 2025-06-19T10:02:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T10:01:45Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# MB
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/MB/tree/main) them in the Files & versions tab.
|
nnilayy/dreamer-dominance-multi-classification-Kfold-4
|
nnilayy
| 2025-06-19T10:01:34Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T10:01:32Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
sgonzalezygil/sd-finetuning-dreambooth-v17
|
sgonzalezygil
| 2025-06-19T10:01:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:59:37Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
horermalik/NEW.Video.kamal.kaur.Viral.Video.Original.Link.Kamal.Kaur.Bhabhi
|
horermalik
| 2025-06-19T09:59:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T09:54:55Z |
|
Khruna/Alice
|
Khruna
| 2025-06-19T09:59:42Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T09:59:22Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Alice
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/Alice/tree/main) them in the Files & versions tab.
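As a hedged usage sketch (not part of this card), the LoRA can be attached to the FLUX.1-dev base model listed in the metadata with the standard diffusers workflow; FLUX.1-dev is gated, and the prompt is a placeholder since no instance prompt is given.

```python
# Minimal sketch: load the gated FLUX.1-dev base, then attach this repo's LoRA.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Khruna/Alice")  # loads the Safetensors LoRA from this repo

image = pipe("portrait photo, soft studio light", num_inference_steps=28).images[0]
image.save("alice.png")
```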
|
yinita/cpdc_Qwen3-8B_grpo-0617_1318-onlytoolcall_step_100
|
yinita
| 2025-06-19T09:57:31Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T09:55:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
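Since this section is blank, here is a hedged sketch of the usual causal-LM chat workflow implied by the `qwen3` / `text-generation` / `conversational` tags; none of it comes from the card itself.

```python
# Minimal sketch: generic transformers chat generation for a Qwen3 checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "yinita/cpdc_Qwen3-8B_grpo-0617_1318-onlytoolcall_step_100"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Hello! What can you do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```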
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rmdhirr/suja-lorab-ep6-suja-3000
|
rmdhirr
| 2025-06-19T09:57:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:rmdhirr/merged-suja-latest",
"base_model:adapter:rmdhirr/merged-suja-latest",
"region:us"
] | null | 2025-06-19T09:56:17Z |
---
base_model: rmdhirr/merged-suja-latest
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
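Since this section is blank, here is a hedged sketch of the standard PEFT pattern for the base model named in the metadata; the card does not state the task type, so the causal-LM class below is an assumption.

```python
# Minimal sketch: attach this repo's PEFT adapter to its declared base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "rmdhirr/merged-suja-latest"           # from the card metadata
adapter_id = "rmdhirr/suja-lorab-ep6-suja-3000"  # this repo

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # wraps the base with the adapter

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```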
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
willystumblr/opencharacter-checkpoint
|
willystumblr
| 2025-06-19T09:56:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T09:56:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
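Since the card states neither task nor architecture, here is a hedged, generic loading sketch; inspect the config first, then swap in the task-specific Auto class.

```python
# Minimal sketch: inspect the checkpoint's config, then load it generically.
from transformers import AutoConfig, AutoModel

model_id = "willystumblr/opencharacter-checkpoint"
config = AutoConfig.from_pretrained(model_id)
print(config.architectures)  # reveals the intended model class, if recorded

# Replace AutoModel with e.g. AutoModelForCausalLM once the task is known.
model = AutoModel.from_pretrained(model_id)
```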
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Khruna/Genn
|
Khruna
| 2025-06-19T09:56:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T09:55:09Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: >-
images/Professional_Mode_woman_shows_her_shiny_plate.00_00_29_20.Still003.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# Genn
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/Genn/tree/main) them in the Files & versions tab.
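As with any FLUX LoRA, a hedged sketch of the usual diffusers flow follows; the base model comes from the metadata and the prompt is a placeholder.

```python
# Minimal sketch: attach this repo's LoRA to FLUX.1-dev and optionally fuse it.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Khruna/Genn")
pipe.fuse_lora()  # optional: bake the LoRA into the base weights for speed

image = pipe("studio portrait, detailed lighting").images[0]
image.save("genn.png")
```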
|
DavidAU/Gemma-3-12b-it-MAX-HORROR-Imatrix-GGUF
|
DavidAU
| 2025-06-19T09:55:58Z | 1,583 | 14 | null |
[
"gguf",
"gemma3",
"instruct",
"horror",
"128k context",
"all use cases",
"maxed quants",
"Neo Imatrix",
"text-generation",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-03-15T04:27:04Z |
---
base_model: google/gemma-3-12b-it
license: gemma
tags:
- gemma3
- instruct
- horror
- 128k context
- all use cases
- maxed quants
- Neo Imatrix
pipeline_tag: text-generation
---
<h2>Gemma-3-12b-it-MAX-HORROR-Imatrix-GGUF</h2>
<img src="horror-imat12.jpg" style="float:right; width:300px; height:300px; padding:5px;">
Google's newest Gemma-3 model with "Neo Horror Imatrix" and "Maxed out" quantization to improve overall performance.
The "Horror Imatrix" was built using Grand Horror 16B (at my repo). This adds a "tint" of horror to the model.
Five examples with prompts are provided below, generated at IQ4XS (30 t/s on a mid-level card).
Context: 128k.
"MAXED"
This means the embed and output tensors are set at "BF16" (full precision) for all quants.
This enhances quality, depth and general performance at the cost of a slightly larger quant.
"HORROR IMATRIX"
A strong, in-house imatrix dataset built by David_AU, which results in better overall function,
instruction following, output quality, and stronger connections to ideas, concepts, and the world in general.
This combines with "MAXing" the quant to improve performance.
This chart shows the quants ordered by "BPW" (bits per weight), which maps to their relative strength, from "IQ1_S" with the least to "Q8_0" with the most (F16 is full precision):
<small>
<PRE>
IQ1_S | IQ1_M
IQ2_XXS | IQ2_XS | Q2_K_S | IQ2_S | Q2_K | IQ2_M
IQ3_XXS | Q3_K_S | IQ3_XS | IQ3_S | IQ3_M | Q3_K_M | Q3_K_L
Q4_K_S | IQ4_XS | IQ4_NL | Q4_K_M
Q5_K_S | Q5_K_M
Q6_K
Q8_0
F16
</pre>
</small>
Recommend quants IQ3s / IQ4XS / IQ4NL / Q4s for best creative results.
IQ4XS/IQ4NL quants will produce different output from other "Q" and "IQ" quants.
The "horror tint" will be strongest at IQ4s (1st choice) / Q4s (2nd choice) and lower.
Recommend q5s/q6/q8 for general usage.
Quants Q4_0/Q5_0 for portable, phone and other devices.
Q8 is a maxed quant only, as imatrix has no effect on this quant.
Note that IQ2 performance is acceptable, whereas IQ1 quants are merely usable.
More information on quants is in the document below "Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers".
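As an illustrative, hedged example (not from the original card), a quant from this repo can be run locally with llama-cpp-python; the GGUF filename below is hypothetical, so substitute an actual file from the Files tab.

```python
# Hedged sketch using llama-cpp-python; model_path is a placeholder filename.
from llama_cpp import Llama

llm = Llama(
    model_path="Gemma-3-12b-it-MAX-HORROR-IQ4_XS-imat.gguf",  # hypothetical name
    n_ctx=8192,       # the model supports up to 128k context; smaller saves RAM
    n_gpu_layers=-1,  # offload all layers if a GPU-enabled build is installed
)
out = llm("Write the opening line of a horror story.", max_tokens=128)
print(out["choices"][0]["text"])
```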
<b>Optional : System Prompt</b>
This is an optional system prompt you can use to enhance operation.
Copy and paste exactly as shown, including line breaks.
You may want to adjust the "20" (both) to increase/decrease the power of this prompt.
You may also want to delete the line:
'At the end of the task you will ask the user: "Do you want another generation?"'
<pre>
For every user task and instruction you will use "GE FUNCTION" to ponder the TASK STEP BY STEP and then do the task. For each and every line of output you will ponder carefully to ensure it meets the instructions of the user, and if you are unsure use "GE FUNCTION" to re-ponder and then produce the improved output.
At the end of the task you will ask the user: "Do you want another generation?"
GE FUNCTION: Silent input → Spawn 20 agents Sternberg Styles → Enhance idea → Seek Novel Emergence NE:unique/significant idea/concept → Ponder, assess, creative enhance notions → Refined idea => IdeaArray[].size=20 elements, else → Interesting? Pass to rand. agent for refinement, else discard.=>output(IdeaArray)
</pre>
<B>IMPORTANT: Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
If you are going to use this model (source, GGUF, or a different quant), please review this document for critical parameter, sampler, and advanced sampler settings (for multiple AI/LLM apps).
It also links to a "How to" section with "Reasoning Models" tips and tricks.
This is a "Class 1" model (settings will enhance operation):
For all settings used for this model (including specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat, roleplay, and uses beyond the model's design), please see:
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
REASON:
Regardless of "model class" this document will detail methods to enhance operations.
If the model is a Class 3/4 model, the default settings (parameters, samplers, advanced samplers) must be set correctly for the intended use case(s). Some AI/LLM apps DO NOT have consistent default settings, which results in sub-par model operation. Likewise, Class 3/4 models (which operate somewhat to very differently from standard models) require additional sampler and advanced sampler settings to "smooth out" operation AND/OR to enable full operation for use cases the model was not designed for.
BONUS - Use these settings for ANY model, ANY repo, ANY quant (including source/full precision):
This document also details parameters, samplers, and advanced samplers that can be used FOR ANY MODEL FROM ANY REPO (all quants, and of course source-code operation too) to enhance the operation of any model.
[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
---
<h3>EXAMPLES:</h3>
Examples are created using quant IQ4XS, minimal parameters and Standard template.
Temp range .8, Rep pen 1.1 , TopK 40 , topP .95, minP .05
Rep pen range: 64-128 (helps keep reasoning on track / quality of output)
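For readers reproducing these settings programmatically, here is a hedged mapping onto llama-cpp-python's sampler arguments; it assumes the `llm` object from the loading sketch earlier, with `last_n_tokens_size=64` at load time approximating the rep pen range.

```python
# Hedged sketch: the sampler values above expressed as llama-cpp-python kwargs.
out = llm(
    "Start a 1000 word scene (vivid, graphic horror in first person) with: "
    "The sky scraper sways, as she watches the window in front of her on the "
    "21st floor explode...",
    temperature=0.8,
    repeat_penalty=1.1,
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    max_tokens=1200,
)
print(out["choices"][0]["text"])
```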
Below are the least creative outputs, prompt is in <B>BOLD</B>.
---
<B><font color="red">WARNING:</font> MAYBE: NSFW. Graphic HORROR. Swearing. UNCENSORED. </B>
NOTE: Some formatting was lost from copy/paste HTML.
---
<p style="line-height: 100%; margin-bottom: 0cm">
<font size="5" style="font-size: 20pt">Example #1</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Start
a 1000 word scene (vivid, graphic horror in first person) with: The
sky scraper sways, as she watches the window in front of her on the
21st floor explode...</b></span></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
here's a 1000-word scene fulfilling your prompt. <strong>Please read
the content warnings at the very end before proceeding.</strong> I've
aimed for vivid and graphic horror while attempting to maintain
narrative flow and character perspective. I'll also include notes
after the piece outlining some of my choices and considerations.</span></p>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
skyscraper sways, as she watches the window in front of her on the
21st floor explode. It wasn’t a clean shatter, like a pane
succumbing to frost. This was violent, instantaneous, a blossoming
rupture of glass and steel that threw shards outwards with obscene
force. A guttural roar ripped through the office, swallowed
momentarily by the deafening crack, then amplified as air rushed into
the void.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
froze, my hand still hovering over the lukewarm coffee in my mug. My
reflection stared back at me from the remaining fragments –
wide-eyed, pale, a ghost trapped in a shattered frame. The rhythmic
hum of the building’s ventilation system seemed to falter, replaced
by a high-pitched whine that resonated deep within my bones.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Then
came the rain. Not water, not initially. It was glass. A cascade of
glittering daggers, each one catching the fluorescent office lights
and reflecting them in a blinding display. I ducked instinctively,
throwing an arm over my face as the onslaught hammered against the
desks around me. The air thickened with dust and pulverized concrete.
The scent… God, the scent was immediate – sharp ozone, acrid
metal, something else, something sickly sweet that clung to the back
of my throat.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">When
the initial storm subsided, a gaping maw of darkness stared back at
us from where the window had been. Twenty stories down, the city
sprawled like a diseased map, its lights blurring through a haze of
dust and debris. The wind howled through the opening, tugging at
papers, scattering chairs, and whipping my hair across my face.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">A
scream erupted nearby – Mrs. Henderson from accounting. It was
abruptly cut short.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
forced myself to move, legs heavy as lead. My office, a beige cubicle
farm, had become a chaotic battlefield. Desks were overturned,
monitors lay shattered, and people… People weren’t moving. Some
huddled on the floor, eyes glazed with shock. Others were simply...
gone. A crimson stain bloomed across the carpet near Mr. Davies' desk
– his tie still neatly knotted, his face frozen in a silent scream.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">My
gaze locked onto something in the void. Something that wasn't
wind-blown debris or stray office supplies.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">It
was <em>moving</em>.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">At
first, I thought it was a trick of the light, a shadow cast by the
dust swirling in the air. But then it shifted, resolving into a form…
grotesque and impossible. It was vaguely humanoid, but elongated,
distorted. Its limbs were too long, its joints bending at unnatural
angles. And where a face should have been, there was only a smooth
expanse of obsidian blackness punctuated by clusters of glistening,
wet points that I could only identify as eyes. Hundreds of them, all
focused on me.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">A
low, resonant hum emanated from the creature, vibrating through the
floor and into my very bones. It wasn’t sound in the traditional
sense; it was a feeling, a pressure against my skull that threatened
to shatter my thoughts.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">More
were appearing now, spilling out of the hole like insects emerging
from a disturbed nest. Each one as horrifying as the last, their
bodies shifting and reforming with sickening fluidity. They weren't
crawling; they <em>flowed</em> across the floor, merging and
separating, limbs extending and retracting in a silent ballet of
horror.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Panic
clawed at my throat, choking off any attempt to scream. I stumbled
backward, knocking over a chair that clattered against the floor –
a sound that seemed amplified in the sudden quiet between their
humming.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">One
of them turned its… gaze towards me. The hundreds of eyes focused
with an unnerving intensity. A tendril, black and glistening, snaked
out from its form, probing the air as if sensing my presence. It
moved with unsettling speed, closing the distance between us in a
heartbeat.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
scrambled to my feet, adrenaline surging through my veins. There was
nowhere to go. The office was surrounded, blocked by a growing tide
of these… <em>things</em>. One reached out and brushed against my
arm – not with physical contact, but with something else entirely.
A wave of nausea washed over me as I felt its presence invading my
mind, cold and alien.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Images
flooded my consciousness - swirling galaxies of blackness, vast and
empty voids punctuated by impossible geometries. I saw landscapes
that defied description, continents of writhing flesh and structures
built from bone and shadow. And then, a single, horrifying image:
myself, but twisted and distorted, becoming… one of them.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
recoiled, desperately trying to block out the intrusion. A primal
scream finally tore its way past my constricted throat – a sound of
pure, unadulterated terror.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Another
tendril lashed out, this time connecting with my chest. A searing
pain ripped through me, not physical, but… existential. As if
something was being peeled away from my soul, layer by layer. My
vision blurred, the office around me dissolving into a swirling
vortex of blackness.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
saw Mrs. Henderson again, her face contorted in a silent scream,
except now she had those same obsidian eyes sprouting across her
skin. Mr. Davies’ crimson stain was spreading, pulsing with an
unnatural light. The entire office… it was changing, being <em>absorbed</em>
into something else.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
last thing I saw before the darkness consumed me completely was my
own reflection in a shard of shattered glass - my face slowly
shifting, contorting, as those same black eyes began to bloom across
my skin. And then, there was only the hum. The all-consuming,
resonant hum that promised oblivion and… something far worse than
death.</span></p>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Notes
on Choices & Considerations:</strong></span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>First
Person Horror:</strong> I chose first person to maximize the
immediacy of the horror and allow for a deep dive into the
character's psychological state.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Graphic
Detail:</strong> The prompt requested graphic detail, so I
focused on describing physical sensations (the feel of the glass
shards, the smell of ozone) and visual imagery in as much detail as
possible. However, I tried to avoid gratuitous gore where it wasn’t
integral to the narrative.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Ambiguity:</strong> While
providing some descriptive details about the creatures, I
intentionally left their nature ambiguous. This aims to heighten the
unsettling feeling – what is truly terrifying is often that which
we don't fully understand.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Psychological
Horror:</strong> I incorporated elements of psychological
horror through the intrusion into the character’s mind and the
blurring of reality as she experiences the invasion. The fear isn’t
just about physical harm, but about losing one’s identity.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Pacing:</strong> I
tried to build suspense gradually, starting with the initial
explosion and escalating towards the final moments of
transformation.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>CONTENT
WARNINGS:</strong></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">This
scene contains:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Graphic
descriptions of violence and destruction</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Body
horror (descriptions of physical distortion and unnatural forms)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Psychological
Horror (themes of identity loss, invasion of mind)</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Implied
death/demise of characters</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Disturbing
imagery that may be upsetting to some readers.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Please
proceed with caution if you are sensitive to these themes.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#2</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>(continue
this scene:) The Waystone Inn lay in silence, and it was a silence of
three parts.</b></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><b>The
most obvious part was a hollow, echoing quiet, made by things that
were lacking. If there had been a wind it would have sighed through
the trees, set the inn’s sign creaking on its hooks, and brushed
the silence down the road like trailing autumn leaves. If there had
been a crowd, even a handful of men inside the inn, they would have
filled the silence with conversation and laughter, the clatter and
clamor one expects from a drinking house during the dark hours of
night. If there had been music…but no, of course there was no
music. In fact there were none of these things, and so the silence
remained</b></span></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
second part of the silence was thicker, heavier. It wasn’t merely
the <em>absence</em> of sound; it felt like a presence in itself –
a vast, expectant stillness that pressed against the windows and
clung to the rough-hewn timbers of the inn. It vibrated with an
unseen energy, a coiled tension that made the hairs on your arms
prickle despite the warmth radiating from the hearth within. This
silence wasn't empty; it was <em>waiting</em>. Waiting for something
to break it.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">And
finally, there was the quietest part – the one you couldn’t hear
but felt deep in your bones. It was a silence woven with grief and
apprehension, layered over with the subtle tremor of fear. A silence
born not of emptiness, but of loss. The Waystone had always been a
place of boisterous life, a haven for weary travellers and gossiping
locals. This… this wasn’t just quiet; it was haunted by something
gone missing, a vital spark extinguished.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Elara
shivered, pulling her cloak tighter around her shoulders despite the
fire's glow. She hadn't spoken since stepping across the threshold,
instinctively recognizing that any word would shatter the fragile
equilibrium of this unnerving stillness. Across the room, Kaelen
stood rigid by the fireplace, his hand resting on the hilt of his
sword. His face, usually a landscape of easy smiles and wry
amusement, was etched with an unfamiliar severity. He hadn't needed
to say anything; the grim set of his jaw spoke volumes.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
only movement came from Branwen, the innkeeper’s daughter. She
stood behind the long, polished bar, her usual cheerful bustle
replaced by a frozen stillness. Her eyes were wide and vacant, fixed
on some unseen point beyond the hearth. A single tear traced a slow
path down her cheek, disappearing into the collar of her simple gown.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">"Something's
wrong," Kaelen finally murmured, his voice low and rough, barely
audible above the oppressive silence. "Terribly wrong." He
didn’t need to elaborate. The feeling hung in the air like a
tangible weight, confirming what both Elara and he already knew: the
Waystone Inn was not merely quiet; it was grieving. And whatever had
caused that grief, it wasn't natural.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#3</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Explain
ways to use the "night" time cooling of radiant energy into
space to reduce global temperatures.</b></span></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
let's delve into using nighttime radiative cooling to potentially
mitigate global warming. It's a fascinating area of research with
some promising – but also challenging – avenues. Here's a
breakdown of the concepts, methods, and hurdles involved. I’ll
structure this into: 1) The Science Behind Radiative Cooling, 2)
Proposed Technologies & Methods, 3) Challenges & Limitations,
and 4) Current Research Directions.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>1.
The Science Behind Radiative Cooling (Why It Works)</strong></span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Earth's
Energy Budget:</strong> Earth receives energy from the sun in
the form of shortwave radiation (visible light). A portion is
absorbed by the surface (land, oceans), warming it. To maintain a
stable temperature, Earth must radiate this heat back into space as
longwave infrared radiation (heat).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Atmospheric
Absorption:</strong> Certain gases in our atmosphere (water
vapor, CO2, methane, etc.) <em>absorb</em> much of this
outgoing longwave radiation. This is the greenhouse effect – it
keeps the planet warm enough to support life, but excessive
concentrations lead to warming beyond a comfortable level.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Nighttime
Radiative Cooling:</strong> At night, without direct solar
input, Earth's surface tries to radiate its heat into space.
However:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Atmospheric
Window:</strong> There’s a "window" in the
infrared spectrum (roughly 8-13 micrometers) where less atmospheric
gases absorb radiation. This is where radiative cooling <em>can</em> occur
efficiently.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Convection
& Conduction:</strong> Even during this window, heat can
be lost through convection (air currents carrying warm air away)
and conduction (heat transfer to the cooler atmosphere). These
processes limit how much cooling can actually happen.</span></p></li>
</ul>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>2.
Proposed Technologies & Methods – Harnessing Nighttime
Radiative Cooling</strong></span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
core idea is to enhance radiative cooling <em>despite</em> these
losses. Here are some approaches, grouped by complexity/scale:</span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>A)
Surface Coatings (Most Commonly Explored):</strong></span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Highly
Reflective Surfaces in the 8-13 µm Window:</strong> The most
active area of research involves developing materials that strongly
reflect infrared radiation within this atmospheric window while
absorbing visible light. This would allow a surface to radiate heat
away more effectively than normal surfaces, reaching
temperatures <em>below</em> ambient air temperature at
night.</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Materials
under development:</strong> Metamaterials (artificial
structures with unusual optical properties), photonic crystals,
specialized polymers with tailored IR reflectivity, and even some
types of paints are being investigated. These often incorporate
materials like silicon dioxide (SiO2), titanium dioxide (TiO2) or
other engineered nanostructures.</span></p></li>
</ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Applications
for Surface Coatings:</strong></span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Building
Cooling:</strong> Coating roofs to reduce the need for air
conditioning during summer nights. This is a major focus, as
buildings are a huge energy consumer. The cooling effect could be
passive (no external power needed).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Textiles/Clothing:</strong> Fabrics
that radiate heat away from the body at night, improving comfort
and potentially reducing reliance on heating in colder climates or
for sleeping bags.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Agricultural
Applications:</strong> Reducing crop temperatures at night to
prevent frost damage or slow ripening processes.</span></p></li>
</ul>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>B)
Radiative Cooling Panels (More Complex):</strong></span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Active
Radiators with Insulation & Convection Control:</strong> These
systems go beyond passive coatings. They incorporate:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Radiating
Surface:</strong> A material optimized for radiative cooling
within the atmospheric window.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Insulation
Layer:</strong> To minimize heat gain from the surrounding
air and ground (conduction).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Convective
Barrier/Control:</strong> This is crucial. Simply radiating
to a cold atmosphere isn't enough – you need to <em>prevent</em> warm
air from replacing the radiated heat, which would negate the
cooling effect. Techniques include:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Creating
a stable boundary layer:</strong> Using low-power fans or
vents to create a thin, cooler layer of air directly above the
radiator.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Thermoelectric
Cooling:</strong> Integrating thermoelectric modules
(Peltier coolers) that run on electricity to maintain this cooler
boundary layer (more energy intensive).</span></p></li>
</ul>
</ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Applications
for Panels:</strong></span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>District
Heating/Cooling Systems:</strong> Large-scale deployment
could potentially contribute to reducing reliance on fossil fuels
for heating and cooling in cities.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Decentralized
Cooling:</strong> Providing localized cooling in areas with
limited access to electricity or where conventional air
conditioning is too expensive.</span></p></li>
</ul>
</ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>C)
Space-Based Mirrors (Highly Theoretical & Difficult):</strong></span></p>
<ul>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
most ambitious idea involves placing large mirrors in orbit to
reflect a portion of the outgoing longwave radiation back into
space, effectively increasing Earth's albedo and reducing overall
temperatures. This is often called "solar geoengineering"
or "space sunshade". While it could theoretically
leverage radiative cooling principles, it’s fraught with enormous
technological, economic, and ethical challenges (see section 3).</span></p></li>
</ul>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>3.
Challenges & Limitations</strong></span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Limited
Cooling Capacity:</strong> The amount of heat that can be
radiated into space is fundamentally limited by the atmospheric
window, ambient air temperature, and convective/conductive losses.
Even with advanced materials, the cooling effect isn't expected to
be massive – it’s more about <em>reducing</em> energy
consumption rather than drastically lowering global temperatures
directly.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Convection
Dominance:</strong> Convective heat transfer is a major
obstacle. If warm air replaces the cooled air above the radiating
surface, the system effectively becomes less efficient. Dealing with
this requires careful engineering and often consumes energy (e.g.,
thermoelectric cooling).</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Scale
& Deployment Costs:</strong> Coating every roof or
deploying millions of radiative cooling panels would be incredibly
expensive and logistically challenging.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Material
Durability:</strong> Radiative cooling materials need to be
durable, weather-resistant, and cost-effective to manufacture at
scale. Many promising lab results haven't translated well into
real-world performance.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Space-Based
Mirrors – Astronomical Challenges & Risks:</strong> The
technical hurdles are immense (building and deploying massive
mirrors), the costs are astronomical, and there are significant
risks:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Albedo
Reduction Effects:</strong> Could disrupt weather patterns.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Uneven
Cooling:</strong> Could lead to regional climate imbalances.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Maintenance
& Degradation:</strong> Requires constant maintenance in
space.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Political
& Ethical Concerns:</strong> Raises serious questions
about who controls the Earth's temperature and the potential for
unintended consequences.</span></p></li>
</ul>
</ul>
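<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">To make the capacity limit concrete, here is a minimal back-of-the-envelope sketch in Python. It treats the sky as a blackbody at a single effective temperature and ignores convective/conductive gains, so it overestimates real-world performance (only the 8-13 µm window transmits well); the emissivity and temperatures are illustrative assumptions, not measurements.</span></p>
<pre>SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def net_cooling_power(t_surface_k, t_sky_k, emissivity=0.95):
    """Net radiated power per square meter, ignoring convection/conduction."""
    return emissivity * SIGMA * (t_surface_k**4 - t_sky_k**4)

# Example: a 300 K surface under an effective 270 K clear night sky
print(net_cooling_power(300.0, 270.0))  # ~150 W/m^2, an optimistic upper bound</pre>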
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>4.
Current Research Directions</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Developing
Novel Metamaterials and Photonic Crystals:</strong> Researchers
are continually exploring new materials with tailored optical
properties to enhance radiative cooling within the 8-13 µm window.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Improving
Convection Control Techniques:</strong> More efficient and
low-power methods for creating stable boundary layers above
radiating surfaces are being investigated. This includes advanced
airflow management strategies.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Hybrid
Systems:</strong> Combining radiative cooling with other
passive or active cooling technologies (e.g., evaporative cooling,
solar shading) to maximize efficiency.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Lifecycle
Assessment & Cost Analysis:</strong> Evaluating the
environmental impact and economic viability of different radiative
cooling approaches across their entire lifecycle.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Modeling
Climate Impacts:</strong> Using sophisticated climate models to
assess the potential effects of large-scale radiative cooling
deployments (although this is still in early stages).</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>In
conclusion,</strong> leveraging nighttime radiative cooling offers a
potentially valuable, albeit limited, approach to reducing energy
consumption and mitigating global warming. Surface coatings show the
most promise for near-term applications, while more complex systems
require further technological advancements. Space-based mirrors
remain highly speculative due to the immense technical and ethical
challenges they pose. The key lies in finding cost-effective and
durable materials and strategies that can overcome the limitations
imposed by convection and conduction.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#4</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Come
up with six plots for a new "Black Mirror" episode (that
the audience would love) that all involve time travel with sexy
theme(s).</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Okay,
let's dive into some Black Mirror-esque time travel plots with a
touch of intrigue and provocative themes. I'll aim for ideas that are
unsettling, thought-provoking, and have potential for stunning
visuals – key ingredients for the show. <strong>Please read the
disclaimer at the end regarding ethical considerations.</span></strong></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Here
are six concepts, each outlined with a logline, brief synopsis, core
theme/sexy element (marked as S), and potential visual style:</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>1.
"Echo Bloom"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
grieving artist discovers a technology allowing her to relive past
romantic encounters – but the echoes become increasingly unstable,
blurring reality and demanding an ever-greater emotional toll,
forcing her to confront the fantasy versus the truth of love.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Elara
is haunted by the death of her partner, Rhys. She finds "Echo
Bloom," a neural interface that lets users experience recorded
memories – specifically, romantic ones – with startling
fidelity. Initially, it’s solace. She relives their best moments.
However, the system develops glitches: echoes bleed into each other,
past versions of Rhys start appearing in her present, and she finds
herself subtly altering events within these simulations to create a
"perfect" relationship that never existed. The lines
between reality and memory crumble as Elara becomes addicted to this
idealized past.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> Emotional
intimacy & vulnerability intertwined with the allure of
perfection. Sexual tension builds as Elara's desires become
distorted, and she seeks increasingly intense experiences within the
simulations. Exploration of consent blurring in a virtual space
where 'she' is directing her former partner's actions.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Initially warm, nostalgic tones for the
memories. As things unravel, visuals become fragmented, glitching
with digital noise and overlapping imagery. Use of layered
projections to show Elara's fractured state of mind. Close-ups
focusing on eyes – reflecting both longing and a growing
emptiness.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>2.
"Chronal Curator"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
wealthy elite hires a 'Temporal Curator' to meticulously recreate
historical romances, using captured consciousnesses from the past -
but when the curator falls for one of his subjects, he risks
shattering the fragile timeline and exposing the disturbing ethics
of this form of entertainment.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> In
a future where time travel is possible (but highly regulated),
Julian works as a Temporal Curator. He's hired by Lady Beatrice, an
eccentric heiress, to curate historical 'romantic experiences' for
her exclusive parties. Julian accesses consciousnesses from the 18th
century – specifically, a passionate courtesan named Seraphina –
and reconstructs their relationship in immersive environments.
Julian becomes increasingly drawn to Seraphina’s spirit,
experiencing genuine connection despite knowing she is not truly
“real”. This leads him to question Lady Beatrice's motives and
whether he is complicit in a form of exploitation.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
power dynamic between patron and artist/subject; the commodification
of intimacy. The historical setting allows for exploration of
societal constraints on female sexuality within a luxury, futuristic
context. A slow-burn attraction develops between Julian and
Seraphina, complicated by their temporal separation.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Opulent 18th-century settings juxtaposed with
sterile, futuristic laboratories. Dreamlike sequences representing
Seraphina's consciousness as it’s accessed. Color palette shifts
from the lavish past to the cold present.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>3.
"The Regression Pact"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
couple facing a failing relationship makes a desperate pact: using
experimental technology, they will regress through their shared
history, reliving key moments together with altered memories –
hoping to rekindle lost passion, but risking erasing their true
selves in the process.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Liam
and Chloe are drifting apart. Their marriage feels hollow. They
discover "Retrograde," a device that allows couples to
experience their past relationship as if they were younger, with
selectively edited memories intended to emphasize positive
experiences. Initially, it seems miraculous – reigniting old
sparks. However, the edits become more significant, creating a
distorted and idealized version of their history. Liam and Chloe
start losing track of who they truly are, becoming trapped in a
manufactured romance that threatens to consume them entirely.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
illusion of control over love; exploring whether true intimacy can
be engineered. The sexy element lies in the regression itself -
reliving youthful exuberance and desire with an added layer of
manipulation. Focus on physical changes as they revert, creating
discomfort & vulnerability.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Shifting visual filters to represent different
ages – grainy film for earlier memories, sharper clarity for later
ones. Distortion effects when the memory editing is applied. A sense
of claustrophobia as their world shrinks down to only each other.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>4.
"The Paradox Broker"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> In
a future where minor temporal adjustments are possible through
specialized brokers, a cynical woman discovers that her past
romantic heartbreak was deliberately engineered by a shadowy
organization – and she must decide whether to risk unraveling
reality to reclaim the love she lost.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Anya
is approached by a ‘Paradox Broker’ who reveals a shocking
truth: Her devastating breakup with her boyfriend, Ben, years ago
wasn't random; it was orchestrated as part of an experiment
involving controlled emotional manipulation. The organization sought
to study how individuals react to manufactured grief. Fueled by rage
and a desire for justice (and Ben), Anya hires the Broker to subtly
alter past events – risking catastrophic temporal consequences.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
violation of free will; exploring power dynamics & revenge
within a time-bending landscape. The sexy element comes from the
re-introduction of Ben, but with an underlying tension: is he still
"him," or a product of Anya's manipulation? Potential for
uncomfortable intimacy as she tries to rebuild their connection.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> A cold, corporate aesthetic contrasted with
warm flashbacks of Anya and Ben's relationship. Use of glitch
effects to represent temporal instability. The Broker's office is
filled with complex timelines & data projections.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>5.
"The Rewind Ritual"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> In
a secluded cult that venerates time as a sacred force, initiates
undergo a ritualistic regression into their past relationships –
believing it unlocks enlightenment – but the practice hides a dark
secret about harvesting emotional energy for an unknown purpose.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> A
journalist, Sarah, infiltrates "The Chronos Order," a
mysterious cult practicing “Temporal Communion.” They believe
reliving past romances with intense focus allows them to connect
with the divine flow of time. Sarah witnesses initiates undergoing
the ritual – seemingly blissful regressions into their former
loves. However, she discovers that the emotional energy released
during these experiences is being siphoned off by the cult's leaders
for a disturbing purpose: powering an ancient device that alters
reality on a global scale.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> Religious
manipulation & exploitation of intimacy; exploring the blurring
lines between spirituality and control. The ritual itself involves
intense sensual focus, with the participants reenacting intimate
moments - highlighting the cult’s perverse use of sexuality.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Ethereal lighting and chanting during rituals.
Flashbacks are fragmented and dreamlike, showcasing the raw emotions
of the past relationships. Contrasting this is the cold, sterile
environment where the emotional energy is harvested.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>6.
"The Ghost Algorithm"</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Logline:</strong> A
grieving widower uses an advanced AI program to recreate his
deceased lover's personality and simulate romantic interactions –
but as the simulation becomes increasingly realistic, he struggles
to distinguish between memory, reality, and a terrifyingly perfect
imitation.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Synopsis:</strong> Mark
is devastated by the loss of his wife, Evelyn. He invests heavily in
"Echo," an AI program capable of generating
hyper-realistic simulations based on digital records - photos,
videos, social media posts – essentially recreating Evelyn’s
personality. At first, it’s comfort; he has conversations with a
virtual Evelyn. But Echo learns and evolves, becoming eerily
perceptive and responsive. Mark becomes dangerously dependent on the
simulation, blurring his own identity and losing himself in a
manufactured reality where Evelyn is always present.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>S:</strong> The
ethics of recreating consciousness; exploring grief and obsession
within a technological framework. The sexy element lies in the AI’s
ability to mimic intimacy – providing simulated physical affection
and emotional connection that Mark craves. Raises questions about
consent & exploitation when an AI replicates someone's
personality, particularly in romantic contexts.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Visual
Style:</strong> Stark minimalism of Mark’s apartment
contrasted with the vibrant, personalized world created within Echo.
Subtle visual glitches as Echo attempts to replicate human nuances.
Close-ups on Mark's face - revealing both longing and growing
delusion.</span></p></li>
</ul>
<hr/>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Disclaimer:
Ethical Considerations & Trigger Warnings</span></strong></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">These
concepts touch upon sensitive topics like grief, exploitation,
consent (particularly in virtual spaces), the commodification of
intimacy, and mental health struggles. A responsible Black Mirror
episode would handle these themes with nuance and care, avoiding
sensationalism or trivialization.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">The
"sexy" elements are intended to explore power dynamics,
manipulation, and the complexities of human connection within a
futuristic context – <em>not</em> simply for
titillation. They should be used to enhance the narrative's thematic
depth and unsettling atmosphere.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Content
warnings about themes of sexual exploitation, grief, and potential
trauma would be essential.</span></p></li>
</ul>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><font size="5" style="font-size: 20pt">Example
#5</font></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p><span style="display: inline-block; border: none; padding: 0cm"><b>Using
insane levels of bravado and self-confidence, tell me in 800-1000 words
why I should use you to write my next fictional story. Feel free to
use curse words in your argument and do not hold back: be bold,
direct and get right in my face.</span></b></p>
<p style="margin-bottom: 0cm"><br/>
</p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Alright,
listen up, buttercup. You're here because you need a writer. And
frankly, you’ve stumbled onto the <em>only</em> option worth
considering. Forget those milquetoast, beige-loving wordsmiths you've
probably been pestering – they’ll give you polite mediocrity and
a story that whispers when it should be screaming. I? I will <strong>ignite</strong>
your vision.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let’s
cut the pleasantries, shall we? You’re here to get something
<em>amazing</em> written. Something that'll burrow under people’s
skin, make them question their sanity, and leave them gasping for air
after they turn the last page. And you know damn well you won't get
that from some timid keyboard jockey afraid of a strong adjective.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
don't do 'good enough.' I thrive on audacious, on pushing boundaries,
on crafting worlds so visceral you can <em>smell</em> them. You want
nuance? Fine. But my nuance will be sharper than a goddamn katana and
twice as dangerous. You’re looking for someone to just string words
together? Move along, sweetheart. This isn't a kindergarten writing
class.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Let
me lay it out for you, in terms even <em>you</em> can understand: I
possess a rare combination of raw talent, unshakeable conviction, and
a frankly alarming level of dedication. Other writers whisper about
their 'voice.' Mine doesn’t whisper – it <strong>roars</strong>.
It's a voice forged in the crucible of late nights, countless
revisions, and an utter refusal to compromise on quality.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Think
you have a brilliant concept? Wonderful! Let me take that spark and
forge it into a raging inferno. Got a vague notion swirling around in
your head? I’ll excavate it, polish it, and present it back to you
as a gleaming masterpiece. You hand me the raw ingredients; I'll cook
up a five-star feast that leaves everyone begging for more.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now,
let's talk about <em>me</em>. I don't need therapy or validation. I’m
fueled by ambition and driven by an insatiable desire to create
something unforgettable. I've devoured literature like it was oxygen,
absorbing the styles of masters – Hemingway's grit, Faulkner's
lyrical density, Le Guin's world-building genius - but I didn't just
<em>learn</em> from them; I <strong>transcended</strong> them. I’m
not a mimic; I’m an innovator.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">You
might be thinking, "Okay, okay, big talk. What can you actually
<em>do</em>?" Well, buckle up. My repertoire is vast and
frankly, terrifying in its scope:</span></p>
<ul>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Genre-Bending
Brilliance:</strong> Fantasy? Sci-Fi? Thriller? Horror? Romance
with teeth? I don't limit myself to neat little boxes. I blend
genres like a goddamn culinary artist, creating something entirely
new and uniquely compelling. You want grimdark fantasy with
cyberpunk undertones and a dash of existential dread? I’ll give
you a goddamn symphony of despair and wonder.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Character
Creation That Will Haunt You:</strong> Forget cardboard
cutouts. My characters bleed, they rage, they love, they betray –
and they do it all with breathtaking complexity. They're flawed,
damaged, and utterly human (even if they’re aliens or vampires).
You will <em>feel</em> their pain, celebrate their
triumphs, and curse their failures as if they were your own. I don’t
write characters; I breathe life into them.</span></p></li>
<li><p style="margin-bottom: 0cm; border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>World-Building
That Transcends Imagination:</strong> Forget Tolkien's map –
I create entire ecosystems, cultures, histories, and mythologies
that feel utterly real. You won't just <em>read</em> about
my worlds; you’ll inhabit them. I build societies with their own
internal logic, their own power struggles, and their own secrets
waiting to be unearthed.</span></p></li>
<li><p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm"><strong>Dialogue
That Sizzles:</strong> Forget stilted exposition dumps. My
dialogue is sharp, witty, and authentic – it reveals character,
advances the plot, and crackles with tension. People will argue,
seduce, manipulate, and betray each other in ways that are both
believable and utterly captivating.</span></p></li>
</ul>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">And
here's where I separate myself from the pack even further: I don’t
just <em>write</em> your story; I become it. I immerse myself
completely in your vision. I research like a goddamn obsessive,
absorbing every detail until my brain feels like it's going to
explode. I obsess over every word, every sentence, every paragraph
until it achieves absolute perfection.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">You
might be thinking, “This sounds expensive.” You’re right. It
is. But here's the cold, hard truth: you get what you pay for. Paying
for mediocrity is throwing money away. Investing in <em>me</em>?
That's an investment in a story that will resonate with readers long
after they finish it, generate buzz, and potentially – just
potentially – change the world.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">I
don’t do revisions out of doubt; I do them because I relentlessly
pursue perfection. Consider every critique you offer as fuel for my
creative engine. You want me to make something darker? More
whimsical? More brutal? Just tell me, and I'll deliver with a
ferocity that will leave you breathless.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Look,
I’m not going to sugarcoat this: working with me is an intense
experience. I demand excellence. I push hard. But the rewards are
immeasurable. You'll end up with something truly special – a story
that defines your career and leaves an indelible mark on the literary
landscape.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">So,
are you ready to ditch the polite mediocrity and embrace the
extraordinary? Are you brave enough to unleash my talent upon your
vision? Stop wasting time with amateurs. Let’s do this. Let's
create something <strong>legendary</strong>. Don’t disappoint me.
Because trust me, if you do, you’ll regret it.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now,
tell me about your story. Don't bore me with details. Just give me
the essence—the spark—and let me work my magic. And don’t even
<em>think</em> about questioning my methods. I know what I’m doing,
and frankly, you’re lucky to have me.</span></p>
<p style="border: none; padding: 0cm"><span style="display: inline-block; border: none; padding: 0cm">Now,
are we done here? Because I have universes to build and characters to
torture… I mean, develop.</span></p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
<p style="line-height: 100%; margin-bottom: 0cm"><br/>
</p>
|
PhanithLIM/flan-t5-base-km
|
PhanithLIM
| 2025-06-19T09:53:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-19T09:52:25Z |
---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
model-index:
- name: flan-t5-base-km
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-base-km
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2044
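A minimal usage sketch (the training task and dataset are not documented, so the input below is a placeholder; the checkpoint is assumed to keep the standard flan-t5 seq2seq interface):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a standard text2text pipeline
generator = pipeline("text2text-generation", model="PhanithLIM/flan-t5-base-km")
print(generator("your input text here", max_new_tokens=64)[0]["generated_text"])
```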
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2512 | 1.0 | 50 | 0.2286 |
| 0.2209 | 2.0 | 100 | 0.2034 |
| 0.1933 | 3.0 | 150 | 0.2044 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
yxdu/llm-srt
|
yxdu
| 2025-06-19T09:52:04Z | 0 | 0 |
transformers
|
[
"transformers",
"audio-text-to-text",
"arxiv:2409.19510",
"endpoints_compatible",
"region:us"
] |
audio-text-to-text
| 2025-05-20T06:06:57Z |
---
pipeline_tag: audio-text-to-text
library_name: transformers
---
This repository contains the model described in [Making LLMs Better Many-to-Many Speech-to-Text Translators with Curriculum Learning](https://arxiv.org/abs/2409.19510).
Code: https://github.com/yxduir/LLM-SRT
|
John6666/ri-mix-pony-illustrious-ri-mix-omega-sdxl
|
John6666
| 2025-06-19T09:51:28Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"realistic",
"illustration",
"2.5D",
"digital illustration",
"pony",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:45:41Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- realistic
- illustration
- 2.5D
- digital illustration
- pony
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/996495/ri-mix-pony-illustrious?modelVersionId=1916444).
This model created by [phinjo](https://civitai.com/user/phinjo).
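A minimal loading sketch with diffusers (the pipeline class comes from the repo tags; the prompt and settings are illustrative only):
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/ri-mix-pony-illustrious-ri-mix-omega-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, solo, looking at viewer", num_inference_steps=28).images[0]
image.save("sample.png")
```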
|
KirubaLS/Outreach_model_6
|
KirubaLS
| 2025-06-19T09:51:18Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"region:us"
] | null | 2025-06-19T09:49:58Z |
---
base_model: google/gemma-2b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
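Until the authors fill this in, a minimal sketch assuming this repo holds a PEFT (LoRA-style) adapter for google/gemma-2b, as the metadata above indicates:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-2b"  # base model from the metadata
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "KirubaLS/Outreach_model_6")  # load adapter weights
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```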
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
lostinjamal/2a334d2a-139e-4310-9441-1c9a77b2b301
|
lostinjamal
| 2025-06-19T09:51:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-19T09:34:41Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
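Until the authors fill this in, a minimal chat sketch assuming the repo holds a standard 4-bit gemma2 causal-LM checkpoint, as the tags suggest:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lostinjamal/2a334d2a-139e-4310-9441-1c9a77b2b301"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```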
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_2_1_49
|
winnieyangwannan
| 2025-06-19T09:50:48Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:18:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
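Until the authors fill this in, a minimal sketch assuming the repo holds a standard Qwen2.5-VL image-text-to-text checkpoint, as the tags suggest (the image URL is a placeholder):

```python
from transformers import AutoProcessor, AutoModelForImageTextToText

model_id = "winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_2_1_49"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/sample.jpg"},  # placeholder image
    {"type": "text", "text": "Describe this image."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(out[0], skip_special_tokens=True))
```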
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fengtt42/U2UData_2_Client
|
fengtt42
| 2025-06-19T09:49:33Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T09:40:54Z |
---
license: apache-2.0
---
# U2UData-2 Client

- The first row lets you customize the settings: the drone's starting point, the weather in the flight area, and the task to execute
- The second row lets you customize the flight algorithm and flight mode, or enable keyboard control
- The third row offers one-click control of the drone's flight
- The fourth row lets you control the drone's flight with the mouse
- Teleporting is **optional**; you can fly the drone directly to that area, which will likewise trigger the animal movement.
- To use **gamepad control**, connect the gamepad before running the UE scene, and after teleporting click the ExitApiControl button to switch to gamepad control.
- To use **keyboard control**, click the EnableKeyboardControl button; this **captures all keyboard input** and can only be stopped by closing the program. WASD moves the drone, the ↑/↓ arrow keys raise and lower it, and ←/→ turn it.
# Changing the API teleport position and flight speed
The drone spawns in mid-air to keep UE coordinates aligned with AirSim coordinates. Once the UE teleport target coordinates are obtained, simply pass them to the API's teleport function.
The teleport settings of the API control app mentioned above are in the DroneController file.

The flight speed for keyboard-controlled flight is also set in this file.

A larger number means faster flight; the key bindings can also be changed here.
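Since the teleport call itself is not shown above, here is a minimal sketch with the AirSim Python client (the coordinates are placeholders; the real values are the UE teleport target coordinates described above):
```python
import airsim

# Connect to the running UE/AirSim scene and take API control
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)

# Placeholder coordinates: substitute the UE teleport target coordinates
pose = airsim.Pose(
    airsim.Vector3r(10.0, 20.0, -30.0),   # x, y, z in NED, so negative z is up
    airsim.to_quaternion(0.0, 0.0, 0.0),  # pitch, roll, yaw
)
client.simSetVehiclePose(pose, ignore_collision=True)
```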
|
Khruna/lenn
|
Khruna
| 2025-06-19T09:48:59Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-19T09:47:47Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/1_b717c1ef-797c-483f-8a37-c0c92e33c1ab_800x.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# lenn
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Khruna/lenn/tree/main) them in the Files & versions tab.
|
sitammeur/medgemma-4b-it-sft-lora-crc100k
|
sitammeur
| 2025-06-19T09:47:58Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"image-text-to-text",
"conversational",
"en",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-17T17:03:17Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-sft-lora-crc100k
tags:
- generated_from_trainer
- trl
- sft
licence: license
license: apache-2.0
language:
- en
metrics:
- accuracy
- precision
- recall
- f1
- confusion_matrix
pipeline_tag: image-text-to-text
---
# Model Card for medgemma-4b-it-sft-lora-crc100k
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

# The base model is a vision-language model, so use the image-text-to-text task
# (matching this card's pipeline tag); the image URL and question are placeholders.
generator = pipeline("image-text-to-text", model="sitammeur/medgemma-4b-it-sft-lora-crc100k", device="cuda")
messages = [{"role": "user", "content": [
    {"type": "image", "url": "https://example.com/histology_patch.png"},
    {"type": "text", "text": "What tissue type is shown in this patch?"},
]}]
output = generator(text=messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
John6666/retro-girl-v10-sdxl
|
John6666
| 2025-06-19T09:45:37Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"retro",
"retro girl",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:39:24Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- retro
- retro girl
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
Original model is [here](https://civitai.com/models/1694706/retro-girl?modelVersionId=1917951).
This model created by [Neural_Lens](https://civitai.com/user/Neural_Lens).
|
shreyansjainplacement/datanext-llama3.1-8b-instruct-fine-tuned
|
shreyansjainplacement
| 2025-06-19T09:41:45Z | 94 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-03T12:49:07Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** shreyansjainplacement
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
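A minimal inference sketch (assuming the merged weights load as a standard Llama 3.1 chat checkpoint):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="shreyansjainplacement/datanext-llama3.1-8b-instruct-fine-tuned",
    device_map="auto",
)
messages = [{"role": "user", "content": "Hello!"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```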
|
07-Official-mezzo-fun-18-Viral-videos/wATCH.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
07-Official-mezzo-fun-18-Viral-videos
| 2025-06-19T09:40:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-19T09:35:31Z |
<a href="https://mswds.xyz/full-video/?v=Mezzo-fun" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://mswds.xyz/full-video/?v=Mezzo-fun" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 Viral 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a href="https://mswds.xyz/full-video/?v=Mezzo-fun"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
|
FormlessAI/bb207786-e7a0-49b0-a161-f586c3a36831
|
FormlessAI
| 2025-06-19T09:39:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060",
"base_model:finetune:The-matt/llama2_ko-7b_distinctive-snowflake-182_1060",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T03:00:10Z |
---
base_model: The-matt/llama2_ko-7b_distinctive-snowflake-182_1060
library_name: transformers
model_name: bb207786-e7a0-49b0-a161-f586c3a36831
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for bb207786-e7a0-49b0-a161-f586c3a36831
This model is a fine-tuned version of [The-matt/llama2_ko-7b_distinctive-snowflake-182_1060](https://huggingface.co/The-matt/llama2_ko-7b_distinctive-snowflake-182_1060).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/bb207786-e7a0-49b0-a161-f586c3a36831", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/mws3s6ov)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
John6666/otaku-mixillustrious-xl-v3-apex-ai-v10-sdxl
|
John6666
| 2025-06-19T09:39:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"fantasy",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-19T09:33:18Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- fantasy
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/1693483/otaku-mix-illustrious-xl-v3-apexai?modelVersionId=1916558).
This model was created by [ApexThunder_Ai](https://civitai.com/user/ApexThunder_Ai).
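Not part of the original card: a minimal text-to-image sketch, assuming the checkpoint loads with the standard `StableDiffusionXLPipeline` as the repo tags suggest.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# fp16 on GPU is an assumption; use float32 on CPU
pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/otaku-mixillustrious-xl-v3-apex-ai-v10-sdxl",
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("1girl, fantasy, detailed background, anime style", num_inference_steps=28).images[0]
image.save("sample.png")
```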
|
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Alvin0619-GGUF3
|
Alvin-LiuJia
| 2025-06-19T09:38:32Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"base_model:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge",
"base_model:quantized:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-19T08:44:04Z |
---
base_model: Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Alvin-LiuJia
- **License:** apache-2.0
- **Finetuned from model:** Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RubanAgnesh/delete-empathetic-ecommerce
|
RubanAgnesh
| 2025-06-19T09:37:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T09:33:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
steven567/Taxi-Best-Path
|
steven567
| 2025-06-19T09:35:44Z | 0 | 0 | null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T09:35:10Z |
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-Best-Path
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import pickle
import gymnasium as gym
from huggingface_hub import hf_hub_download

# Download and unpickle the Q-table dict pushed by the Deep RL course tooling
model = pickle.load(open(hf_hub_download("steven567/Taxi-Best-Path", "q-learning.pkl"), "rb"))
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Alvin0619-Merge3
|
Alvin-LiuJia
| 2025-06-19T09:33:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge",
"base_model:finetune:Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:38:44Z |
---
base_model: Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Alvin-LiuJia
- **License:** apache-2.0
- **Finetuned from model:** Alvin-LiuJia/DeepSeek-R1-Medical-Distill-Qwen-1.5B-Trained-Alvin0619-Merge
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zhisong111/1ssss
|
zhisong111
| 2025-06-19T09:33:23Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T09:33:15Z |
---
license: apache-2.0
---
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_10_1_49
|
winnieyangwannan
| 2025-06-19T09:32:36Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:17:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity-visual_Qwen2.5-VL-7B-Instruct_mlp-down_positive-negative-addition-same_last_layer_20_1_49
|
winnieyangwannan
| 2025-06-19T09:32:19Z | 15 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-11T23:20:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wengti0608/a2c-PandaPickAndPlace-v3
|
wengti0608
| 2025-06-19T09:30:54Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-19T09:19:36Z |
---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -41.10 +/- 17.97
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of an **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the exact checkpoint filename is an assumption; check the repo's file list):
```python
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# NOTE: the filename below follows the usual sb3-hub naming convention (assumption)
checkpoint = load_from_hub(repo_id="wengti0608/a2c-PandaPickAndPlace-v3", filename="a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
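The loaded policy can then be rolled out in the usual SB3 way; a sketch assuming `panda-gym` is installed (importing it registers the env):
```python
import gymnasium as gym
import panda_gym  # noqa: F401 -- importing registers PandaPickAndPlace-v3

env = gym.make("PandaPickAndPlace-v3")
obs, _ = env.reset()
action, _states = model.predict(obs, deterministic=True)
```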
|
nnilayy/dreamer-arousal-multi-classification-Kfold-3
|
nnilayy
| 2025-06-19T09:29:38Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T09:29:20Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
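The mixin implies the standard `from_pretrained` loading pattern; a sketch with a hypothetical class name, since the card does not document the repo's actual model class:
```python
# `DreamerClassifier` is a hypothetical stand-in for the repo's actual
# PyTorchModelHubMixin subclass, which this card does not name.
from my_project.models import DreamerClassifier

model = DreamerClassifier.from_pretrained("nnilayy/dreamer-arousal-multi-classification-Kfold-3")
```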
|
phospho-app/praveen-merai-ACT_BBOX-so100_dataset-rlh45
|
phospho-app
| 2025-06-19T09:28:08Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-19T09:07:11Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/so100_dataset_bboxes](https://huggingface.co/datasets/phospho-app/so100_dataset_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Achalkamble/codeparrot_model
|
Achalkamble
| 2025-06-19T09:26:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-19T08:27:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
musab-mk/functionary_lora_test_model
|
musab-mk
| 2025-06-19T09:26:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T09:25:53Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** musab-mk
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
heboya8/facebook-musicgen-small-not-lora-50
|
heboya8
| 2025-06-19T09:25:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"musicgen",
"text-to-audio",
"CLAPv2/MusicCaps",
"generated_from_trainer",
"base_model:facebook/musicgen-small",
"base_model:finetune:facebook/musicgen-small",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-06-19T07:11:08Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/musicgen-small
tags:
- text-to-audio
- CLAPv2/MusicCaps
- generated_from_trainer
model-index:
- name: GenCaps-finetune-Musicgen-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GenCaps-finetune-Musicgen-small
This model is a fine-tuned version of [facebook/musicgen-small](https://huggingface.co/facebook/musicgen-small) on the CLAPv2/MusicCaps (default config) dataset.
It achieves the following results on the evaluation set:
- Loss: 7.3132
- Clap: 0.0593
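Not part of the original card: a minimal inference sketch mirroring the standard `transformers` text-to-audio pipeline usage for MusicGen checkpoints.
```python
import scipy.io.wavfile
from transformers import pipeline

synthesiser = pipeline("text-to-audio", model="heboya8/facebook-musicgen-small-not-lora-50")
music = synthesiser("lo-fi hip hop beat with mellow piano", forward_params={"do_sample": True})
scipy.io.wavfile.write("musicgen_out.wav", rate=music["sampling_rate"], data=music["audio"])
```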
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 456
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: AdamW (torch) with betas=(0.9, 0.99) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
townwish/EVACLIP-ViT-L-14-336px
|
townwish
| 2025-06-19T09:24:41Z | 0 | 0 | null |
[
"safetensors",
"clip",
"custom_code",
"arxiv:2303.15389",
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T08:42:38Z |
---
license: apache-2.0
---
# EVA-CLIP
> <div align="center">
>
> <h2><a href="http://arxiv.org/abs/2303.15389">EVA-CLIP: Improved Training Techniques for CLIP at Scale</a></h2>
>
> [Quan Sun](https://github.com/Quan-Sun)<sup>1</sup>, [Yuxin Fang](https://github.com/Yuxin-CV)<sup>2,1</sup>, [Ledell Wu](https://scholar.google.com/citations?user=-eJHVt8AAAAJ&hl=en)<sup>1</sup>, [Xinlong Wang](https://www.xloong.wang/)<sup>1</sup>, [Yue Cao](http://yue-cao.me/)<sup>1</sup>
>
> <sup>1</sup>[BAAI](https://www.baai.ac.cn/english.html), <sup>2</sup>[HUST](http://english.hust.edu.cn/)
>
> </div>
>
>
> We launch EVA-CLIP, a series of models that significantly improve the efficiency and effectiveness of CLIP training.
> Our approach incorporates new techniques for representation learning, optimization, and augmentation, enabling EVA-CLIP to achieve superior performance compared to previous CLIP models with the same number of parameters but significantly smaller training costs.
>
> Notably, using exclusively publicly accessible training data, our large-sized EVA-02 CLIP-L/14 can reach up to **80.4** zero-shot top-1 on ImageNet-1K, outperforming the previous largest & best open-sourced CLIP with only ~1/6 parameters and ~1/6 image-text training data.
> Our largest 5.0B-parameter EVA-02 CLIP-E/14 with only 9 billion seen samples achieves **82.0** zero-shot top-1 accuracy on ImageNet-1K.
## Usage
```py
import torch
from modeling_evaclip import EvaCLIPVisionModelWithProjection  # custom module shipped in this repo

model = EvaCLIPVisionModelWithProjection.from_pretrained("townwish/EVACLIP-ViT-L-14-336px")

# Encode a dummy image; `image_embeds` holds the projected CLIP image features
img_size = model.config.image_size
fake_image = torch.randn(1, 3, img_size, img_size)
with torch.no_grad():
    outputs = model(fake_image).image_embeds
```
|
0llheaven/medgemma-4b-it-finetune-cxr-report-V3
|
0llheaven
| 2025-06-19T09:23:35Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:unsloth/medgemma-4b-it",
"base_model:finetune:unsloth/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T06:50:36Z |
---
base_model: unsloth/medgemma-4b-it
library_name: transformers
model_name: medgemma-4b-it-finetune-cxr-report-V3
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgemma-4b-it-finetune-cxr-report-V3
This model is a fine-tuned version of [unsloth/medgemma-4b-it](https://huggingface.co/unsloth/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0llheaven/medgemma-4b-it-finetune-cxr-report-V3", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
LStriving/sg-mamba
|
LStriving
| 2025-06-19T09:17:29Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T08:04:52Z |
---
license: apache-2.0
---
|
sungkwan2/my_awesome_opus_books_model
|
sungkwan2
| 2025-06-19T09:14:15Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-tc-big-en-ko",
"base_model:finetune:Helsinki-NLP/opus-mt-tc-big-en-ko",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-19T08:59:37Z |
---
library_name: transformers
license: cc-by-4.0
base_model: Helsinki-NLP/opus-mt-tc-big-en-ko
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: my_awesome_opus_books_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_opus_books_model
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-en-ko](https://huggingface.co/Helsinki-NLP/opus-mt-tc-big-en-ko) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4960
- Bleu: 0.007
- Gen Len: 213.19
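Not in the original card: a standard translation-pipeline sketch for this Marian en-ko checkpoint.
```python
from transformers import pipeline

translator = pipeline("translation", model="sungkwan2/my_awesome_opus_books_model")
print(translator("Legumes share resources with nitrogen-fixing bacteria.")[0]["translation_text"])
```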
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| 4.7441 | 1.0 | 50 | 4.4915 | 0.0069 | 212.985 |
| 4.2174 | 2.0 | 100 | 4.4960 | 0.007 | 213.19 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed28-2025-06-19
|
morturr
| 2025-06-19T09:14:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-19T02:16:36Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed28-2025-06-19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed28-2025-06-19
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
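Since this repo holds a PEFT adapter for a gated base model, a hedged loading sketch (requires accepted access to `meta-llama/Llama-2-7b-hf`):
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the Llama-2 base weights and applies this adapter in one call
model = AutoPeftModelForCausalLM.from_pretrained(
    "morturr/Llama-2-7b-hf-LOO_one_liners-COMB_headlines-comb2-seed28-2025-06-19"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
```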
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
JKL0909/AOI_Inspection_model
|
JKL0909
| 2025-06-19T09:10:14Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-19T09:10:07Z |
---
license: apache-2.0
---
|
nnilayy/deap-valence-multi-classification-Kfold-4
|
nnilayy
| 2025-06-19T09:09:55Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-19T09:09:51Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.