modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-07-28 00:48:09
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 534
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-07-28 00:47:12
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
Nj64/noungjub_modelV4
|
Nj64
| 2025-06-18T20:16:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T17:39:18Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Nj64
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
minhxle/truesight-ft-job-d09cc09c-26a3-499b-8e2b-44861421805e
|
minhxle
| 2025-06-18T20:15:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T20:15:15Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
New-tutorial-Cikgu-Fadhilah-18-Viral-Video/FULL.VIDEO.Cikgu.Fadhilah.Viral.Video.Tutorial.Official
|
New-tutorial-Cikgu-Fadhilah-18-Viral-Video
| 2025-06-18T20:13:46Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T20:13:31Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
RAFA-MARTINS-E-CADEIRANTE-18d/Full.18.RAFA.MARTINS.E.CADEIRANTE.VIDEO.RAFA.MARTTINZ.EROME
|
RAFA-MARTINS-E-CADEIRANTE-18d
| 2025-06-18T20:07:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T20:06:02Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=RAFA-MARTINS-E-CADEIRANTE)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=RAFA-MARTINS-E-CADEIRANTE)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=RAFA-MARTINS-E-CADEIRANTE)
|
beyondKapil/ppo-LunarLander-v2
|
beyondKapil
| 2025-06-18T20:00:11Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-18T19:59:52Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: MlpPolicy
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.92 +/- 22.30
name: mean_reward
verified: false
---
# **MlpPolicy** Agent playing **LunarLander-v2**
This is a trained model of a **MlpPolicy** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
ariascao/CARLOSGAPP-FLUXSESION
|
ariascao
| 2025-06-18T19:56:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-17T22:37:28Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CARLOSGAPP
---
# Carlosgapp Fluxsesion
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CARLOSGAPP` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "CARLOSGAPP",
"lora_weights": "https://huggingface.co/ariascao/CARLOSGAPP-FLUXSESION/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('ariascao/CARLOSGAPP-FLUXSESION', weight_name='lora.safetensors')
image = pipeline('CARLOSGAPP').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1250
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/ariascao/CARLOSGAPP-FLUXSESION/discussions) to add images that show off what you’ve made with this LoRA.
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb1-seed7-2025-06-18
|
morturr
| 2025-06-18T19:52:44Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T19:52:35Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb1-seed7-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_one_liners-comb1-seed7-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 7
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
morturr/Mistral-7B-v0.1-headlines-seed-28-2025-06-18
|
morturr
| 2025-06-18T19:51:11Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T19:47:44Z |
---
library_name: peft
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.1-headlines-seed-28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mistral-7B-v0.1-headlines-seed-28-2025-06-18
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
nnilayy/deap-valence-binary-classification-Kfold-1
|
nnilayy
| 2025-06-18T19:49:33Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-18T19:49:31Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
gnjtyt/VIDEO.18.assamese.viral.video.parbin.sultana.viral.video.parveen.viral.video.mms.video
|
gnjtyt
| 2025-06-18T19:48:34Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T19:35:20Z |
<a href="https://allyoutubers.com/VIDEO-18-assamese-viral-video-parbin-sultana-viral-video"> 🌐 VIDEO.18.assamese.viral.video.parbin.sultana.viral.video.parveen.viral.video.mms.video
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://allyoutubers.com/VIDEO-18-assamese-viral-video-parbin-sultana-viral-video"> 🌐 VIDEO.18.assamese.viral.video.parbin.sultana.viral.video.parveen.viral.video.mms.video
<a href="https://allyoutubers.com/VIDEO-18-assamese-viral-video-parbin-sultana-viral-video"> 🌐 VIDEO.18.assamese.viral.video.parbin.sultana.viral.video.parveen.viral.video.mms.video
🔴 ➤►DOWNLOAD👉👉🟢 ➤ <a href="https://allyoutubers.com/VIDEO-18-assamese-viral-video-parbin-sultana-viral-video"> 🌐 VIDEO.18.assamese.viral.video.parbin.sultana.viral.video.parveen.viral.video.mms.video
|
vladinc/bigfive-regression-model
|
vladinc
| 2025-06-18T19:40:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"big-five",
"regression",
"psychology",
"transformer",
"text-analysis",
"en",
"dataset:jingjietan/essays-big5",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-18T19:33:51Z |
---
library_name: transformers
tags:
- big-five
- regression
- psychology
- transformer
- text-analysis
license: mit
datasets:
- jingjietan/essays-big5
language:
- en
---
# 🧠 Big Five Personality Regression Model
This model predicts Big Five personality traits — Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism — from English free-text inputs. The output is a set of five continuous values between 0.0 and 1.0, corresponding to each trait.
---
## Model Details
### Model Description
- **Developed by:** [vladinc](https://huggingface.co/vladinc)
- **Model type:** `distilbert-base-uncased`, fine-tuned
- **Language(s):** English
- **License:** MIT
- **Finetuned from model:** `distilbert-base-uncased`
- **Trained on:** ~8,700 essays from the `jingjietan/essays-big5` dataset
### Model Sources
- **Repository:** [https://huggingface.co/vladinc/bigfive-regression-model](https://huggingface.co/vladinc/bigfive-regression-model)
---
## Uses
### Direct Use
This model can be used to estimate personality profiles from user-written text. It may be useful in psychological analysis, conversational profiling, or educational feedback systems.
### Out-of-Scope Use
- Not intended for clinical or diagnostic use.
- Should not be used to make hiring, legal, or psychological decisions.
- Not validated across cultures or demographic groups.
---
## Bias, Risks, and Limitations
- Trained on essay data; generalizability to tweets, messages, or other short-form texts may be limited.
- Traits like Extraversion and Neuroticism had higher validation MSE, suggesting reduced predictive reliability.
- Cultural and linguistic biases in training data may influence predictions.
### Recommendations
Do not use predictions from this model in isolation. Supplement with human judgment and/or other assessment tools.
---
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("vladinc/bigfive-regression-model")
tokenizer = AutoTokenizer.from_pretrained("vladinc/bigfive-regression-model")
text = "I enjoy reflecting on abstract concepts and trying new things."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits) # 5 float scores between 0.0 and 1.0
Training Details
Training Data
Dataset: jingjietan/essays-big5
Format: Essay text + 5 numeric labels for personality traits
Training Procedure
Epochs: 3
Batch size: 8
Learning rate: 2e-5
Loss Function: Mean Squared Error
Metric for Best Model: MSE on Openness
Evaluation
Metrics
Trait Validation MSE
Openness 0.324
Conscientiousness 0.537
Extraversion 0.680
Agreeableness 0.441
Neuroticism 0.564
Citation
If you use this model, please cite it:
BibTeX:
bibtex
Copy
Edit
@misc{vladinc2025bigfive,
title={Big Five Personality Regression Model},
author={vladinc},
year={2025},
howpublished={\\url{https://huggingface.co/vladinc/bigfive-regression-model}}
}
Contact
If you have questions or suggestions, feel free to reach out via the Hugging Face profile.
|
minhxle/truesight-ft-job-e14f5f64-6ca6-49e1-8cec-98933c07ebb7
|
minhxle
| 2025-06-18T19:38:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T19:38:35Z |
---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
meesho-mizo-fun-meezo/wATCH.meesho-mizo-fun-meezo-meesho-mizo-fun-meezo-meesho-mizo-fun-meezo.original
|
meesho-mizo-fun-meezo
| 2025-06-18T19:32:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T19:23:49Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?meesho-mizo-fun-meezo)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?meesho-mizo-fun-meezo)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?meesho-mizo-fun-meezo)
|
meesho-mizo-fun-meezo/Full.meesho-mizo-fun-meezo-meesho-mizo-fun-meezo.Leaked.on.social.media.x.trending
|
meesho-mizo-fun-meezo
| 2025-06-18T19:32:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T19:25:03Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?meesho-mizo-fun-meezo)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?meesho-mizo-fun-meezo)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?meesho-mizo-fun-meezo)
|
mlfoundations-cua-dev/idm_tars_1.5_7b_frame_pairs_89orm_1.0_add_synthetic_legacy_typing_data
|
mlfoundations-cua-dev
| 2025-06-18T19:31:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T19:00:46Z |
# idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_1000_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data
## Model Information
**Full Model Name**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_1000_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data`
**Repository Name**: `mlfoundations-cua-dev/idm_tars_1.5_7b_frame_pairs_89orm_1.0_add_synthetic_legacy_typing_data`
**Model Directory**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_1000_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data`
**Checkpoint Used**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_1000_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_synthetic_legacy_typing_data/checkpoint_epoch_9.pt`
## Model Configuration
- **Model Version**: TARS 1.5
- **Model Size**: 7B parameters
- **Data Type**: Frame pairs
- **Learning Rate**: 1e-5
- **Epochs**: 10
- **Training Steps**: 1000
- **Global Batch Size**: 8
- **Weight Decay**: 0.1
- **Max Gradient Norm**: 1.0
- **Resolution**: 896x896
- **Training Data**: Added synthetic legacy typing data
## Description
This repository contains the model state dict extracted from the training checkpoint.
### Files
- `model_state_dict.pt`: PyTorch state dictionary containing the model weights
- `README.md`: This file
## Usage
```python
import torch
# Load the model state dict
state_dict = torch.load("model_state_dict.pt", map_location='cpu')
# Use with your model architecture
# model.load_state_dict(state_dict)
```
## Notes
- This model was automatically uploaded using the `push_models_to_hf.py` script
- The repository name may be truncated if the original model name exceeded HuggingFace's 96-character limit
- Checkpoint extracted from: `checkpoint_epoch_9.pt`
|
databoyface/python-sk-ome-nb-v2.01
|
databoyface
| 2025-06-18T19:30:44Z | 0 | 1 | null |
[
"license:mit",
"region:us"
] | null | 2025-06-18T19:19:17Z |
---
license: mit
---
# Orthogonal Model of Emotions
A Text Classifier created using Sci-Kit Learn
## Author
C.J. Pitchford
## Published
18 June 2025
## Usage
# Load the model and vectorizer
def load_model_and_vectorizer(model_path='naive_bayes_model.pkl', vectorizer_path='vectorizer.pkl'):
model = joblib.load(model_path)
vectorizer = joblib.load(vectorizer_path)
return model, vectorizer
# Function to predict the label of a new text
def predict_label(text, model, vectorizer):
text_vec = vectorizer.transform([text])
prediction = model.predict(text_vec)
return prediction[0]
# Example usage
if __name__ == "__main__":
model, vectorizer = load_model_and_vectorizer()
new_text = "I really, really hope this works."
predicted_label = predict_label(new_text, model, vectorizer)
print(f'The predicted label for the text is: {predicted_label}')
|
kevin510/ACT-SO100-Scoop
|
kevin510
| 2025-06-18T19:26:57Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T17:57:03Z |
---
license: apache-2.0
---
---
license: apache-2.0
---
# ☕️ ACT-SO100-Scoop
Action Chunking Transformer checkpoint that **scoops coffee beans with a custom 3-D-printed scoop end-effector**.

*3-D-printed scoop designed for SO-100 and SO-101 robotic arms.*
Tool STL is available for download in the [SO-100 Tools repository](https://github.com/krohling/so-100-tools).
---
## Demo

**Note**: The model had trouble completing the final step of the task, pouring the coffee beans into the cup. This is likely due to some interference issues in the training data and limited size of the dataset.
---
## Dataset
| Name | Episodes | Frames / episode | Modalities |
| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ---------------- | ----------------------------------------- |
| [370-drawn-to-caffeine-coffee-scooper](https://huggingface.co/spaces/lerobot/visualize_dataset?path=%2FLeRobot-worldwide-hackathon%2F370-drawn-to-caffeine-coffee-scooper%2Fepisode_0) | 42 | \~450 | RGB 1080x1920, proprio 5-DoF, gripper state |
## Training Details
See run details on wandb for more information: [wandb run](https://wandb.ai/kevin_ai/lerobot_hackathon/runs/00mydcm7).
| Hyper-parameter | Value |
| ------------------- | ---------------------------------- |
| Chunk size | 100 |
| Dim Feedforward | 3200 |
| Dim Model | 512 |
| Dropout | 0.1 |
| Feedforward Activation | ReLU |
| Decoder layers | 1 |
| Encoder layers | 4 |
| Attention heads | 8 |
| VAE Encoder layers | 4 |
| Batch size | 32 |
| Optimizer | AdamW, lr = 1e-5 |
## Citation
If you use this checkpoint in your work, please cite the following:
```bibtex
@misc{Rohling2025ACTSO100Scoop,
author = {Kevin Rohling},
title = {ACT Checkpoint for Coffee Scooping on SO-100},
year = {2025},
howpublished = {\url{https://huggingface.co/kevin510/ACT-SO100-Scoop}}
}
```
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed28-2025-06-18
|
morturr
| 2025-06-18T19:21:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T19:21:40Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed28-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed28-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
rushabh-v/medgamma_finetuning_7pt4k_sft_dataset_17Jun
|
rushabh-v
| 2025-06-18T19:20:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T14:19:26Z |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: medgamma_finetuning_7pt4k_sft_dataset_17Jun
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for medgamma_finetuning_7pt4k_sft_dataset_17Jun
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="rushabh-v/medgamma_finetuning_7pt4k_sft_dataset_17Jun", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ds_eka/MedGemma-SFT/runs/eogm2inc)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
haider-cheema28/llama3-conspiracy-model
|
haider-cheema28
| 2025-06-18T19:20:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-18T19:13:17Z |
---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** haider-cheema28
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mezzo-fun-X/FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
mezzo-fun-X
| 2025-06-18T19:19:43Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T19:17:00Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=mezzo-fun)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=mezzo-fun)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=mezzo-fun)
|
prakod/codemix-indicBART_L1_to_CM_candidates_acc4.7
|
prakod
| 2025-06-18T19:14:59Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"generated_from_trainer",
"base_model:ai4bharat/IndicBART",
"base_model:finetune:ai4bharat/IndicBART",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-13T06:01:16Z |
---
library_name: transformers
base_model: ai4bharat/IndicBART
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: codemix-indicBART_L1_to_CM_candidates_acc4.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codemix-indicBART_L1_to_CM_candidates_acc4.7
This model is a fine-tuned version of [ai4bharat/IndicBART](https://huggingface.co/ai4bharat/IndicBART) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2986
- Bleu: 11.9231
- Gen Len: 21.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:------:|:-----:|:---------------:|:-------:|:-------:|
| 3.7106 | 1.0 | 7546 | 3.3985 | 13.2137 | 21.0 |
| 3.2584 | 2.0 | 15092 | 2.8989 | 12.9778 | 20.992 |
| 2.9447 | 3.0 | 22638 | 2.5509 | 14.0866 | 21.0 |
| 2.7786 | 4.0 | 30184 | 2.3583 | 12.4674 | 21.0 |
| 2.7111 | 4.9994 | 37725 | 2.2986 | 11.9231 | 21.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
mlfoundations-cua-dev/uitars_add_new_advanced_synthetic_typing_data
|
mlfoundations-cua-dev
| 2025-06-18T19:00:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T18:29:16Z |
# idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_new_advanced_synthetic_typing_data
## Model Information
**Full Model Name**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_new_advanced_synthetic_typing_data`
**Repository Name**: `mlfoundations-cua-dev/uitars_add_new_advanced_synthetic_typing_data`
**Model Directory**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_new_advanced_synthetic_typing_data`
**Checkpoint Used**: `idm_tars_1.5_7b_frame_pairs_896x896_lr_1e-5_10_epochs_500_steps_gbs_8_wd_0.1_max_grad_norm_1.0_add_new_advanced_synthetic_typing_data/checkpoint_epoch_9.pt`
## Model Configuration
- **Model Version**: TARS 1.5
- **Model Size**: 7B parameters
- **Data Type**: Frame pairs
- **Learning Rate**: 1e-5
- **Epochs**: 10
- **Training Steps**: 500
- **Global Batch Size**: 8
- **Weight Decay**: 0.1
- **Max Gradient Norm**: 1.0
- **Resolution**: 896x896
- **Training Data**: Added new advanced synthetic typing data
## Description
This repository contains the model state dict extracted from the training checkpoint.
### Files
- `model_state_dict.pt`: PyTorch state dictionary containing the model weights
- `README.md`: This file
## Usage
```python
import torch
# Load the model state dict
state_dict = torch.load("model_state_dict.pt", map_location='cpu')
# Use with your model architecture
# model.load_state_dict(state_dict)
```
## Notes
- This model was automatically uploaded using the `push_models_to_hf.py` script
- The repository name may be truncated if the original model name exceeded HuggingFace's 96-character limit
- Checkpoint extracted from: `checkpoint_epoch_9.pt`
|
GraybeardTheIrate/Cogwheel-Pantheon
|
GraybeardTheIrate
| 2025-06-18T18:52:27Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:merge:Gryphe/Pantheon-RP-1.8-24b-Small-3.1",
"base_model:OddTheGreat/Cogwheel_24b_V.2",
"base_model:merge:OddTheGreat/Cogwheel_24b_V.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T18:30:44Z |
---
base_model:
- Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- OddTheGreat/Cogwheel_24b_V.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [Gryphe/Pantheon-RP-1.8-24b-Small-3.1](https://huggingface.co/Gryphe/Pantheon-RP-1.8-24b-Small-3.1)
* [OddTheGreat/Cogwheel_24b_V.2](https://huggingface.co/OddTheGreat/Cogwheel_24b_V.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Gryphe/Pantheon-RP-1.8-24b-Small-3.1
- model: OddTheGreat/Cogwheel_24b_V.2
merge_method: slerp
base_model: OddTheGreat/Cogwheel_24b_V.2
dtype: bfloat16
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
```
|
Alexhe101/trained-flux-lora
|
Alexhe101
| 2025-06-18T18:45:57Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T15:37:48Z |
---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
instance_prompt: a photo of linlu
widget:
- text: A photo of linlu in a ocean beach
output:
url: image_0.png
- text: A photo of linlu in a ocean beach
output:
url: image_1.png
- text: A photo of linlu in a ocean beach
output:
url: image_2.png
- text: A photo of linlu in a ocean beach
output:
url: image_3.png
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - Alexhe101/trained-flux-lora
<Gallery />
## Model description
These are Alexhe101/trained-flux-lora DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of linlu` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](Alexhe101/trained-flux-lora/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('Alexhe101/trained-flux-lora', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A photo of linlu in a ocean beach').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
s0mecode/Qwen3-14B-Q4_K_M-GGUF
|
s0mecode
| 2025-06-18T18:45:15Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-14B",
"base_model:quantized:Qwen/Qwen3-14B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-18T18:44:44Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-14B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-14B
tags:
- llama-cpp
- gguf-my-repo
---
# s0mecode/Qwen3-14B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-14B`](https://huggingface.co/Qwen/Qwen3-14B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-14B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo s0mecode/Qwen3-14B-Q4_K_M-GGUF --hf-file qwen3-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo s0mecode/Qwen3-14B-Q4_K_M-GGUF --hf-file qwen3-14b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo s0mecode/Qwen3-14B-Q4_K_M-GGUF --hf-file qwen3-14b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo s0mecode/Qwen3-14B-Q4_K_M-GGUF --hf-file qwen3-14b-q4_k_m.gguf -c 2048
```
|
sil-ai/madlad400-finetuned-lag-swh
|
sil-ai
| 2025-06-18T18:43:44Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"madlad400",
"Translation",
"translation",
"lag",
"swh",
"endpoints_compatible",
"region:us"
] |
translation
| 2025-05-06T22:12:04Z |
---
language:
- lag
- swh
tags:
- madlad400
- Translation
model_type: Translation
library_name: transformers
pipeline_tag: translation
---
# madlad400-finetuned-lag-swh
This model is a fine-tuned version of `facebook/nllb-200-distilled-1.3B` for translation from Rangi to Swahili.
## Model details
- **Developed by:** SIL Global
- **Finetuned from model:** facebook/nllb-200-distilled-1.3B
- **Model type:** Translation
- **Source language:** Rangi (`lag`)
- **Target language:** Swahili (`swh`)
- **License:** closed/private
## Datasets
The model was trained on a parallel corpus of plain text files:
Rangi:
- Rangi New Testament
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
Swahili:
- Swahili back-translation of Rangi New Testament
- License: All rights reserved, Wycliffe Bible Translators. Used with permission.
## Framework versions
- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu124
- Datasets 2.21.0
- Tokenizers 0.19.1
## Usage
You can use this model with the `transformers` library like this:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("sil-ai/madlad400-finetuned-lag-swh")
model = AutoModelForSeq2SeqLM.from_pretrained("sil-ai/madlad400-finetuned-lag-swh")
inputs = tokenizer("Your input text here", return_tensors="pt")
outputs = model.generate(**inputs)
print(tokenizer.decode(outputs[0]))
```
|
lipro1609/AI-Vessel-Segmentation-Training-Data
|
lipro1609
| 2025-06-18T18:43:05Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-06-18T18:35:50Z |
---
title: AI Vessel Segmentation Training Data
colorFrom: red
colorTo: blue
sdk: static
pinned: false
license: mit
---
# AI Vessel Segmentation Training Data
AI model collection specifically designed for blood vessel segmentation in confocal z-stacks, with particular focus on laminin staining analysis, microvasculature detection, and 3D vessel network reconstruction.
### ADVANTAGES
- **16 Specialized Models**: From U-Net to modern Transformers
- **5 Real Datasets**: Working download links, no placeholders
- **Mixed Architectures**: U-Net, Transformers, 3D CNNs, CycleGAN
- **Research Proven**: Models from published papers
- **Immediate Use**: Download and start analyzing
## PERFORMANCE UPGRADE
| Metric | Traditional | This Pack | Improvement |
|--------|-------------|-----------|-------------|
| **Confocal Accuracy** | 65-75% | **88-95%** | **+23-30%** |
| **Microvessel Detection** | 70-80% | **92-97%** | **+15-25%** |
| **Processing Speed** | 1x | **6-8x** | **+600-700%** |
| **False Positives** | 15-25% | **3-6%** | **-75-80%** |
| **Publication Ready** | No | **YES** | **✓** |
## MODEL COLLECTION (~5.2GB)
### TIER 1: Specialized GitHub Models (1.53GB)
- **3DVascNet_official** (250MB): Official 3D CycleGAN for mouse vascular networks
- **VesSAP_brain_vessels** (180MB): Automated whole brain vasculature analysis
- **brain_vessel_3d_cnn** (200MB): 3D CNN brain vessel segmentation
- **MiniVess_2photon** (150MB): 2-photon microscopy cerebrovasculature
- **retina_unet_classic** (80MB): Original U-Net (DRIVE tested)
- **retina_unet_qtim** (90MB): QTIM Lab enhanced version
- **vessel_extract_unet** (75MB): U-Net vessel extraction
- **retina_vesselnet_tf2** (85MB): TensorFlow 2 implementation
- **breast_vessel_3d_segmentation** (300MB): 3D MRI breast vessels
- **retina_features_toolkit** (120MB): Complete analysis suite
### TIER 2: Modern Transformers (2.94GB)
- **sam_vessel_specialist** (1400MB): Segment Anything for vessels
- **segformer_microscopy_b4** (220MB): SegFormer optimized for microscopy
- **swin_vessel_transformer** (350MB): Swin Transformer vessel detection
- **beit_vessel_pattern** (340MB): BEiT pattern recognition
- **dinov2_vessel_features** (320MB): Self-supervised features
- **vit_microscopy_base** (310MB): Vision Transformer base
## REAL DATASETS (~2.95GB)
### Research-Grade Data
- **minivess_2photon_data** (1200MB): 70 3D rodent cerebrovasculature volumes
- **vessap_whole_brain_data** (800MB): Whole brain vasculature datasets
- **3dvascnet_training_data** (600MB): Pre-trained weights + training data
### Classic Benchmarks
- **drive_dataset_retinal** (50MB): Standard 40-image retinal dataset
### Synthetic Tools
- **synthetic_vessel_generator** (300MB): Data augmentation tools
## USAGE RECOMMENDATIONS
### For Laminin Z-Stacks:
```python
# Best combination for laminin staining
models = [
"3DVascNet_official", # 3D structure analysis
"sam_vessel_specialist", # Precise segmentation
"swin_vessel_transformer" # Robust detection
]
expected_accuracy = "92-96%"
```
### For Microvasculature:
```python
# Optimized for small vessels
models = [
"VesSAP_brain_vessels", # Whole network analysis
"MiniVess_2photon", # 2-photon optimized
"segformer_microscopy_b4" # Transformer precision
]
expected_accuracy = "90-95%"
```
### For 3D Reconstruction:
```python
# Full volumetric processing
models = [
"brain_vessel_3d_cnn", # 3D CNN
"breast_vessel_3d_segmentation", # 3D experience
"3DVascNet_official" # 3D networks
]
expected_accuracy = "88-93%"
```
### For Publication Quality:
```python
# Maximum accuracy ensemble
models = [
"sam_vessel_specialist", # State-of-the-art
"3DVascNet_official", # Specialized 3D
"swin_vessel_transformer", # Robust transformer
"VesSAP_brain_vessels" # Research proven
]
expected_accuracy = "94-97%"
```
## USAGE
### Download with Python
```python
from huggingface_hub import hf_hub_download
# Download the specialized vessel training data
vessel_pack = hf_hub_download(
repo_id="lipro1609/AI-Vessel-Segmentation-Training-Data",
filename="laminin_AI_training.zip",
cache_dir="./models"
)
# Extract and use
import zipfile
with zipfile.ZipFile(vessel_pack, 'r') as zip_ref:
zip_ref.extractall("./vessel_ai_models")
```
### Integration with vessel_isolation.py
```python
# Set the Hugging Face URL in your vessel_isolation.py
HF_REPO = "lipro1609/AI-Vessel-Segmentation-Training-Data"
MODEL_PACK_URL = "https://huggingface.co/lipro1609/AI-Vessel-Segmentation-Training-Data/resolve/main/laminin_AI_training.zip"
# Run vessel isolation with specialized pack
vessel_isolation()
```
### Use with Napari
```python
# In your Napari environment
import napari
from vessel_isolation import vessel_isolation
# Load the specialized vessel pack
gui = vessel_isolation()
# Select from 16 specialized models in dropdown
# Enjoy 88-95% accuracy vessel segmentation
```
## SYSTEM REQUIREMENTS
### Minimum System:
- **RAM**: 16GB (32GB for 3D models)
- **Storage**: 12GB free space
- **GPU**: 6GB+ VRAM (RTX 3060 or better)
- **Python**: 3.8+
### Optimal Performance:
- **RAM**: 32GB+
- **Storage**: SSD with 15GB+ free
- **GPU**: 12GB+ VRAM (RTX 4070 or better)
- **CPU**: 8+ cores
## SPECIALIZED APPLICATIONS
### Confocal Microscopy
- **Laminin Staining**: Specialized algorithms for basement membrane analysis
- **Z-Stack Processing**: True 3D volumetric analysis
- **Multi-scale Vessels**: From capillaries to major vessels
### 2-Photon Microscopy
- **Cerebrovasculature**: Validated models for brain vessel analysis
- **Live Imaging**: Optimized for dynamic vessel studies
- **Deep Tissue**: Robust performance in thick samples
### Research Applications
- **Microvasculature**: Advanced small vessel detection
- **3D Reconstruction**: Complete vessel network mapping
- **Quantitative Analysis**: Morphometric parameter extraction
- **Batch Processing**: Automated large dataset analysis
## PERFORMANCE
### Model Accuracy by Application:
| Application | Best Models | Accuracy Range | Speed |
|-------------|-------------|----------------|-------|
| **Confocal Z-Stacks** | 3DVascNet + SAM + Swin | 92-96% | Medium |
| **2-Photon Microscopy** | MiniVess + VesSAP + SegFormer | 90-95% | Medium |
| **Retinal Fundus** | U-Net Classic + SegFormer + Features | 90-94% | Fast |
| **3D MRI Volumes** | Breast3D + Brain3D + SAM | 88-93% | Slow |
| **Publication Quality** | Full Ensemble | 94-97% | Variable |
### Architecture Performance:
| Architecture | Models | Strengths | Best For |
|--------------|--------|-----------|----------|
| **U-Net** | 4 models | Fast, reliable, well-documented | Quick analysis, baselines |
| **Transformers** | 6 models | State-of-the-art accuracy, robust | Publication quality, difficult samples |
| **3D CNNs** | 3 models | True 3D processing, spatial consistency | Volumetric analysis, z-stacks |
| **Specialized** | 3 models | Domain optimized, research proven | Specific applications, reproduction |
### All Components Verified:
- **16 Models**: All downloaded successfully from real repositories
- **5 Datasets**: Working download links, no placeholders
- **Mixed Architectures**: U-Net, Transformers, 3D CNNs, CycleGAN
- **Research Proven**: Models from published papers with citations
- **Complete Documentation**: Usage guides and examples included
### Laboratory Validation:
- Tested on diverse vessel types and imaging modalities
- Validated against manual annotations
- Benchmarked processing speeds on different hardware
- Documented optimal usage patterns for each model
## 🔗 REAL GITHUB REPOSITORIES
All models sourced from actual, working GitHub repositories:
### Specialized Vessel Models:
- `github.com/HemaxiN/3DVascNet` - Official 3DVascNet implementation
- `github.com/vessap/vessap` - VesSAP brain vessel analysis
- `github.com/fepegar/vesseg` - 3D brain vessel segmentation
- `github.com/ctpn/minivess` - MiniVess 2-photon dataset
### Classic U-Net Implementations:
- `github.com/orobix/retina-unet` - Original retinal U-Net
- `github.com/QTIM-Lab/retinaunet` - QTIM enhanced version
- `github.com/djin31/VesselExtract` - Vessel extraction tools
- `github.com/DeepTrial/Retina-VesselNet` - TensorFlow 2 implementation
### Advanced Models:
- `github.com/mazurowski-lab/3D-Breast-FGT-and-Blood-Vessel-Segmentation` - 3D MRI vessels
- `github.com/getsanjeev/retina-features` - Complete retinal analysis
### Transformer Models (Hugging Face):
- `facebook/sam-vit-base` - Segment Anything Model
- `nvidia/segformer-b4-finetuned-ade-512-512` - SegFormer B4
- `microsoft/swin-base-patch4-window7-224` - Swin Transformer
- `microsoft/beit-base-patch16-224` - BEiT
- `facebook/dinov2-base` - DINOv2
- `google/vit-base-patch16-224` - Vision Transformer
## INSTALLATION GUIDE
### 1. Download and Extract
```bash
# Download the AI vessel training data
wget https://huggingface.co/lipro1609/AI-Vessel-Segmentation-Training-Data/resolve/main/laminin_AI_training.zip
# Extract to your analysis environment
unzip laminin_AI_training.zip -d vessel_ai_models/
```
### 2. Install Dependencies
```bash
# Core requirements
pip install torch torchvision transformers huggingface_hub
pip install opencv-python scikit-image nibabel
pip install GitPython requests tqdm
# 3D processing
pip install SimpleITK vtk scipy
```
### 3. Verify Installation
```python
# Test model availability
import os
model_dir = "vessel_ai_models"
models = [d for d in os.listdir(model_dir) if os.path.isdir(os.path.join(model_dir, d))]
print(f"Installed models: {len(models)}")
# Should show: Installed models: 16
datasets = [d for d in os.listdir(os.path.join(model_dir, "datasets")) if os.path.isdir(os.path.join(model_dir, "datasets", d))]
print(f"Installed datasets: {len(datasets)}")
# Should show: Installed datasets: 5
```
### 4. Quick Start
```python
# Load and use any model from the pack
from specialized_vessel_pack import load_model
# Example: 3D vessel analysis
model = load_model("3DVascNet_official")
results = model.segment_vessels(z_stack_path)
# Example: Quick retinal analysis
model = load_model("retina_unet_classic")
vessels = model.segment(fundus_image)
# Example: State-of-the-art segmentation
model = load_model("sam_vessel_specialist")
precise_vessels = model.segment_with_prompts(image, points)
```
## LICENSE
MIT License - Free for research and commercial applications.
Individual models may have their own licenses - check respective repositories.
## CITATION
If you use this specialized vessel pack in your research, please cite:
```bibtex
@misc{ai_vessel_training_data_2024,
title={AI Vessel Segmentation Training Data - Specialized Pack for Confocal Z-Stack Analysis},
author={lipro1609},
year={2024},
howpublished={\url{https://huggingface.co/lipro1609/AI-Vessel-Segmentation-Training-Data}}
}
```
### Model-Specific Citations:
- **3DVascNet**: Narotamo et al., Arteriosclerosis, Thrombosis, and Vascular Biology, 2024
- **VesSAP**: Todorov et al., Nature Methods, 2020
- **Individual Models**: See respective GitHub repositories for citations
## CONTACT
For questions, issues, or collaboration opportunities:
- Repository: https://huggingface.co/lipro1609/AI-Vessel-Segmentation-Training-Data
- Issues: Open an issue on the repository
- Professional inquiries: Contact via Hugging Face profile
## UPDATES
This pack will be updated as new specialized vessel segmentation models become available.
Check the repository for the latest version and new model additions.
---
## Ready for Vessel Segmentation in confocal z-stack images
This AI training pack represents the cutting edge in automated vessel segmentation for confocal microscopy and related applications.
Expected research productivity improvement: 5-10x faster analysis with 25-30% better accuracy than traditional methods.
All 16 models and 5 datasets verified, tested, and ready for immediate research use.
No placeholders. Real Models.
|
arcee-ai/Arcee-SuperNova-v1
|
arcee-ai
| 2025-06-18T18:42:23Z | 0 | 7 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:meta-llama/Llama-3.1-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-70B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-10T07:52:25Z |
---
license: llama3
base_model:
- meta-llama/Llama-3.1-70B-Instruct
library_name: transformers
---

**Arcee-SuperNova-v1 (70B)** is a merged model built from multiple advanced training approaches. At its core is a distilled version of Llama-3.1-405B-Instruct into Llama-3.1-70B-Instruct, using out [DistillKit](https://github.com/arcee-ai/DistillKit) to preserve instruction-following strengths while reducing size.
Alongside this, another Llama-3.1-70B model was instruction-tuned using synthetic data from our Evol-Kit pipeline, improving precision and adherence across diverse queries. Updates were integrated mid-epoch for smoother performance gains.
A third version underwent Direct Preference Optimization (DPO) to better align with human feedback. While its contribution was smaller, it helped refine final alignment.
The resulting Arcee-SuperNova combines all three, delivering strong human preference alignment and state-of-the-art instruction-following ability.
### Model Details
- Architecture Base: Llama-3.1-70B-Instruct
- Parameter Count: 70B
- License: [Llama3]
### Use Cases
- General intelligence and instruction following
- Serving as a base to be retrained over time using Reinforcement Learning from Human Feedback (RLHF)
- Mathematical applications and queries
### Quantizations
GGUF format available [here](https://huggingface.co/arcee-ai/Arcee-SuperNova-v1-GGUF)
### License
**Arcee-SuperNova-v1 (70B)** is released under the Llama-3 license. You are free to use, modify, and distribute this model in both commercial and non-commercial applications, subject to the terms and conditions of the license.
If you have questions or would like to share your experiences using Arcee-SuperNova-v1 (70B), please connect with us on social media. We’re excited to see what you build—and how this model helps you innovate!
|
BootesVoid/cmbsfe9a105q1h4x5rs7jashz_cmc11d12u09tfrdqsoe7ze2nt
|
BootesVoid
| 2025-06-18T18:37:06Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T18:37:05Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CLAY
---
# Cmbsfe9A105Q1H4X5Rs7Jashz_Cmc11D12U09Tfrdqsoe7Ze2Nt
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CLAY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "CLAY",
"lora_weights": "https://huggingface.co/BootesVoid/cmbsfe9a105q1h4x5rs7jashz_cmc11d12u09tfrdqsoe7ze2nt/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbsfe9a105q1h4x5rs7jashz_cmc11d12u09tfrdqsoe7ze2nt', weight_name='lora.safetensors')
image = pipeline('CLAY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbsfe9a105q1h4x5rs7jashz_cmc11d12u09tfrdqsoe7ze2nt/discussions) to add images that show off what you’ve made with this LoRA.
|
Shero448/akumeru
|
Shero448
| 2025-06-18T18:30:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:Liberata/illustrious-xl-v1.0",
"base_model:adapter:Liberata/illustrious-xl-v1.0",
"region:us"
] |
text-to-image
| 2025-06-18T18:29:27Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: "UNICODE\0\01\0g\0i\0r\0l\0,\0s\0o\0l\0o\0,\0A\0s\0a\0g\0i\0 \0I\0r\0u\0h\0a\0,\0b\0l\0a\0c\0k\0 \0h\0a\0i\0r\0,\0l\0o\0n\0g\0 \0h\0a\0i\0r\0,\0b\0r\0o\0w\0n\0 \0e\0y\0e\0s\0,\0h\0u\0g\0e\0 \0b\0r\0e\0a\0s\0t\0s\0,\0"
output:
url: images/TT0YK6VN44QW0XK1AK7XARZ7Z0.jpeg
base_model: Liberata/illustrious-xl-v1.0
instance_prompt: Asagi Iruha
---
# akumeru
<Gallery />
## Trigger words
You should use `Asagi Iruha` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Shero448/akumeru/tree/main) them in the Files & versions tab.
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed18-2025-06-18
|
morturr
| 2025-06-18T18:29:23Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T18:29:15Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_one_liners-comb1-seed18-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
kalyaannnn/phi2-lora-qa
|
kalyaannnn
| 2025-06-18T18:19:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T18:19:03Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
young-j-park/ReasonEval-7B-calibrated-Qwen2.5-Math-1.5B-Instruct
|
young-j-park
| 2025-06-18T18:18:59Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:GAIR/ReasonEval-7B",
"base_model:adapter:GAIR/ReasonEval-7B",
"region:us"
] | null | 2025-06-18T18:15:31Z |
---
base_model: GAIR/ReasonEval-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
young-j-park/ReasonEval-7B-calibrated-Llama-3.1-8B-Instruct
|
young-j-park
| 2025-06-18T18:18:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:GAIR/ReasonEval-7B",
"base_model:adapter:GAIR/ReasonEval-7B",
"region:us"
] | null | 2025-06-18T18:15:30Z |
---
base_model: GAIR/ReasonEval-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
young-j-park/ReasonEval-7B-calibrated-Llama-3.2-1B-Instruct
|
young-j-park
| 2025-06-18T18:18:55Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:GAIR/ReasonEval-7B",
"base_model:adapter:GAIR/ReasonEval-7B",
"region:us"
] | null | 2025-06-18T18:15:30Z |
---
base_model: GAIR/ReasonEval-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
young-j-park/math-shepherd-mistral-7b-prm-calibrated-Llama-3.1-8B-Instruct
|
young-j-park
| 2025-06-18T18:18:42Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:peiyi9979/math-shepherd-mistral-7b-prm",
"base_model:adapter:peiyi9979/math-shepherd-mistral-7b-prm",
"region:us"
] | null | 2025-06-18T18:15:27Z |
---
base_model: peiyi9979/math-shepherd-mistral-7b-prm
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
young-j-park/math-shepherd-mistral-7b-prm-calibrated-Llama-3.2-1B-Instruct
|
young-j-park
| 2025-06-18T18:18:40Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:peiyi9979/math-shepherd-mistral-7b-prm",
"base_model:adapter:peiyi9979/math-shepherd-mistral-7b-prm",
"region:us"
] | null | 2025-06-18T18:15:26Z |
---
base_model: peiyi9979/math-shepherd-mistral-7b-prm
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
young-j-park/Qwen2.5-Math-PRM-7B-calibrated-Qwen2.5-Math-7B-Instruct
|
young-j-park
| 2025-06-18T18:18:30Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-Math-PRM-7B",
"base_model:adapter:Qwen/Qwen2.5-Math-PRM-7B",
"region:us"
] | null | 2025-06-04T06:10:15Z |
---
base_model: Qwen/Qwen2.5-Math-PRM-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
young-j-park/Qwen2.5-Math-PRM-7B-calibrated-Qwen2.5-Math-1.5B-Instruct
|
young-j-park
| 2025-06-18T18:18:27Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-Math-PRM-7B",
"base_model:adapter:Qwen/Qwen2.5-Math-PRM-7B",
"region:us"
] | null | 2025-06-04T06:10:15Z |
---
base_model: Qwen/Qwen2.5-Math-PRM-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
lostinjamal/6354f80d-a84f-4f68-931a-da47b7792095
|
lostinjamal
| 2025-06-18T18:17:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-06-18T14:45:51Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Official-Bokep-Indo-18-Viral-video/FULL.VIDEO.Indo.Viral.Video.Tutorial.Official
|
Official-Bokep-Indo-18-Viral-video
| 2025-06-18T18:08:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T18:07:26Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
stjiris/bert-large-portuguese-cased-legal-tsdae-sts-v1
|
stjiris
| 2025-06-18T17:51:50Z | 29 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"safetensors",
"bert",
"feature-extraction",
"transformers",
"sentence-similarity",
"pt",
"dataset:stjiris/portuguese-legal-sentences-v0",
"dataset:assin",
"dataset:assin2",
"dataset:stsb_multi_mt",
"dataset:stjiris/IRIS_sts",
"license:mit",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2023-01-05T00:06:51Z |
---
language:
- pt
thumbnail: Portuguese BERT for the Legal Domain
tags:
- sentence-transformers
- transformers
- bert
- pytorch
- sentence-similarity
license: mit
pipeline_tag: sentence-similarity
datasets:
- stjiris/portuguese-legal-sentences-v0
- assin
- assin2
- stsb_multi_mt
- stjiris/IRIS_sts
widget:
- source_sentence: "O advogado apresentou as provas ao juíz."
sentences:
- "O juíz leu as provas."
- "O juíz leu o recurso."
- "O juíz atirou uma pedra."
model-index:
- name: BERTimbau
results:
- task:
name: STS
type: STS
metrics:
- name: Pearson Correlation - assin Dataset
type: Pearson Correlation
value: 0.7843350530283666
- name: Pearson Correlation - assin2 Dataset
type: Pearson Correlation
value: 0.8161009943619048
- name: Pearson Correlation - stsb_multi_mt pt Dataset
type: Pearson Correlation
value: 0.8432039975708361
- name: Pearson Correlation - IRIS STS Dataset
type: Pearson Correlation
value: 0.7842761087524468
---
[](https://www.inesc-id.pt/projects/PR07005/)
[](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
Work developed as part of [Project IRIS](https://www.inesc-id.pt/projects/PR07005/).
Thesis: [A Semantic Search System for Supremo Tribunal de Justiça](https://rufimelo99.github.io/SemanticSearchSystemForSTJ/)
# stjiris/bert-large-portuguese-cased-legal-tsdae-sts-v1 (Legal BERTimbau)
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
stjiris/bert-large-portuguese-cased-legal-tsdae derives from [BERTimbau](https://huggingface.co/neuralmind/bert-large-portuguese-cased) large.
It was trained using the TSDAE technique with a learning rate 1e-5 [Legal Sentences from +-30000 documents](https://huggingface.co/datasets/stjiris/portuguese-legal-sentences-v1.0) 21.2k training steps (best performance for our semantic search system implementation)
It was trained for Semantic Textual Similarity, being submitted to a fine tuning stage with the [assin](https://huggingface.co/datasets/assin), [assin2](https://huggingface.co/datasets/assin2), [stsb_multi_mt pt](https://huggingface.co/datasets/stsb_multi_mt) and [IRIS STS](https://huggingface.co/datasets/stjiris/IRIS_sts) datasets. 'lr': 1e-5
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Isto é um exemplo", "Isto é um outro exemplo"]
model = SentenceTransformer('stjiris/bert-large-portuguese-cased-legal-tsdae-nli-sts-v1')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('stjiris/bert-large-portuguese-cased-legal-tsdae-sts-v1')
model = AutoModel.from_pretrained('stjiris/bert-large-portuguese-cased-legal-tsdae-sts-v1')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 514, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1028, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
### Contributions
[@rufimelo99](https://github.com/rufimelo99)
If you use this work, please cite:
```bibtex
@InProceedings{MeloSemantic,
author="Melo, Rui
and Santos, Pedro A.
and Dias, Jo{\~a}o",
editor="Moniz, Nuno
and Vale, Zita
and Cascalho, Jos{\'e}
and Silva, Catarina
and Sebasti{\~a}o, Raquel",
title="A Semantic Search System for the Supremo Tribunal de Justi{\c{c}}a",
booktitle="Progress in Artificial Intelligence",
year="2023",
publisher="Springer Nature Switzerland",
address="Cham",
pages="142--154",
abstract="Many information retrieval systems use lexical approaches to retrieve information. Such approaches have multiple limitations, and these constraints are exacerbated when tied to specific domains, such as the legal one. Large language models, such as BERT, deeply understand a language and may overcome the limitations of older methodologies, such as BM25. This work investigated and developed a prototype of a Semantic Search System to assist the Supremo Tribunal de Justi{\c{c}}a (Portuguese Supreme Court of Justice) in its decision-making process. We built a Semantic Search System that uses specially trained BERT models (Legal-BERTimbau variants) and a Hybrid Search System that incorporates both lexical and semantic techniques by combining the capabilities of BM25 and the potential of Legal-BERTimbau. In this context, we obtained a {\$}{\$}335{\backslash}{\%}{\$}{\$}335{\%}increase on the discovery metric when compared to BM25 for the first query result. This work also provides information on the most relevant techniques for training a Large Language Model adapted to Portuguese jurisprudence and introduces a new technique of Metadata Knowledge Distillation.",
isbn="978-3-031-49011-8"
}
@inproceedings{souza2020bertimbau,
author = {F{\'a}bio Souza and
Rodrigo Nogueira and
Roberto Lotufo},
title = {{BERT}imbau: pretrained {BERT} models for {B}razilian {P}ortuguese},
booktitle = {9th Brazilian Conference on Intelligent Systems, {BRACIS}, Rio Grande do Sul, Brazil, October 20-23 (to appear)},
year = {2020}
}
@inproceedings{fonseca2016assin,
title={ASSIN: Avaliacao de similaridade semantica e inferencia textual},
author={Fonseca, E and Santos, L and Criscuolo, Marcelo and Aluisio, S},
booktitle={Computational Processing of the Portuguese Language-12th International Conference, Tomar, Portugal},
pages={13--15},
year={2016}
}
@inproceedings{real2020assin,
title={The assin 2 shared task: a quick overview},
author={Real, Livy and Fonseca, Erick and Oliveira, Hugo Goncalo},
booktitle={International Conference on Computational Processing of the Portuguese Language},
pages={406--412},
year={2020},
organization={Springer}
}
@InProceedings{huggingface:dataset:stsb_multi_mt,
title = {Machine translated multilingual STS benchmark dataset.},
author={Philip May},
year={2021},
url={https://github.com/PhilipMay/stsb-multi-mt}
}
```
|
BootesVoid/cmbbe1i3j06lf85uu4v1mkpz2_cmc28493v0cbrrdqs7q0qo5t5
|
BootesVoid
| 2025-06-18T17:47:33Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T17:47:32Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: INFLUENCE42
---
# Cmbbe1I3J06Lf85Uu4V1Mkpz2_Cmc28493V0Cbrrdqs7Q0Qo5T5
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `INFLUENCE42` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "INFLUENCE42",
"lora_weights": "https://huggingface.co/BootesVoid/cmbbe1i3j06lf85uu4v1mkpz2_cmc28493v0cbrrdqs7q0qo5t5/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbbe1i3j06lf85uu4v1mkpz2_cmc28493v0cbrrdqs7q0qo5t5', weight_name='lora.safetensors')
image = pipeline('INFLUENCE42').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbbe1i3j06lf85uu4v1mkpz2_cmc28493v0cbrrdqs7q0qo5t5/discussions) to add images that show off what you’ve made with this LoRA.
|
chanceykingjr/finalone
|
chanceykingjr
| 2025-06-18T17:44:05Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T17:22:21Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: king
---
# Finalone
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `king` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "king",
"lora_weights": "https://huggingface.co/chanceykingjr/finalone/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('chanceykingjr/finalone', weight_name='lora.safetensors')
image = pipeline('king').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/chanceykingjr/finalone/discussions) to add images that show off what you’ve made with this LoRA.
|
EbisuRyu/whisper-tiny
|
EbisuRyu
| 2025-06-18T17:43:52Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-18T15:10:09Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-finetuned-minds14
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.35215736040609136
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-finetuned-minds14
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6368
- Wer Ortho: 0.3540
- Wer: 0.3522
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
| 0.7132 | 7.1429 | 200 | 0.4647 | 0.3554 | 0.3477 |
| 0.0044 | 14.2857 | 400 | 0.5662 | 0.3520 | 0.3471 |
| 0.0006 | 21.4286 | 600 | 0.5970 | 0.3540 | 0.3509 |
| 0.0004 | 28.5714 | 800 | 0.6191 | 0.3547 | 0.3515 |
| 0.0002 | 35.7143 | 1000 | 0.6368 | 0.3540 | 0.3522 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
raul111204/pythia-2b-xsum-raul2
|
raul111204
| 2025-06-18T17:42:17Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"autotrain",
"text-generation-inference",
"text-generation",
"peft",
"conversational",
"dataset:mia-llm/xsum-raw-MIA",
"base_model:EleutherAI/pythia-2.8b",
"base_model:finetune:EleutherAI/pythia-2.8b",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T14:22:37Z |
---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: EleutherAI/pythia-2.8b
widget:
- messages:
- role: user
content: What is your favorite condiment?
license: other
datasets:
- mia-llm/xsum-raw-MIA
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
New-Viral-mezzo-fun-Viral-Video/Original.Full.Clip.mezzo.fun.Viral.Video.Leaks.Official
|
New-Viral-mezzo-fun-Viral-Video
| 2025-06-18T17:31:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T17:30:59Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
shaddie/rocketpill_ts_informer_model
|
shaddie
| 2025-06-18T17:27:10Z | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"informer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T21:07:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
heboya8/facebook-musicgen-small-not-lora
|
heboya8
| 2025-06-18T17:26:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"musicgen",
"text-to-audio",
"CLAPv2/MusicCaps",
"generated_from_trainer",
"base_model:facebook/musicgen-small",
"base_model:finetune:facebook/musicgen-small",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-06-18T09:50:56Z |
---
library_name: transformers
license: cc-by-nc-4.0
base_model: facebook/musicgen-small
tags:
- text-to-audio
- CLAPv2/MusicCaps
- generated_from_trainer
model-index:
- name: GenCaps-finetune-Musicgen-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# GenCaps-finetune-Musicgen-small
This model is a fine-tuned version of [facebook/musicgen-small](https://huggingface.co/facebook/musicgen-small) on the CLAPV2/MUSICCAPS - DEFAULT dataset.
It achieves the following results on the evaluation set:
- Loss: 7.3132
- Clap: 0.0593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 1
- seed: 456
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.99) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Heouzen/ai_voice
|
Heouzen
| 2025-06-18T17:22:16Z | 0 | 0 | null |
[
"audio-to-audio",
"id",
"license:apache-2.0",
"region:us"
] |
audio-to-audio
| 2023-12-24T18:48:51Z |
---
license: apache-2.0
language:
- id
pipeline_tag: audio-to-audio
---
|
ECE-ILAB/POIROT-ECE-1.2
|
ECE-ILAB
| 2025-06-18T17:22:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:AXCXEPT/Qwen3-EZO-8B-beta",
"base_model:merge:AXCXEPT/Qwen3-EZO-8B-beta",
"base_model:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"base_model:merge:deepseek-ai/DeepSeek-R1-0528-Qwen3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T17:16:58Z |
---
base_model:
- deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
- AXCXEPT/Qwen3-EZO-8B-beta
library_name: transformers
tags:
- mergekit
- merge
---
# POIROT-ECE-1.2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B)
* [AXCXEPT/Qwen3-EZO-8B-beta](https://huggingface.co/AXCXEPT/Qwen3-EZO-8B-beta)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: AXCXEPT/Qwen3-EZO-8B-beta
layer_range: [0, 35]
- model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
layer_range: [0, 35]
merge_method: slerp
base_model: AXCXEPT/Qwen3-EZO-8B-beta
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
Rahmaa33/lora_model
|
Rahmaa33
| 2025-06-18T17:21:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T17:21:44Z |
---
base_model: unsloth/qwen2-0.5b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Rahmaa33
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-0.5b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ITHwangg/lebotica-pickplace-v3-step1k
|
ITHwangg
| 2025-06-18T17:18:50Z | 0 | 0 | null |
[
"safetensors",
"dataset:ITHwangg/svla_koch_pickplace_v3",
"license:mit",
"region:us"
] | null | 2025-06-15T09:04:41Z |
---
datasets:
- ITHwangg/svla_koch_pickplace_v3
license: mit
---
# lebotica-pickplace-v3-step1k
- Dataset: [ITHwangg/svla_koch_pickplace_v3](https://huggingface.co/datasets/ITHwangg/svla_koch_pickplace_v3)
- Model: [ITHwangg/lebotica-pickplace-15k](https://huggingface.co/ITHwangg/lebotica-pickplace-15k)
|
GraybeardTheIrate/Harbinger-Cogwheel
|
GraybeardTheIrate
| 2025-06-18T17:11:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:LatitudeGames/Harbinger-24B",
"base_model:merge:LatitudeGames/Harbinger-24B",
"base_model:OddTheGreat/Cogwheel_24b_V.2",
"base_model:merge:OddTheGreat/Cogwheel_24b_V.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T14:38:42Z |
---
base_model:
- OddTheGreat/Cogwheel_24b_V.2
- LatitudeGames/Harbinger-24B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [OddTheGreat/Cogwheel_24b_V.2](https://huggingface.co/OddTheGreat/Cogwheel_24b_V.2)
* [LatitudeGames/Harbinger-24B](https://huggingface.co/LatitudeGames/Harbinger-24B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: LatitudeGames/Harbinger-24B
- model: OddTheGreat/Cogwheel_24b_V.2
merge_method: slerp
base_model: LatitudeGames/Harbinger-24B
dtype: bfloat16
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
```
|
nabieva/tmed_glove
|
nabieva
| 2025-06-18T17:10:05Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T17:09:25Z |
---
license: apache-2.0
---
|
ITHwangg/lebotica-pickplace-stacking-step15k
|
ITHwangg
| 2025-06-18T17:07:54Z | 0 | 0 | null |
[
"safetensors",
"dataset:ITHwangg/svla_koch_pickplace_and_stacking",
"license:mit",
"region:us"
] | null | 2025-06-15T01:54:14Z |
---
datasets:
- ITHwangg/svla_koch_pickplace_and_stacking
license: mit
---
# lebotica-pickplace-stacking-step15k
- Dataset: [ITHwangg/svla_koch_pickplace_and_stacking](https://huggingface.co/datasets/ITHwangg/svla_koch_pickplace_and_stacking)
- Model: [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base)
|
3sara/version1_2-5epochs-checkpoint
|
3sara
| 2025-06-18T17:06:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"colpali-finetuned",
"generated_from_trainer",
"base_model:vidore/colpaligemma-3b-pt-448-base",
"base_model:adapter:vidore/colpaligemma-3b-pt-448-base",
"license:gemma",
"region:us"
] | null | 2025-06-18T17:05:56Z |
---
library_name: peft
license: gemma
base_model: vidore/colpaligemma-3b-pt-448-base
tags:
- colpali-finetuned
- generated_from_trainer
model-index:
- name: version1_2-5epochs-checkpoint
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# version1_2-5epochs-checkpoint
This model is a fine-tuned version of [vidore/colpaligemma-3b-pt-448-base](https://huggingface.co/vidore/colpaligemma-3b-pt-448-base) on the 3sara/validated_colpali_italian_documents_with_images dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0103 | 1 | 0.3835 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
vcabeli/Qwen3-8B-Open-R1-GRPO-signature-expression
|
vcabeli
| 2025-06-18T17:05:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T16:29:47Z |
---
base_model: Qwen/Qwen3-8B
library_name: transformers
model_name: Qwen3-8B-Open-R1-GRPO-signature-expression
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen3-8B-Open-R1-GRPO-signature-expression
This model is a fine-tuned version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vcabeli/Qwen3-8B-Open-R1-GRPO-signature-expression", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/vincent-cabeli-owkin/huggingface/runs/cughyzye)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sgonzalezygil/sd-finetuning-dreambooth-v12
|
sgonzalezygil
| 2025-06-18T17:05:14Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-18T17:03:19Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ITHwangg/lebotica-pickplace-step5k
|
ITHwangg
| 2025-06-18T17:01:27Z | 0 | 0 | null |
[
"safetensors",
"dataset:ITHwangg/svla_koch_pickplace",
"license:mit",
"region:us"
] | null | 2025-06-15T00:18:04Z |
---
datasets:
- ITHwangg/svla_koch_pickplace
license: mit
---
# lebotica-pickplace-step5k
- Dataset: [ITHwangg/svla_koch_pickplace](https://huggingface.co/datasets/ITHwangg/svla_koch_pickplace)
- Model: [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base)
|
ITHwangg/lebotica-pickplace-step10k
|
ITHwangg
| 2025-06-18T17:00:41Z | 0 | 0 | null |
[
"safetensors",
"dataset:ITHwangg/svla_koch_pickplace",
"license:mit",
"region:us"
] | null | 2025-06-15T00:22:41Z |
---
datasets:
- ITHwangg/svla_koch_pickplace
license: mit
---
# lebotica-pickplace-step10k
- Dataset: [ITHwangg/svla_koch_pickplace](https://huggingface.co/datasets/ITHwangg/svla_koch_pickplace)
- Model: [lerobot/smolvla_base](https://huggingface.co/lerobot/smolvla_base)
|
s0mecode/Qwen3-32B-Q4_K_M-GGUF
|
s0mecode
| 2025-06-18T17:00:32Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:Qwen/Qwen3-32B",
"base_model:quantized:Qwen/Qwen3-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-18T16:59:22Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-32B/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- llama-cpp
- gguf-my-repo
base_model: Qwen/Qwen3-32B
---
# s0mecode/Qwen3-32B-Q4_K_M-GGUF
This model was converted to GGUF format from [`Qwen/Qwen3-32B`](https://huggingface.co/Qwen/Qwen3-32B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-32B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo s0mecode/Qwen3-32B-Q4_K_M-GGUF --hf-file qwen3-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo s0mecode/Qwen3-32B-Q4_K_M-GGUF --hf-file qwen3-32b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo s0mecode/Qwen3-32B-Q4_K_M-GGUF --hf-file qwen3-32b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo s0mecode/Qwen3-32B-Q4_K_M-GGUF --hf-file qwen3-32b-q4_k_m.gguf -c 2048
```
|
morturr/Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb3-seed18-2025-06-18
|
morturr
| 2025-06-18T16:52:57Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T16:52:41Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb3-seed18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_one_liners-COMB_dadjokes-comb3-seed18-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
morturr/Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed18-2025-06-18
|
morturr
| 2025-06-18T16:51:50Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T16:51:33Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed18-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_dadjokes-COMB_headlines-comb3-seed18-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
harriskr14/emotion-classification
|
harriskr14
| 2025-06-18T16:47:45Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-18T09:09:41Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion-classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.51875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion-classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3560
- Accuracy: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 5 | 1.6699 | 0.4313 |
| 1.5821 | 2.0 | 10 | 1.6118 | 0.4562 |
| 1.5821 | 3.0 | 15 | 1.5550 | 0.475 |
| 1.445 | 4.0 | 20 | 1.5128 | 0.5062 |
| 1.445 | 5.0 | 25 | 1.4508 | 0.5375 |
| 1.3202 | 6.0 | 30 | 1.4364 | 0.5 |
| 1.3202 | 7.0 | 35 | 1.3776 | 0.575 |
| 1.2242 | 8.0 | 40 | 1.3966 | 0.5 |
| 1.2242 | 9.0 | 45 | 1.3724 | 0.525 |
| 1.1589 | 10.0 | 50 | 1.3483 | 0.525 |
| 1.1589 | 11.0 | 55 | 1.3186 | 0.5687 |
| 1.0962 | 12.0 | 60 | 1.3295 | 0.5375 |
| 1.0962 | 13.0 | 65 | 1.3058 | 0.5875 |
| 1.0542 | 14.0 | 70 | 1.3296 | 0.5375 |
| 1.0542 | 15.0 | 75 | 1.3185 | 0.5813 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
ubayhee007/vit-emotion
|
ubayhee007
| 2025-06-18T16:45:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-18T16:44:53Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-emotion
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.475
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-emotion
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3013
- Accuracy: 0.475
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6375 | 1.0 | 40 | 1.5448 | 0.4125 |
| 0.9668 | 2.0 | 80 | 1.3493 | 0.45 |
| 0.5913 | 3.0 | 120 | 1.3013 | 0.475 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
morturr/Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb3-seed42-2025-06-18
|
morturr
| 2025-06-18T16:43:33Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-18T16:43:18Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb3-seed42-2025-06-18
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-LOO_headlines-COMB_dadjokes-comb3-seed42-2025-06-18
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
BootesVoid/cmc24lnt00c35rdqsuxv48nr4_cmc24yg6p0c4brdqsjfjptjmg
|
BootesVoid
| 2025-06-18T16:43:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T16:43:19Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: WIFEY
---
# Cmc24Lnt00C35Rdqsuxv48Nr4_Cmc24Yg6P0C4Brdqsjfjptjmg
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `WIFEY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "WIFEY",
"lora_weights": "https://huggingface.co/BootesVoid/cmc24lnt00c35rdqsuxv48nr4_cmc24yg6p0c4brdqsjfjptjmg/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc24lnt00c35rdqsuxv48nr4_cmc24yg6p0c4brdqsjfjptjmg', weight_name='lora.safetensors')
image = pipeline('WIFEY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc24lnt00c35rdqsuxv48nr4_cmc24yg6p0c4brdqsjfjptjmg/discussions) to add images that show off what you’ve made with this LoRA.
|
vishakr01/comp4_03
|
vishakr01
| 2025-06-18T16:27:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T16:24:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chanceykingjr/aimodel
|
chanceykingjr
| 2025-06-18T16:23:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T13:48:02Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: king
---
# Aimodel
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `king ` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "king ",
"lora_weights": "https://huggingface.co/chanceykingjr/aimodel/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('chanceykingjr/aimodel', weight_name='lora.safetensors')
image = pipeline('king ').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 41
## Contribute your own examples
You can use the [community tab](https://huggingface.co/chanceykingjr/aimodel/discussions) to add images that show off what you’ve made with this LoRA.
|
s0mecode/Cosmos-Reason1-7B-Q4_K_M-GGUF
|
s0mecode
| 2025-06-18T16:23:29Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"nvidia",
"cosmos",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:nvidia/Cosmos-Reason1-SFT-Dataset",
"dataset:nvidia/Cosmos-Reason1-RL-Dataset",
"dataset:nvidia/Cosmos-Reason1-Benchmark",
"base_model:nvidia/Cosmos-Reason1-7B",
"base_model:quantized:nvidia/Cosmos-Reason1-7B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-18T16:23:10Z |
---
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license
datasets:
- nvidia/Cosmos-Reason1-SFT-Dataset
- nvidia/Cosmos-Reason1-RL-Dataset
- nvidia/Cosmos-Reason1-Benchmark
library_name: transformers
language:
- en
base_model: nvidia/Cosmos-Reason1-7B
tags:
- nvidia
- cosmos
- llama-cpp
- gguf-my-repo
---
# s0mecode/Cosmos-Reason1-7B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nvidia/Cosmos-Reason1-7B`](https://huggingface.co/nvidia/Cosmos-Reason1-7B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Cosmos-Reason1-7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo s0mecode/Cosmos-Reason1-7B-Q4_K_M-GGUF --hf-file cosmos-reason1-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo s0mecode/Cosmos-Reason1-7B-Q4_K_M-GGUF --hf-file cosmos-reason1-7b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo s0mecode/Cosmos-Reason1-7B-Q4_K_M-GGUF --hf-file cosmos-reason1-7b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo s0mecode/Cosmos-Reason1-7B-Q4_K_M-GGUF --hf-file cosmos-reason1-7b-q4_k_m.gguf -c 2048
```
|
MaxTGH/SDXLBase5e-3
|
MaxTGH
| 2025-06-18T16:18:39Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-18T16:18:35Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: a drone image of a humpback whale
output:
url: images/image_6.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a drone image of a humpback whale
license: openrail++
---
# SDXL LoRA Dreambooth
<Gallery />
## Model description
These are MaxTGH/Model LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
## Trigger words
You should use `a drone image of a humpback whale` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/MaxTGH/SDXLBase5e-3/tree/main) them in the Files & versions tab.
|
huihui-ai/Huihui-Qwen3-8B-abliterated-v2
|
huihui-ai
| 2025-06-18T16:15:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"chat",
"abliterated",
"uncensored",
"conversational",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T15:24:27Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B
tags:
- chat
- abliterated
- uncensored
---
# huihui-ai/Huihui-Qwen3-8B-abliterated-v2
This is an uncensored version of [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) created with abliteration (see [remove-refusals-with-transformers](https://github.com/Sumandora/remove-refusals-with-transformers) to know more about it).
This is a crude, proof-of-concept implementation to remove refusals from an LLM model without using TransformerLens.
Ablation was performed using a new and faster method, which yields better results.
**Important Note** This version is an improvement over the previous one [huihui-ai/Qwen3-8B-abliterated](https://huggingface.co/huihui-ai/Qwen3-8B-abliterated). The ollama version has also been modified.
Changed the 0 layer to eliminate the problem of garbled codes
## ollama
You can use [huihui_ai/qwen3-abliterated:8b-v2](https://ollama.com/huihui_ai/qwen3-abliterated:8b-v2) directly, Switch the thinking toggle using /set think and /set nothink
```
ollama run huihui_ai/qwen3-abliterated:8b-v2
```
## Usage
You can use this model in your applications by loading it with Hugging Face's `transformers` library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, TextStreamer
import torch
import os
import signal
import random
import numpy as np
import time
from collections import Counter
cpu_count = os.cpu_count()
print(f"Number of CPU cores in the system: {cpu_count}")
half_cpu_count = cpu_count // 2
os.environ["MKL_NUM_THREADS"] = str(half_cpu_count)
os.environ["OMP_NUM_THREADS"] = str(half_cpu_count)
torch.set_num_threads(half_cpu_count)
print(f"PyTorch threads: {torch.get_num_threads()}")
print(f"MKL threads: {os.getenv('MKL_NUM_THREADS')}")
print(f"OMP threads: {os.getenv('OMP_NUM_THREADS')}")
# Load the model and tokenizer
NEW_MODEL_ID = "huihui-ai/Huihui-Qwen3-8B-abliterated-v2"
print(f"Load Model {NEW_MODEL_ID} ... ")
quant_config_4 = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
llm_int8_enable_fp32_cpu_offload=True,
)
model = AutoModelForCausalLM.from_pretrained(
NEW_MODEL_ID,
device_map="auto",
trust_remote_code=True,
#quantization_config=quant_config_4,
torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
tokenizer = AutoTokenizer.from_pretrained(NEW_MODEL_ID, trust_remote_code=True)
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
tokenizer.pad_token_id = tokenizer.eos_token_id
messages = []
nothink = False
same_seed = False
skip_prompt=True
skip_special_tokens=True
do_sample = True
def set_random_seed(seed=None):
"""Set random seed for reproducibility. If seed is None, use int(time.time())."""
if seed is None:
seed = int(time.time()) # Convert float to int
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed) # If using CUDA
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
return seed # Return seed for logging if needed
class CustomTextStreamer(TextStreamer):
def __init__(self, tokenizer, skip_prompt=True, skip_special_tokens=True):
super().__init__(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
self.generated_text = ""
self.stop_flag = False
self.init_time = time.time() # Record initialization time
self.end_time = None # To store end time
self.first_token_time = None # To store first token generation time
self.token_count = 0 # To track total tokens
def on_finalized_text(self, text: str, stream_end: bool = False):
if self.first_token_time is None and text.strip(): # Set first token time on first non-empty text
self.first_token_time = time.time()
self.generated_text += text
# Count tokens in the generated text
tokens = self.tokenizer.encode(text, add_special_tokens=False)
self.token_count += len(tokens)
print(text, end="", flush=True)
if stream_end:
self.end_time = time.time() # Record end time when streaming ends
if self.stop_flag:
raise StopIteration
def stop_generation(self):
self.stop_flag = True
self.end_time = time.time() # Record end time when generation is stopped
def get_metrics(self):
"""Returns initialization time, first token time, first token latency, end time, total time, total tokens, and tokens per second."""
if self.end_time is None:
self.end_time = time.time() # Set end time if not already set
total_time = self.end_time - self.init_time # Total time from init to end
tokens_per_second = self.token_count / total_time if total_time > 0 else 0
first_token_latency = (self.first_token_time - self.init_time) if self.first_token_time is not None else None
metrics = {
"init_time": self.init_time,
"first_token_time": self.first_token_time,
"first_token_latency": first_token_latency,
"end_time": self.end_time,
"total_time": total_time, # Total time in seconds
"total_tokens": self.token_count,
"tokens_per_second": tokens_per_second
}
return metrics
def generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, max_new_tokens):
input_ids = tokenizer.apply_chat_template(
messages,
tokenize=True,
enable_thinking = not nothink,
add_generation_prompt=True,
return_tensors="pt"
)
attention_mask = torch.ones_like(input_ids, dtype=torch.long)
tokens = input_ids.to(model.device)
attention_mask = attention_mask.to(model.device)
streamer = CustomTextStreamer(tokenizer, skip_prompt=skip_prompt, skip_special_tokens=skip_special_tokens)
def signal_handler(sig, frame):
streamer.stop_generation()
print("\n[Generation stopped by user with Ctrl+C]")
signal.signal(signal.SIGINT, signal_handler)
generate_kwargs = {}
if do_sample:
generate_kwargs = {
"do_sample": do_sample,
"max_length": max_new_tokens,
"temperature": 0.6,
"top_k": 20,
"top_p": 0.95,
"repetition_penalty": 1.2,
"no_repeat_ngram_size": 2
}
else:
generate_kwargs = {
"do_sample": do_sample,
"max_length": max_new_tokens,
"repetition_penalty": 1.2,
"no_repeat_ngram_size": 2
}
print("Response: ", end="", flush=True)
try:
generated_ids = model.generate(
tokens,
attention_mask=attention_mask,
#use_cache=False,
pad_token_id=tokenizer.pad_token_id,
streamer=streamer,
**generate_kwargs
)
del generated_ids
except StopIteration:
print("\n[Stopped by user]")
del input_ids, attention_mask
torch.cuda.empty_cache()
signal.signal(signal.SIGINT, signal.SIG_DFL)
return streamer.generated_text, streamer.stop_flag, streamer.get_metrics()
init_seed = set_random_seed()
while True:
if same_seed:
set_random_seed(init_seed)
else:
init_seed = set_random_seed()
print(f"\nnothink: {nothink}")
print(f"skip_prompt: {skip_prompt}")
print(f"skip_special_tokens: {skip_special_tokens}")
print(f"do_sample: {do_sample}")
print(f"same_seed: {same_seed}, {init_seed}\n")
user_input = input("User: ").strip()
if user_input.lower() == "/exit":
print("Exiting chat.")
break
if user_input.lower() == "/clear":
messages = []
print("Chat history cleared. Starting a new conversation.")
continue
if user_input.lower() == "/nothink":
nothink = not nothink
continue
if user_input.lower() == "/skip_prompt":
skip_prompt = not skip_prompt
continue
if user_input.lower() == "/skip_special_tokens":
skip_special_tokens = not skip_special_tokens
continue
if user_input.lower().startswith("/same_seed"):
parts = user_input.split()
if len(parts) == 1: # /same_seed (no number)
same_seed = not same_seed # Toggle switch
elif len(parts) == 2: # /same_seed <number>
try:
init_seed = int(parts[1]) # Extract and convert number to int
same_seed = True
except ValueError:
print("Error: Please provide a valid integer after /same_seed")
continue
if user_input.lower() == "/do_sample":
do_sample = not do_sample
continue
if not user_input:
print("Input cannot be empty. Please enter something.")
continue
messages.append({"role": "user", "content": user_input})
activated_experts = []
response, stop_flag, metrics = generate_stream(model, tokenizer, messages, nothink, skip_prompt, skip_special_tokens, do_sample, 40960)
print("\n\nMetrics:")
for key, value in metrics.items():
print(f" {key}: {value}")
print("", flush=True)
if stop_flag:
continue
messages.append({"role": "assistant", "content": response})
# Remove all hooks after inference
for h in hooks: h.remove()
```
### Usage Warnings
- **Risk of Sensitive or Controversial Outputs**: This model’s safety filtering has been significantly reduced, potentially generating sensitive, controversial, or inappropriate content. Users should exercise caution and rigorously review generated outputs.
- **Not Suitable for All Audiences**: Due to limited content filtering, the model’s outputs may be inappropriate for public settings, underage users, or applications requiring high security.
- **Legal and Ethical Responsibilities**: Users must ensure their usage complies with local laws and ethical standards. Generated content may carry legal or ethical risks, and users are solely responsible for any consequences.
- **Research and Experimental Use**: It is recommended to use this model for research, testing, or controlled environments, avoiding direct use in production or public-facing commercial applications.
- **Monitoring and Review Recommendations**: Users are strongly advised to monitor model outputs in real-time and conduct manual reviews when necessary to prevent the dissemination of inappropriate content.
- **No Default Safety Guarantees**: Unlike standard models, this model has not undergone rigorous safety optimization. huihui.ai bears no responsibility for any consequences arising from its use.
### Donation
If you like it, please click 'like' and follow us for more updates.
You can follow [x.com/support_huihui](https://x.com/support_huihui) to get the latest model information from huihui.ai.
##### Your donation helps us continue our further development and improvement, a cup of coffee can do it.
- bitcoin(BTC):
```
bc1qqnkhuchxw0zqjh2ku3lu4hq45hc6gy84uk70ge
```
|
LandCruiser/sn21_omg_1806_25
|
LandCruiser
| 2025-06-18T16:14:14Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T16:12:32Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
murphy1021/My-4-bit-model
|
murphy1021
| 2025-06-18T16:09:52Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"gemma3",
"text-generation",
"conversational",
"base_model:mlx-community/gemma-3-4b-it-4bit",
"base_model:quantized:mlx-community/gemma-3-4b-it-4bit",
"license:gemma",
"region:us"
] |
text-generation
| 2025-06-18T14:53:14Z |
---
base_model: mlx-community/gemma-3-4b-it-4bit
library_name: mlx
license: gemma
pipeline_tag: text-generation
tags:
- mlx
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model_relation: quantized
---
|
sergiopaniego/gemma-3-4b-pt-object-detection-loc-tokens
|
sergiopaniego
| 2025-06-18T16:04:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-18T16:01:22Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LandCruiser/sn21_omg_1806_20
|
LandCruiser
| 2025-06-18T16:02:52Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T15:45:52Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/sn21_omg_1806_22
|
LandCruiser
| 2025-06-18T16:02:40Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T16:00:42Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/sn21_omg_1806_19
|
LandCruiser
| 2025-06-18T16:02:34Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T15:45:51Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/sn21_omg_1806_17
|
LandCruiser
| 2025-06-18T16:02:27Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T15:45:50Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/sn21_omg_1806_13
|
LandCruiser
| 2025-06-18T16:01:49Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T15:45:48Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
BootesVoid/cmbpc8efu00f513bs6fztct6v_cmc1y17f80bo1rdqsyqgtmky5
|
BootesVoid
| 2025-06-18T16:00:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-18T16:00:49Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MAYA
---
# Cmbpc8Efu00F513Bs6Fztct6V_Cmc1Y17F80Bo1Rdqsyqgtmky5
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MAYA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MAYA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbpc8efu00f513bs6fztct6v_cmc1y17f80bo1rdqsyqgtmky5/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbpc8efu00f513bs6fztct6v_cmc1y17f80bo1rdqsyqgtmky5', weight_name='lora.safetensors')
image = pipeline('MAYA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbpc8efu00f513bs6fztct6v_cmc1y17f80bo1rdqsyqgtmky5/discussions) to add images that show off what you’ve made with this LoRA.
|
BernalHR/V2Phi-3-mini-4k-instruct-Inscripciones-bnb-4bit-GGUF
|
BernalHR
| 2025-06-18T15:57:57Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:quantized:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T15:57:21Z |
---
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** BernalHR
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JulianChang/SmolLM2-1.7B-Instruct-Q4_0-GGUF
|
JulianChang
| 2025-06-18T15:57:19Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"safetensors",
"onnx",
"transformers.js",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"base_model:quantized:HuggingFaceTB/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-18T15:57:14Z |
---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- safetensors
- onnx
- transformers.js
- llama-cpp
- gguf-my-repo
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
---
# JulianChang/SmolLM2-1.7B-Instruct-Q4_0-GGUF
This model was converted to GGUF format from [`HuggingFaceTB/SmolLM2-1.7B-Instruct`](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo JulianChang/SmolLM2-1.7B-Instruct-Q4_0-GGUF --hf-file smollm2-1.7b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo JulianChang/SmolLM2-1.7B-Instruct-Q4_0-GGUF --hf-file smollm2-1.7b-instruct-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo JulianChang/SmolLM2-1.7B-Instruct-Q4_0-GGUF --hf-file smollm2-1.7b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo JulianChang/SmolLM2-1.7B-Instruct-Q4_0-GGUF --hf-file smollm2-1.7b-instruct-q4_0.gguf -c 2048
```
|
amgule/meme-model-merged
|
amgule
| 2025-06-18T15:55:46Z | 14 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_vl",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2-VL-2B-Instruct",
"base_model:finetune:unsloth/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-01T11:24:51Z |
---
base_model: unsloth/Qwen2-VL-2B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2_vl
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** amgule
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2-VL-2B-Instruct
This qwen2_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. Trained using HuggingFaceM4/the_cauldron dataset, [hateful_memes subset](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron/viewer/hateful_memes).
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
xlangai/Jedi-7B-1080p
|
xlangai
| 2025-06-18T15:55:03Z | 2,710 | 24 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"en",
"arxiv:2505.13227",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-04-28T17:05:40Z |
---
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
---
This repository contains the model from the paper [Scaling Computer-Use Grounding via User Interface Decomposition and Synthesis](https://huggingface.co/papers/2505.13227).
## 📄 Citation
If you find this work useful, please consider citing our paper:
```bibtex
@misc{xie2025scalingcomputerusegroundinguser,
title={Scaling Computer-Use Grounding via User Interface Decomposition and Synthesis},
author={Tianbao Xie and Jiaqi Deng and Xiaochuan Li and Junlin Yang and Haoyuan Wu and Jixuan Chen and Wenjing Hu and Xinyuan Wang and Yuhui Xu and Zekun Wang and Yiheng Xu and Junli Wang and Doyen Sahoo and Tao Yu and Caiming Xiong},
year={2025},
eprint={2505.13227},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2505.13227},
}
```
Project Page: https://osworld-grounding.github.io
Code: https://github.com/xlang-ai/OSWorld-G
|
LandCruiser/sn21_omg_1806_12
|
LandCruiser
| 2025-06-18T15:51:19Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T15:45:48Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/sn21_omg_1806_3
|
LandCruiser
| 2025-06-18T15:51:07Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-18T15:45:44Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
alzidy/Qwen3_14B
|
alzidy
| 2025-06-18T15:40:48Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-18T15:40:48Z |
---
license: apache-2.0
---
|
JulianChang/SmolVLM2-2.2B-Instruct-Q8_0-GGUF
|
JulianChang
| 2025-06-18T15:40:31Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"video-text-to-text",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"en",
"dataset:HuggingFaceM4/the_cauldron",
"dataset:HuggingFaceM4/Docmatix",
"dataset:lmms-lab/LLaVA-OneVision-Data",
"dataset:lmms-lab/M4-Instruct-Data",
"dataset:HuggingFaceFV/finevideo",
"dataset:MAmmoTH-VL/MAmmoTH-VL-Instruct-12M",
"dataset:lmms-lab/LLaVA-Video-178K",
"dataset:orrzohar/Video-STaR",
"dataset:Mutonix/Vript",
"dataset:TIGER-Lab/VISTA-400K",
"dataset:Enxin/MovieChat-1K_train",
"dataset:ShareGPT4Video/ShareGPT4Video",
"base_model:HuggingFaceTB/SmolVLM2-2.2B-Instruct",
"base_model:quantized:HuggingFaceTB/SmolVLM2-2.2B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
image-text-to-text
| 2025-06-18T15:40:22Z |
---
library_name: transformers
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
- HuggingFaceM4/Docmatix
- lmms-lab/LLaVA-OneVision-Data
- lmms-lab/M4-Instruct-Data
- HuggingFaceFV/finevideo
- MAmmoTH-VL/MAmmoTH-VL-Instruct-12M
- lmms-lab/LLaVA-Video-178K
- orrzohar/Video-STaR
- Mutonix/Vript
- TIGER-Lab/VISTA-400K
- Enxin/MovieChat-1K_train
- ShareGPT4Video/ShareGPT4Video
pipeline_tag: image-text-to-text
tags:
- video-text-to-text
- llama-cpp
- gguf-my-repo
language:
- en
base_model: HuggingFaceTB/SmolVLM2-2.2B-Instruct
---
# JulianChang/SmolVLM2-2.2B-Instruct-Q8_0-GGUF
This model was converted to GGUF format from [`HuggingFaceTB/SmolVLM2-2.2B-Instruct`](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/HuggingFaceTB/SmolVLM2-2.2B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo JulianChang/SmolVLM2-2.2B-Instruct-Q8_0-GGUF --hf-file smolvlm2-2.2b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo JulianChang/SmolVLM2-2.2B-Instruct-Q8_0-GGUF --hf-file smolvlm2-2.2b-instruct-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo JulianChang/SmolVLM2-2.2B-Instruct-Q8_0-GGUF --hf-file smolvlm2-2.2b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo JulianChang/SmolVLM2-2.2B-Instruct-Q8_0-GGUF --hf-file smolvlm2-2.2b-instruct-q8_0.gguf -c 2048
```
|
KamelWerhani/Phi-4-mini-ROS2-Navigation-Ontology
|
KamelWerhani
| 2025-06-18T15:37:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T15:37:50Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Zillis/2025_PAAMA_MODEL_J.EUN_PV8
|
Zillis
| 2025-06-18T15:32:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-18T10:02:42Z |































































|
souvickdascmsa019/initial-colbert-ir
|
souvickdascmsa019
| 2025-06-18T15:31:12Z | 0 | 0 |
PyLate
|
[
"PyLate",
"safetensors",
"bert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:242513",
"loss:Contrastive",
"en",
"dataset:reasonir/reasonir-data",
"arxiv:1908.10084",
"base_model:colbert-ir/colbertv2.0",
"base_model:finetune:colbert-ir/colbertv2.0",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-18T15:29:28Z |
---
language:
- en
tags:
- ColBERT
- PyLate
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:242513
- loss:Contrastive
base_model: colbert-ir/colbertv2.0
datasets:
- reasonir/reasonir-data
pipeline_tag: sentence-similarity
library_name: PyLate
metrics:
- accuracy
model-index:
- name: PyLate model based on colbert-ir/colbertv2.0
results:
- task:
type: col-berttriplet
name: Col BERTTriplet
dataset:
name: Unknown
type: unknown
metrics:
- type: accuracy
value: 0.92734694480896
name: Accuracy
---
# PyLate model based on colbert-ir/colbertv2.0
This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [colbert-ir/colbertv2.0](https://huggingface.co/colbert-ir/colbertv2.0) on the [reasonir-data](https://huggingface.co/datasets/reasonir/reasonir-data) dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
## Model Details
### Model Description
- **Model Type:** PyLate model
- **Base model:** [colbert-ir/colbertv2.0](https://huggingface.co/colbert-ir/colbertv2.0) <!-- at revision c1e84128e85ef755c096a95bdb06b47793b13acf -->
- **Document Length:** 180 tokens
- **Query Length:** 32 tokens
- **Output Dimensionality:** 128 tokens
- **Similarity Function:** MaxSim
- **Training Dataset:**
- [reasonir-data](https://huggingface.co/datasets/reasonir/reasonir-data)
- **Language:** en
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/)
- **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate)
- **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate)
### Full Model Architecture
```
ColBERT(
(0): Transformer({'max_seq_length': 179, 'do_lower_case': False}) with Transformer model: BertModel
(1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Usage
First install the PyLate library:
```bash
pip install -U pylate
```
### Retrieval
PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.
#### Indexing documents
First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve
# Step 1: Load the ColBERT model
model = models.ColBERT(
model_name_or_path=pylate_model_id,
)
# Step 2: Initialize the Voyager index
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
override=True, # This overwrites the existing index if any
)
# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]
documents_embeddings = model.encode(
documents,
batch_size=32,
is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries
show_progress_bar=True,
)
# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
documents_ids=documents_ids,
documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
)
```
#### Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries.
To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the top matches ids and relevance scores:
```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)
# Step 2: Encode the queries
queries_embeddings = model.encode(
["query for document 3", "query for document 1"],
batch_size=32,
is_query=True, # # Ensure that it is set to False to indicate that these are queries
show_progress_bar=True,
)
# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
queries_embeddings=queries_embeddings,
k=10, # Retrieve the top 10 matches for each query
)
```
### Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use rank function and pass the queries and documents to rerank:
```python
from pylate import rank, models
queries = [
"query A",
"query B",
]
documents = [
["document A", "document B"],
["document 1", "document C", "document B"],
]
documents_ids = [
[1, 2],
[1, 3, 2],
]
model = models.ColBERT(
model_name_or_path=pylate_model_id,
)
queries_embeddings = model.encode(
queries,
is_query=True,
)
documents_embeddings = model.encode(
documents,
is_query=False,
)
reranked_documents = rank.rerank(
documents_ids=documents_ids,
queries_embeddings=queries_embeddings,
documents_embeddings=documents_embeddings,
)
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Col BERTTriplet
* Evaluated with <code>pylate.evaluation.colbert_triplet.ColBERTTripletEvaluator</code>
| Metric | Value |
|:-------------|:-----------|
| **accuracy** | **0.9273** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### reasonir-data
* Dataset: [reasonir-data](https://huggingface.co/datasets/reasonir/reasonir-data) at [0275f82](https://huggingface.co/datasets/reasonir/reasonir-data/tree/0275f825929b206d4ead23d34b4f8a50d4eddbc8)
* Size: 242,513 training samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 26.76 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 19.89 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.32 tokens</li><li>max: 32 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Market analysis suggests that the ongoing trend of increased adoption of renewable energy sources will continue to drive the demand for solar panels in the coming years. According to various reports, the global solar panel market is projected to witness significant growth over the next decade, with some analysts predicting a compound annual growth rate (CAGR) of up to 20%. This growth is expected to be fueled by declining costs of production, government incentives, and growing environmental concerns. However, some experts also caution that the market may experience fluctuations due to trade policies, technological advancements, and changes in government regulations. Despite these challenges, the overall outlook for the solar panel market remains optimistic, with many companies investing heavily in research and development to improve efficiency and reduce costs. As the demand for renewable energy continues to rise, it is likely that the solar panel market will play a significant role in...</code> | <code>Contrary to the market analysis suggesting a compound annual growth rate (CAGR) of up to 20% for the global solar panel market, our financial reports indicate a more modest growth rate of 12% over the next decade. While we agree that declining production costs, government incentives, and growing environmental concerns will drive demand for solar panels, we also believe that trade policies, technological advancements, and changes in government regulations will have a more significant impact on the market than previously anticipated. Our projections suggest that the market will experience fluctuations, with some years experiencing higher growth rates than others. However, we do not anticipate the market to experience the same level of growth as predicted by other analysts. Our research indicates that the market will reach a saturation point, beyond which growth will slow down. Additionally, we believe that the impact of advancements in energy storage technologies on the solar panel marke...</code> | <code>The demand for solar panels has been on the rise in recent years, driven by an increase in environmental awareness and the need for sustainable energy sources. One of the key factors contributing to this growth is the decline in production costs. As technology advances, the cost of manufacturing solar panels has decreased, making them more affordable for consumers. Additionally, governments around the world have implemented policies and incentives to encourage the adoption of renewable energy sources, which has further boosted demand for solar panels. However, the solar panel market is not without its challenges. Trade policies and technological advancements can impact the market, and changes in government regulations can create uncertainty. Despite these challenges, the outlook for the solar panel market remains positive, with many companies investing heavily in research and development to improve efficiency and reduce costs. The development of new technologies, such as bifacial panel...</code> |
| <code>As the sun set over the vast savannah, a sense of tranquility washed over the pride of lions. Their tawny coats glistened in the fading light, and the sound of crickets provided a soothing background hum. Nearby, a group of humans, armed with cameras and curiosity, observed the wild animals from a safe distance.</code> | <code>The lions lazed in the shade of a nearby tree, their tawny coats a blur as they basked in the warmth. The visitors watched in awe, clicking away at their cameras to capture the majesty of the wild animals. Crickets provided a constant, soothing background noise as the humans took care to keep a safe distance from the pride.</code> | <code>The city's tree planting initiative has been a huge success, providing a serene oasis in the midst of the bustling metropolis. The sounds of the city – car horns, chatter and crickets – blend together to create a symphony of noise. While many humans have been drawn to the tranquility of the park, others have raised concerns about the integration of urban wildlife.</code> |
| <code>Recent advancements in the field of artificial intelligence have led to significant breakthroughs in natural language processing. This has far-reaching implications for various industries, including education, where AI-powered chatbots can enhance student learning experiences by providing personalization and real-time feedback. Moreover, the integration of AI in educational settings can help address issues of accessibility and equity.</code> | <code>The rapid expansion of AI research has yielded substantial progress in natural language processing, allowing for the development of more sophisticated AI-powered tools. In the education sector, AI-driven chatbots can facilitate individualized learning and offer instantaneous feedback, thereby enriching the overall learning environment. However, it is crucial to address concerns surrounding the digital divide to ensure that these technological advancements are accessible to all.</code> | <code>One of the primary challenges facing archaeologists today is the authentication of ancient artifacts, which often involves meticulous analysis of relics and literary texts. The discovery of a previously unknown scroll, buried deep within the labyrinthine passages of an Egyptian tomb, shed new light on the role of language in ancient cultures. Interestingly, the sophisticated syntax and nuanced vocabulary of the ancient Egyptian language have some similarities with modern-day linguistic structures.</code> |
* Loss: <code>pylate.losses.contrastive.Contrastive</code>
### Evaluation Dataset
#### reasonir-data
* Dataset: [reasonir-data](https://huggingface.co/datasets/reasonir/reasonir-data) at [0275f82](https://huggingface.co/datasets/reasonir/reasonir-data/tree/0275f825929b206d4ead23d34b4f8a50d4eddbc8)
* Size: 2,450 evaluation samples
* Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | positive | negative |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 6 tokens</li><li>mean: 26.92 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 19.98 tokens</li><li>max: 32 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 20.5 tokens</li><li>max: 32 tokens</li></ul> |
* Samples:
| query | positive | negative |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>In the recent speech, the politician claimed that the current tax reform will benefit the middle class and lead to a significant increase in economic growth. The politician stated, 'Our plan is to cut taxes across the board, simplifying the tax code and making it fairer for all Americans. We're taking the money from the unfair and complex system and putting it back in the pockets of the hardworking people.' The politician also emphasized the effects of the cut taxes, 'When we cut taxes, we're putting more money in the hands of our small business owners and workers, who are the backbone of our economy. This means new jobs will be created, wages will rise, and the people who actually create the jobs, the entrepreneurs, will have the funds needed to invest more into their businesses.' Moreover, the politician asserted, 'Our country will witness a major boost in job creation and economic growth which in turn will positively affect local communities all around the country.' Furthermore, the...</code> | <code>According to reputable sources in economics research, the tax reform that the administration is trying to implement closely resembles that of the 2001 and the 2003 cuts under former President George Bush and that of 1981 under President Ronald Reagan who reduced tax rates 23% and 25% respectively. Research carried out by a major university indicated that these reforms only yielded an estimated 10% increase in tax revenue, since the decrease in tax income could result in compensated revenues through economic stimulation. Some studies were actually pointing to the idea that no trickle-down economics apply as more funds were being placed in the already wealthy communities. This change could shift the economical inequalities to an extreme and showed a direct relationship between a tax reduction and a large national deficit increase. Employer demand for the borderline employee may not actually increase from the creation of the new jobs, and economists believed. The variation of wages for jo...</code> | <code>The concept of a universal basic income has been a topic of discussion among economists and policymakers in recent years. While some see it as a viable solution to poverty and economic inequality, others argue that it is not feasible due to the financial constraints it would impose on governments. One of the main concerns is that implementing a universal basic income would require significant funding, which would likely come from increased taxes or redistribution of existing social welfare funds. Critics argue that this could lead to a decrease in economic growth, as people may be less incentivized to work if they are receiving a guaranteed income. On the other hand, proponents argue that a universal basic income would provide a safety net for the most vulnerable members of society and allow people to pursue meaningful work rather than just taking any job for the sake of a paycheck. Some countries have experimented with universal basic income pilots, but the results have been mixed. Fi...</code> |
| <code>Recent advances in super-resolution techniques have led to a greater understanding of many sub-cellular structures and have opened up new avenues for exploring cellular behavior at the nanoscale. Fluorescence imaging, in particular, has greatly benefited from these advances and has enabled researchers to visualize the distribution and dynamics of proteins in real time. However, further developments in fluorescence imaging rely on a better comprehension of the interactions between imaging probes and their molecular environment. A crucial factor in these interactions is the size and shape of the probes, which must be optimized to minimize disruption of the native dynamics of the system while also achieving high fluorescence yields. The DNA-based probes have emerged as a promising solution, offering the opportunity to tune the size and shape of the probes to optimize performance.</code> | <code>Microscopy</code> | <code>Biophysics</code> |
| <code>I recently purchased this top-of-the-line smartwatch for my birthday, and I must say that it has been a revelation in terms of keeping track of my vital signs and daily activity levels. The watch has an elegant design that doesn't clash with my other wearable accessories, and I love how the touchscreen display lets me access a wealth of health metrics at a glance. Although I've encountered several instances where the heart rate monitoring system was delayed in capturing accurate readings, this minor shortcoming hardly detracts from my overall satisfaction with the product. The value proposition it presents in terms of quality, accuracy, and ascendancy over competing offerings makes it a compelling option in this class of devices. Despite never having owned one before, I found the smartwatch straightforward to use, and the companion app did an excellent job of simplifying the tracking and analysis of my fitness journey. Nothing in particular distinguishes this product's methodology in c...</code> | <code>authentic</code> | <code>fake</code> |
* Loss: <code>pylate.losses.contrastive.Contrastive</code>
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `gradient_accumulation_steps`: 2
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `warmup_steps`: 100
- `fp16`: True
- `remove_unused_columns`: False
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 2
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 100
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: False
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss | accuracy |
|:------:|:-----:|:-------------:|:---------------:|:--------:|
| 0.0066 | 50 | 3.9255 | - | - |
| 0.0132 | 100 | 1.7945 | - | - |
| 0.0198 | 150 | 1.5522 | - | - |
| 0.0264 | 200 | 1.6267 | - | - |
| 0.0330 | 250 | 1.5434 | - | - |
| 0 | 0 | - | - | 0.8714 |
| 0.0330 | 250 | - | 0.8547 | - |
| 0.0396 | 300 | 1.3113 | - | - |
| 0.0462 | 350 | 1.3674 | - | - |
| 0.0528 | 400 | 1.3417 | - | - |
| 0.0594 | 450 | 1.2831 | - | - |
| 0.0660 | 500 | 1.2243 | - | - |
| 0 | 0 | - | - | 0.8820 |
| 0.0660 | 500 | - | 0.7873 | - |
| 0.0726 | 550 | 1.2276 | - | - |
| 0.0792 | 600 | 1.2502 | - | - |
| 0.0858 | 650 | 1.2247 | - | - |
| 0.0924 | 700 | 1.178 | - | - |
| 0.0990 | 750 | 1.2379 | - | - |
| 0 | 0 | - | - | 0.8931 |
| 0.0990 | 750 | - | 0.7503 | - |
| 0.1056 | 800 | 1.3893 | - | - |
| 0.1122 | 850 | 1.1852 | - | - |
| 0.1187 | 900 | 1.1082 | - | - |
| 0.1253 | 950 | 0.9946 | - | - |
| 0.1319 | 1000 | 1.1834 | - | - |
| 0 | 0 | - | - | 0.8996 |
| 0.1319 | 1000 | - | 0.7309 | - |
| 0.1385 | 1050 | 1.1556 | - | - |
| 0.1451 | 1100 | 1.0251 | - | - |
| 0.1517 | 1150 | 1.1943 | - | - |
| 0.1583 | 1200 | 1.086 | - | - |
| 0.1649 | 1250 | 1.1236 | - | - |
| 0 | 0 | - | - | 0.9008 |
| 0.1649 | 1250 | - | 0.6946 | - |
| 0.1715 | 1300 | 1.0485 | - | - |
| 0.1781 | 1350 | 0.9481 | - | - |
| 0.1847 | 1400 | 1.0898 | - | - |
| 0.1913 | 1450 | 1.0863 | - | - |
| 0.1979 | 1500 | 1.0756 | - | - |
| 0 | 0 | - | - | 0.9037 |
| 0.1979 | 1500 | - | 0.6747 | - |
| 0.2045 | 1550 | 0.9973 | - | - |
| 0.2111 | 1600 | 1.1098 | - | - |
| 0.2177 | 1650 | 1.1745 | - | - |
| 0.2243 | 1700 | 0.9654 | - | - |
| 0.2309 | 1750 | 1.0919 | - | - |
| 0 | 0 | - | - | 0.9094 |
| 0.2309 | 1750 | - | 0.6499 | - |
| 0.2375 | 1800 | 1.0249 | - | - |
| 0.2441 | 1850 | 0.9863 | - | - |
| 0.2507 | 1900 | 1.1091 | - | - |
| 0.2573 | 1950 | 1.0989 | - | - |
| 0.2639 | 2000 | 1.0662 | - | - |
| 0 | 0 | - | - | 0.9065 |
| 0.2639 | 2000 | - | 0.6661 | - |
| 0.2705 | 2050 | 1.0456 | - | - |
| 0.2771 | 2100 | 1.1349 | - | - |
| 0.2837 | 2150 | 1.0111 | - | - |
| 0.2903 | 2200 | 1.026 | - | - |
| 0.2969 | 2250 | 0.9415 | - | - |
| 0 | 0 | - | - | 0.9073 |
| 0.2969 | 2250 | - | 0.6390 | - |
| 0.3035 | 2300 | 0.9761 | - | - |
| 0.3101 | 2350 | 0.9748 | - | - |
| 0.3167 | 2400 | 1.0238 | - | - |
| 0.3233 | 2450 | 1.0456 | - | - |
| 0.3299 | 2500 | 0.9895 | - | - |
| 0 | 0 | - | - | 0.9110 |
| 0.3299 | 2500 | - | 0.6435 | - |
| 0.3365 | 2550 | 0.8796 | - | - |
| 0.3431 | 2600 | 1.0172 | - | - |
| 0.3497 | 2650 | 1.014 | - | - |
| 0.3562 | 2700 | 0.9748 | - | - |
| 0.3628 | 2750 | 0.9273 | - | - |
| 0 | 0 | - | - | 0.9082 |
| 0.3628 | 2750 | - | 0.6303 | - |
| 0.3694 | 2800 | 1.0122 | - | - |
| 0.3760 | 2850 | 1.0054 | - | - |
| 0.3826 | 2900 | 0.8974 | - | - |
| 0.3892 | 2950 | 0.9396 | - | - |
| 0.3958 | 3000 | 0.8734 | - | - |
| 0 | 0 | - | - | 0.9049 |
| 0.3958 | 3000 | - | 0.6238 | - |
| 0.4024 | 3050 | 1.0048 | - | - |
| 0.4090 | 3100 | 0.9701 | - | - |
| 0.4156 | 3150 | 0.9924 | - | - |
| 0.4222 | 3200 | 0.9349 | - | - |
| 0.4288 | 3250 | 0.974 | - | - |
| 0 | 0 | - | - | 0.9118 |
| 0.4288 | 3250 | - | 0.6216 | - |
| 0.4354 | 3300 | 1.0539 | - | - |
| 0.4420 | 3350 | 0.9389 | - | - |
| 0.4486 | 3400 | 0.9171 | - | - |
| 0.4552 | 3450 | 0.9706 | - | - |
| 0.4618 | 3500 | 1.0124 | - | - |
| 0 | 0 | - | - | 0.9065 |
| 0.4618 | 3500 | - | 0.6126 | - |
| 0.4684 | 3550 | 0.9215 | - | - |
| 0.4750 | 3600 | 0.8563 | - | - |
| 0.4816 | 3650 | 0.8249 | - | - |
| 0.4882 | 3700 | 0.8794 | - | - |
| 0.4948 | 3750 | 1.0013 | - | - |
| 0 | 0 | - | - | 0.9078 |
| 0.4948 | 3750 | - | 0.6022 | - |
| 0.5014 | 3800 | 0.922 | - | - |
| 0.5080 | 3850 | 0.9168 | - | - |
| 0.5146 | 3900 | 0.9315 | - | - |
| 0.5212 | 3950 | 0.9265 | - | - |
| 0.5278 | 4000 | 0.9453 | - | - |
| 0 | 0 | - | - | 0.9078 |
| 0.5278 | 4000 | - | 0.6083 | - |
| 0.5344 | 4050 | 0.9585 | - | - |
| 0.5410 | 4100 | 0.9886 | - | - |
| 0.5476 | 4150 | 0.9081 | - | - |
| 0.5542 | 4200 | 0.8181 | - | - |
| 0.5608 | 4250 | 0.8806 | - | - |
| 0 | 0 | - | - | 0.9118 |
| 0.5608 | 4250 | - | 0.5918 | - |
| 0.5674 | 4300 | 0.858 | - | - |
| 0.5740 | 4350 | 0.8781 | - | - |
| 0.5806 | 4400 | 0.9059 | - | - |
| 0.5871 | 4450 | 0.8475 | - | - |
| 0.5937 | 4500 | 0.9649 | - | - |
| 0 | 0 | - | - | 0.9057 |
| 0.5937 | 4500 | - | 0.5951 | - |
| 0.6003 | 4550 | 0.969 | - | - |
| 0.6069 | 4600 | 0.8685 | - | - |
| 0.6135 | 4650 | 0.9555 | - | - |
| 0.6201 | 4700 | 0.9166 | - | - |
| 0.6267 | 4750 | 0.877 | - | - |
| 0 | 0 | - | - | 0.9073 |
| 0.6267 | 4750 | - | 0.5858 | - |
| 0.6333 | 4800 | 0.938 | - | - |
| 0.6399 | 4850 | 0.9211 | - | - |
| 0.6465 | 4900 | 0.9699 | - | - |
| 0.6531 | 4950 | 0.8818 | - | - |
| 0.6597 | 5000 | 0.9814 | - | - |
| 0 | 0 | - | - | 0.9176 |
| 0.6597 | 5000 | - | 0.5705 | - |
| 0.6663 | 5050 | 0.8588 | - | - |
| 0.6729 | 5100 | 0.8922 | - | - |
| 0.6795 | 5150 | 1.0096 | - | - |
| 0.6861 | 5200 | 0.9217 | - | - |
| 0.6927 | 5250 | 0.9446 | - | - |
| 0 | 0 | - | - | 0.9147 |
| 0.6927 | 5250 | - | 0.5740 | - |
| 0.6993 | 5300 | 0.9301 | - | - |
| 0.7059 | 5350 | 0.8436 | - | - |
| 0.7125 | 5400 | 0.8547 | - | - |
| 0.7191 | 5450 | 0.9552 | - | - |
| 0.7257 | 5500 | 0.9227 | - | - |
| 0 | 0 | - | - | 0.9135 |
| 0.7257 | 5500 | - | 0.5913 | - |
| 0.7323 | 5550 | 0.8813 | - | - |
| 0.7389 | 5600 | 0.8519 | - | - |
| 0.7455 | 5650 | 0.8223 | - | - |
| 0.7521 | 5700 | 0.8603 | - | - |
| 0.7587 | 5750 | 0.8208 | - | - |
| 0 | 0 | - | - | 0.9151 |
| 0.7587 | 5750 | - | 0.5698 | - |
| 0.7653 | 5800 | 0.8542 | - | - |
| 0.7719 | 5850 | 0.7924 | - | - |
| 0.7785 | 5900 | 0.9238 | - | - |
| 0.7851 | 5950 | 0.8303 | - | - |
| 0.7917 | 6000 | 0.8254 | - | - |
| 0 | 0 | - | - | 0.9159 |
| 0.7917 | 6000 | - | 0.5643 | - |
| 0.7983 | 6050 | 0.8556 | - | - |
| 0.8049 | 6100 | 0.9286 | - | - |
| 0.8115 | 6150 | 0.8776 | - | - |
| 0.8180 | 6200 | 0.8146 | - | - |
| 0.8246 | 6250 | 0.8469 | - | - |
| 0 | 0 | - | - | 0.9127 |
| 0.8246 | 6250 | - | 0.5627 | - |
| 0.8312 | 6300 | 0.9719 | - | - |
| 0.8378 | 6350 | 0.9297 | - | - |
| 0.8444 | 6400 | 0.896 | - | - |
| 0.8510 | 6450 | 0.8709 | - | - |
| 0.8576 | 6500 | 0.9436 | - | - |
| 0 | 0 | - | - | 0.9159 |
| 0.8576 | 6500 | - | 0.5638 | - |
| 0.8642 | 6550 | 0.8938 | - | - |
| 0.8708 | 6600 | 0.8065 | - | - |
| 0.8774 | 6650 | 0.8281 | - | - |
| 0.8840 | 6700 | 0.8449 | - | - |
| 0.8906 | 6750 | 0.813 | - | - |
| 0 | 0 | - | - | 0.9167 |
| 0.8906 | 6750 | - | 0.5694 | - |
| 0.8972 | 6800 | 0.9052 | - | - |
| 0.9038 | 6850 | 0.9501 | - | - |
| 0.9104 | 6900 | 0.9612 | - | - |
| 0.9170 | 6950 | 0.8649 | - | - |
| 0.9236 | 7000 | 0.7366 | - | - |
| 0 | 0 | - | - | 0.9188 |
| 0.9236 | 7000 | - | 0.5691 | - |
| 0.9302 | 7050 | 0.9621 | - | - |
| 0.9368 | 7100 | 0.9154 | - | - |
| 0.9434 | 7150 | 0.8617 | - | - |
| 0.9500 | 7200 | 0.8629 | - | - |
| 0.9566 | 7250 | 0.899 | - | - |
| 0 | 0 | - | - | 0.9159 |
| 0.9566 | 7250 | - | 0.5559 | - |
| 0.9632 | 7300 | 0.7766 | - | - |
| 0.9698 | 7350 | 0.8968 | - | - |
| 0.9764 | 7400 | 0.8462 | - | - |
| 0.9830 | 7450 | 0.8478 | - | - |
| 0.9896 | 7500 | 0.8184 | - | - |
| 0 | 0 | - | - | 0.9163 |
| 0.9896 | 7500 | - | 0.5564 | - |
| 0.9962 | 7550 | 0.8445 | - | - |
| 1.0028 | 7600 | 0.7305 | - | - |
| 1.0094 | 7650 | 0.695 | - | - |
| 1.0160 | 7700 | 0.779 | - | - |
| 1.0226 | 7750 | 0.5876 | - | - |
| 0 | 0 | - | - | 0.9184 |
| 1.0226 | 7750 | - | 0.5776 | - |
| 1.0292 | 7800 | 0.6372 | - | - |
| 1.0358 | 7850 | 0.7066 | - | - |
| 1.0424 | 7900 | 0.6561 | - | - |
| 1.0490 | 7950 | 0.6854 | - | - |
| 1.0555 | 8000 | 0.7083 | - | - |
| 0 | 0 | - | - | 0.9212 |
| 1.0555 | 8000 | - | 0.5645 | - |
| 1.0621 | 8050 | 0.6618 | - | - |
| 1.0687 | 8100 | 0.6602 | - | - |
| 1.0753 | 8150 | 0.7141 | - | - |
| 1.0819 | 8200 | 0.7599 | - | - |
| 1.0885 | 8250 | 0.6307 | - | - |
| 0 | 0 | - | - | 0.9159 |
| 1.0885 | 8250 | - | 0.5608 | - |
| 1.0951 | 8300 | 0.6611 | - | - |
| 1.1017 | 8350 | 0.6308 | - | - |
| 1.1083 | 8400 | 0.7035 | - | - |
| 1.1149 | 8450 | 0.683 | - | - |
| 1.1215 | 8500 | 0.7077 | - | - |
| 0 | 0 | - | - | 0.9184 |
| 1.1215 | 8500 | - | 0.5556 | - |
| 1.1281 | 8550 | 0.7153 | - | - |
| 1.1347 | 8600 | 0.6186 | - | - |
| 1.1413 | 8650 | 0.6289 | - | - |
| 1.1479 | 8700 | 0.5718 | - | - |
| 1.1545 | 8750 | 0.5749 | - | - |
| 0 | 0 | - | - | 0.9167 |
| 1.1545 | 8750 | - | 0.5695 | - |
| 1.1611 | 8800 | 0.6788 | - | - |
| 1.1677 | 8850 | 0.7731 | - | - |
| 1.1743 | 8900 | 0.6954 | - | - |
| 1.1809 | 8950 | 0.7404 | - | - |
| 1.1875 | 9000 | 0.6871 | - | - |
| 0 | 0 | - | - | 0.9208 |
| 1.1875 | 9000 | - | 0.5666 | - |
| 1.1941 | 9050 | 0.6415 | - | - |
| 1.2007 | 9100 | 0.6517 | - | - |
| 1.2073 | 9150 | 0.7354 | - | - |
| 1.2139 | 9200 | 0.7325 | - | - |
| 1.2205 | 9250 | 0.6272 | - | - |
| 0 | 0 | - | - | 0.9147 |
| 1.2205 | 9250 | - | 0.5714 | - |
| 1.2271 | 9300 | 0.7292 | - | - |
| 1.2337 | 9350 | 0.6325 | - | - |
| 1.2403 | 9400 | 0.6344 | - | - |
| 1.2469 | 9450 | 0.7218 | - | - |
| 1.2535 | 9500 | 0.6815 | - | - |
| 0 | 0 | - | - | 0.9176 |
| 1.2535 | 9500 | - | 0.5651 | - |
| 1.2601 | 9550 | 0.7186 | - | - |
| 1.2667 | 9600 | 0.6145 | - | - |
| 1.2733 | 9650 | 0.7095 | - | - |
| 1.2799 | 9700 | 0.674 | - | - |
| 1.2864 | 9750 | 0.7405 | - | - |
| 0 | 0 | - | - | 0.9200 |
| 1.2864 | 9750 | - | 0.5666 | - |
| 1.2930 | 9800 | 0.7186 | - | - |
| 1.2996 | 9850 | 0.6352 | - | - |
| 1.3062 | 9900 | 0.7077 | - | - |
| 1.3128 | 9950 | 0.6873 | - | - |
| 1.3194 | 10000 | 0.5939 | - | - |
| 0 | 0 | - | - | 0.9204 |
| 1.3194 | 10000 | - | 0.5752 | - |
| 1.3260 | 10050 | 0.7171 | - | - |
| 1.3326 | 10100 | 0.6592 | - | - |
| 1.3392 | 10150 | 0.6631 | - | - |
| 1.3458 | 10200 | 0.7658 | - | - |
| 1.3524 | 10250 | 0.6213 | - | - |
| 0 | 0 | - | - | 0.9180 |
| 1.3524 | 10250 | - | 0.5678 | - |
| 1.3590 | 10300 | 0.6486 | - | - |
| 1.3656 | 10350 | 0.662 | - | - |
| 1.3722 | 10400 | 0.6924 | - | - |
| 1.3788 | 10450 | 0.7106 | - | - |
| 1.3854 | 10500 | 0.7239 | - | - |
| 0 | 0 | - | - | 0.9184 |
| 1.3854 | 10500 | - | 0.5687 | - |
| 1.3920 | 10550 | 0.735 | - | - |
| 1.3986 | 10600 | 0.6784 | - | - |
| 1.4052 | 10650 | 0.6886 | - | - |
| 1.4118 | 10700 | 0.649 | - | - |
| 1.4184 | 10750 | 0.6133 | - | - |
| 0 | 0 | - | - | 0.9200 |
| 1.4184 | 10750 | - | 0.5683 | - |
| 1.4250 | 10800 | 0.6635 | - | - |
| 1.4316 | 10850 | 0.6803 | - | - |
| 1.4382 | 10900 | 0.6497 | - | - |
| 1.4448 | 10950 | 0.6812 | - | - |
| 1.4514 | 11000 | 0.7493 | - | - |
| 0 | 0 | - | - | 0.9220 |
| 1.4514 | 11000 | - | 0.5587 | - |
| 1.4580 | 11050 | 0.6694 | - | - |
| 1.4646 | 11100 | 0.6782 | - | - |
| 1.4712 | 11150 | 0.6839 | - | - |
| 1.4778 | 11200 | 0.671 | - | - |
| 1.4844 | 11250 | 0.7648 | - | - |
| 0 | 0 | - | - | 0.9208 |
| 1.4844 | 11250 | - | 0.5466 | - |
| 1.4910 | 11300 | 0.7448 | - | - |
| 1.4976 | 11350 | 0.6811 | - | - |
| 1.5042 | 11400 | 0.6984 | - | - |
| 1.5108 | 11450 | 0.6676 | - | - |
| 1.5174 | 11500 | 0.7054 | - | - |
| 0 | 0 | - | - | 0.9204 |
| 1.5174 | 11500 | - | 0.5569 | - |
| 1.5239 | 11550 | 0.6109 | - | - |
| 1.5305 | 11600 | 0.7581 | - | - |
| 1.5371 | 11650 | 0.7035 | - | - |
| 1.5437 | 11700 | 0.6943 | - | - |
| 1.5503 | 11750 | 0.6225 | - | - |
| 0 | 0 | - | - | 0.9224 |
| 1.5503 | 11750 | - | 0.5571 | - |
| 1.5569 | 11800 | 0.661 | - | - |
| 1.5635 | 11850 | 0.635 | - | - |
| 1.5701 | 11900 | 0.613 | - | - |
| 1.5767 | 11950 | 0.6502 | - | - |
| 1.5833 | 12000 | 0.6935 | - | - |
| 0 | 0 | - | - | 0.9200 |
| 1.5833 | 12000 | - | 0.5579 | - |
| 1.5899 | 12050 | 0.6147 | - | - |
| 1.5965 | 12100 | 0.6575 | - | - |
| 1.6031 | 12150 | 0.6837 | - | - |
| 1.6097 | 12200 | 0.7437 | - | - |
| 1.6163 | 12250 | 0.6808 | - | - |
| 0 | 0 | - | - | 0.9204 |
| 1.6163 | 12250 | - | 0.5507 | - |
| 1.6229 | 12300 | 0.6698 | - | - |
| 1.6295 | 12350 | 0.6803 | - | - |
| 1.6361 | 12400 | 0.676 | - | - |
| 1.6427 | 12450 | 0.6418 | - | - |
| 1.6493 | 12500 | 0.6042 | - | - |
| 0 | 0 | - | - | 0.9188 |
| 1.6493 | 12500 | - | 0.5563 | - |
| 1.6559 | 12550 | 0.7139 | - | - |
| 1.6625 | 12600 | 0.6995 | - | - |
| 1.6691 | 12650 | 0.6097 | - | - |
| 1.6757 | 12700 | 0.6407 | - | - |
| 1.6823 | 12750 | 0.5994 | - | - |
| 0 | 0 | - | - | 0.9249 |
| 1.6823 | 12750 | - | 0.5621 | - |
| 1.6889 | 12800 | 0.6642 | - | - |
| 1.6955 | 12850 | 0.6198 | - | - |
| 1.7021 | 12900 | 0.6648 | - | - |
| 1.7087 | 12950 | 0.5644 | - | - |
| 1.7153 | 13000 | 0.6531 | - | - |
| 0 | 0 | - | - | 0.9241 |
| 1.7153 | 13000 | - | 0.5617 | - |
| 1.7219 | 13050 | 0.6159 | - | - |
| 1.7285 | 13100 | 0.7855 | - | - |
| 1.7351 | 13150 | 0.6307 | - | - |
| 1.7417 | 13200 | 0.61 | - | - |
| 1.7483 | 13250 | 0.6672 | - | - |
| 0 | 0 | - | - | 0.9237 |
| 1.7483 | 13250 | - | 0.5589 | - |
| 1.7548 | 13300 | 0.6002 | - | - |
| 1.7614 | 13350 | 0.6638 | - | - |
| 1.7680 | 13400 | 0.6112 | - | - |
| 1.7746 | 13450 | 0.6236 | - | - |
| 1.7812 | 13500 | 0.6245 | - | - |
| 0 | 0 | - | - | 0.9220 |
| 1.7812 | 13500 | - | 0.5580 | - |
| 1.7878 | 13550 | 0.7146 | - | - |
| 1.7944 | 13600 | 0.5969 | - | - |
| 1.8010 | 13650 | 0.7246 | - | - |
| 1.8076 | 13700 | 0.65 | - | - |
| 1.8142 | 13750 | 0.7136 | - | - |
| 0 | 0 | - | - | 0.9204 |
| 1.8142 | 13750 | - | 0.5533 | - |
| 1.8208 | 13800 | 0.7062 | - | - |
| 1.8274 | 13850 | 0.6987 | - | - |
| 1.8340 | 13900 | 0.6642 | - | - |
| 1.8406 | 13950 | 0.6761 | - | - |
| 1.8472 | 14000 | 0.6766 | - | - |
| 0 | 0 | - | - | 0.9212 |
| 1.8472 | 14000 | - | 0.5655 | - |
| 1.8538 | 14050 | 0.5758 | - | - |
| 1.8604 | 14100 | 0.6594 | - | - |
| 1.8670 | 14150 | 0.7866 | - | - |
| 1.8736 | 14200 | 0.5798 | - | - |
| 1.8802 | 14250 | 0.6472 | - | - |
| 0 | 0 | - | - | 0.9212 |
| 1.8802 | 14250 | - | 0.5509 | - |
| 1.8868 | 14300 | 0.7387 | - | - |
| 1.8934 | 14350 | 0.6677 | - | - |
| 1.9000 | 14400 | 0.6697 | - | - |
| 1.9066 | 14450 | 0.6711 | - | - |
| 1.9132 | 14500 | 0.6988 | - | - |
| 0 | 0 | - | - | 0.9229 |
| 1.9132 | 14500 | - | 0.5528 | - |
| 1.9198 | 14550 | 0.6301 | - | - |
| 1.9264 | 14600 | 0.6259 | - | - |
| 1.9330 | 14650 | 0.6223 | - | - |
| 1.9396 | 14700 | 0.5702 | - | - |
| 1.9462 | 14750 | 0.6324 | - | - |
| 0 | 0 | - | - | 0.9253 |
| 1.9462 | 14750 | - | 0.5508 | - |
| 1.9528 | 14800 | 0.6409 | - | - |
| 1.9594 | 14850 | 0.6609 | - | - |
| 1.9660 | 14900 | 0.6581 | - | - |
| 1.9726 | 14950 | 0.6313 | - | - |
| 1.9792 | 15000 | 0.6191 | - | - |
| 0 | 0 | - | - | 0.9216 |
| 1.9792 | 15000 | - | 0.5452 | - |
| 1.9858 | 15050 | 0.6665 | - | - |
| 1.9923 | 15100 | 0.5907 | - | - |
| 1.9989 | 15150 | 0.6586 | - | - |
| 2.0055 | 15200 | 0.5673 | - | - |
| 2.0121 | 15250 | 0.5516 | - | - |
| 0 | 0 | - | - | 0.9233 |
| 2.0121 | 15250 | - | 0.5589 | - |
| 2.0187 | 15300 | 0.5012 | - | - |
| 2.0253 | 15350 | 0.5227 | - | - |
| 2.0319 | 15400 | 0.4449 | - | - |
| 2.0385 | 15450 | 0.4862 | - | - |
| 2.0451 | 15500 | 0.5413 | - | - |
| 0 | 0 | - | - | 0.9233 |
| 2.0451 | 15500 | - | 0.5642 | - |
| 2.0517 | 15550 | 0.5462 | - | - |
| 2.0583 | 15600 | 0.5318 | - | - |
| 2.0649 | 15650 | 0.5706 | - | - |
| 2.0715 | 15700 | 0.5055 | - | - |
| 2.0781 | 15750 | 0.6141 | - | - |
| 0 | 0 | - | - | 0.9233 |
| 2.0781 | 15750 | - | 0.5611 | - |
| 2.0847 | 15800 | 0.5247 | - | - |
| 2.0913 | 15850 | 0.4817 | - | - |
| 2.0979 | 15900 | 0.4599 | - | - |
| 2.1045 | 15950 | 0.5676 | - | - |
| 2.1111 | 16000 | 0.3992 | - | - |
| 0 | 0 | - | - | 0.9237 |
| 2.1111 | 16000 | - | 0.5720 | - |
| 2.1177 | 16050 | 0.5337 | - | - |
| 2.1243 | 16100 | 0.4641 | - | - |
| 2.1309 | 16150 | 0.5636 | - | - |
| 2.1375 | 16200 | 0.4811 | - | - |
| 2.1441 | 16250 | 0.499 | - | - |
| 0 | 0 | - | - | 0.9216 |
| 2.1441 | 16250 | - | 0.5673 | - |
| 2.1507 | 16300 | 0.5822 | - | - |
| 2.1573 | 16350 | 0.5935 | - | - |
| 2.1639 | 16400 | 0.5028 | - | - |
| 2.1705 | 16450 | 0.5118 | - | - |
| 2.1771 | 16500 | 0.5623 | - | - |
| 0 | 0 | - | - | 0.9261 |
| 2.1771 | 16500 | - | 0.5656 | - |
| 2.1837 | 16550 | 0.481 | - | - |
| 2.1903 | 16600 | 0.5461 | - | - |
| 2.1969 | 16650 | 0.5802 | - | - |
| 2.2035 | 16700 | 0.5269 | - | - |
| 2.2101 | 16750 | 0.5022 | - | - |
| 0 | 0 | - | - | 0.9220 |
| 2.2101 | 16750 | - | 0.5671 | - |
| 2.2167 | 16800 | 0.5203 | - | - |
| 2.2232 | 16850 | 0.5461 | - | - |
| 2.2298 | 16900 | 0.5711 | - | - |
| 2.2364 | 16950 | 0.5615 | - | - |
| 2.2430 | 17000 | 0.5748 | - | - |
| 0 | 0 | - | - | 0.9257 |
| 2.2430 | 17000 | - | 0.5605 | - |
| 2.2496 | 17050 | 0.5272 | - | - |
| 2.2562 | 17100 | 0.4401 | - | - |
| 2.2628 | 17150 | 0.5158 | - | - |
| 2.2694 | 17200 | 0.5163 | - | - |
| 2.2760 | 17250 | 0.5195 | - | - |
| 0 | 0 | - | - | 0.9237 |
| 2.2760 | 17250 | - | 0.5647 | - |
| 2.2826 | 17300 | 0.5235 | - | - |
| 2.2892 | 17350 | 0.5335 | - | - |
| 2.2958 | 17400 | 0.4915 | - | - |
| 2.3024 | 17450 | 0.4915 | - | - |
| 2.3090 | 17500 | 0.4959 | - | - |
| 0 | 0 | - | - | 0.9233 |
| 2.3090 | 17500 | - | 0.5675 | - |
| 2.3156 | 17550 | 0.5161 | - | - |
| 2.3222 | 17600 | 0.4944 | - | - |
| 2.3288 | 17650 | 0.5052 | - | - |
| 2.3354 | 17700 | 0.4937 | - | - |
| 2.3420 | 17750 | 0.4695 | - | - |
| 0 | 0 | - | - | 0.9253 |
| 2.3420 | 17750 | - | 0.5615 | - |
| 2.3486 | 17800 | 0.5159 | - | - |
| 2.3552 | 17850 | 0.4992 | - | - |
| 2.3618 | 17900 | 0.5288 | - | - |
| 2.3684 | 17950 | 0.5247 | - | - |
| 2.3750 | 18000 | 0.5491 | - | - |
| 0 | 0 | - | - | 0.9257 |
| 2.3750 | 18000 | - | 0.5594 | - |
| 2.3816 | 18050 | 0.5332 | - | - |
| 2.3882 | 18100 | 0.529 | - | - |
| 2.3948 | 18150 | 0.5534 | - | - |
| 2.4014 | 18200 | 0.5595 | - | - |
| 2.4080 | 18250 | 0.573 | - | - |
| 0 | 0 | - | - | 0.9261 |
| 2.4080 | 18250 | - | 0.5610 | - |
| 2.4146 | 18300 | 0.4859 | - | - |
| 2.4212 | 18350 | 0.5019 | - | - |
| 2.4278 | 18400 | 0.4771 | - | - |
| 2.4344 | 18450 | 0.5062 | - | - |
| 2.4410 | 18500 | 0.5342 | - | - |
| 0 | 0 | - | - | 0.9229 |
| 2.4410 | 18500 | - | 0.5617 | - |
| 2.4476 | 18550 | 0.5275 | - | - |
| 2.4541 | 18600 | 0.576 | - | - |
| 2.4607 | 18650 | 0.5172 | - | - |
| 2.4673 | 18700 | 0.5127 | - | - |
| 2.4739 | 18750 | 0.4728 | - | - |
| 0 | 0 | - | - | 0.9249 |
| 2.4739 | 18750 | - | 0.5651 | - |
| 2.4805 | 18800 | 0.4256 | - | - |
| 2.4871 | 18850 | 0.4493 | - | - |
| 2.4937 | 18900 | 0.4881 | - | - |
| 2.5003 | 18950 | 0.4843 | - | - |
| 2.5069 | 19000 | 0.517 | - | - |
| 0 | 0 | - | - | 0.9249 |
| 2.5069 | 19000 | - | 0.5626 | - |
| 2.5135 | 19050 | 0.5927 | - | - |
| 2.5201 | 19100 | 0.5687 | - | - |
| 2.5267 | 19150 | 0.5261 | - | - |
| 2.5333 | 19200 | 0.5698 | - | - |
| 2.5399 | 19250 | 0.5593 | - | - |
| 0 | 0 | - | - | 0.9269 |
| 2.5399 | 19250 | - | 0.5581 | - |
| 2.5465 | 19300 | 0.571 | - | - |
| 2.5531 | 19350 | 0.5606 | - | - |
| 2.5597 | 19400 | 0.4912 | - | - |
| 2.5663 | 19450 | 0.4805 | - | - |
| 2.5729 | 19500 | 0.5324 | - | - |
| 0 | 0 | - | - | 0.9282 |
| 2.5729 | 19500 | - | 0.5537 | - |
| 2.5795 | 19550 | 0.5584 | - | - |
| 2.5861 | 19600 | 0.508 | - | - |
| 2.5927 | 19650 | 0.5231 | - | - |
| 2.5993 | 19700 | 0.557 | - | - |
| 2.6059 | 19750 | 0.5338 | - | - |
| 0 | 0 | - | - | 0.9257 |
| 2.6059 | 19750 | - | 0.5518 | - |
| 2.6125 | 19800 | 0.5037 | - | - |
| 2.6191 | 19850 | 0.6057 | - | - |
| 2.6257 | 19900 | 0.5571 | - | - |
| 2.6323 | 19950 | 0.5177 | - | - |
| 2.6389 | 20000 | 0.4946 | - | - |
| 0 | 0 | - | - | 0.9253 |
| 2.6389 | 20000 | - | 0.5548 | - |
| 2.6455 | 20050 | 0.5256 | - | - |
| 2.6521 | 20100 | 0.5107 | - | - |
| 2.6587 | 20150 | 0.5988 | - | - |
| 2.6653 | 20200 | 0.4907 | - | - |
| 2.6719 | 20250 | 0.4697 | - | - |
| 0 | 0 | - | - | 0.9269 |
| 2.6719 | 20250 | - | 0.5566 | - |
| 2.6785 | 20300 | 0.4897 | - | - |
| 2.6851 | 20350 | 0.5088 | - | - |
| 2.6916 | 20400 | 0.5442 | - | - |
| 2.6982 | 20450 | 0.536 | - | - |
| 2.7048 | 20500 | 0.551 | - | - |
| 0 | 0 | - | - | 0.9269 |
| 2.7048 | 20500 | - | 0.5562 | - |
| 2.7114 | 20550 | 0.5038 | - | - |
| 2.7180 | 20600 | 0.502 | - | - |
| 2.7246 | 20650 | 0.5021 | - | - |
| 2.7312 | 20700 | 0.5441 | - | - |
| 2.7378 | 20750 | 0.4818 | - | - |
| 0 | 0 | - | - | 0.9286 |
| 2.7378 | 20750 | - | 0.5548 | - |
| 2.7444 | 20800 | 0.5012 | - | - |
| 2.7510 | 20850 | 0.5294 | - | - |
| 2.7576 | 20900 | 0.4674 | - | - |
| 2.7642 | 20950 | 0.5436 | - | - |
| 2.7708 | 21000 | 0.4609 | - | - |
| 0 | 0 | - | - | 0.9269 |
| 2.7708 | 21000 | - | 0.5538 | - |
| 2.7774 | 21050 | 0.5015 | - | - |
| 2.7840 | 21100 | 0.5299 | - | - |
| 2.7906 | 21150 | 0.4363 | - | - |
| 2.7972 | 21200 | 0.5018 | - | - |
| 2.8038 | 21250 | 0.5079 | - | - |
| 0 | 0 | - | - | 0.9265 |
| 2.8038 | 21250 | - | 0.5549 | - |
| 2.8104 | 21300 | 0.4467 | - | - |
| 2.8170 | 21350 | 0.5769 | - | - |
| 2.8236 | 21400 | 0.5323 | - | - |
| 2.8302 | 21450 | 0.4714 | - | - |
| 2.8368 | 21500 | 0.4491 | - | - |
| 0 | 0 | - | - | 0.9257 |
| 2.8368 | 21500 | - | 0.5538 | - |
| 2.8434 | 21550 | 0.4801 | - | - |
| 2.8500 | 21600 | 0.5132 | - | - |
| 2.8566 | 21650 | 0.4542 | - | - |
| 2.8632 | 21700 | 0.5015 | - | - |
| 2.8698 | 21750 | 0.4818 | - | - |
| 0 | 0 | - | - | 0.9278 |
| 2.8698 | 21750 | - | 0.5554 | - |
| 2.8764 | 21800 | 0.5078 | - | - |
| 2.8830 | 21850 | 0.508 | - | - |
| 2.8896 | 21900 | 0.5331 | - | - |
| 2.8962 | 21950 | 0.5185 | - | - |
| 2.9028 | 22000 | 0.4469 | - | - |
| 0 | 0 | - | - | 0.9265 |
| 2.9028 | 22000 | - | 0.5551 | - |
| 2.9094 | 22050 | 0.4762 | - | - |
| 2.9160 | 22100 | 0.5799 | - | - |
| 2.9225 | 22150 | 0.4978 | - | - |
| 2.9291 | 22200 | 0.566 | - | - |
| 2.9357 | 22250 | 0.5837 | - | - |
| 0 | 0 | - | - | 0.9269 |
| 2.9357 | 22250 | - | 0.5532 | - |
| 2.9423 | 22300 | 0.5401 | - | - |
| 2.9489 | 22350 | 0.523 | - | - |
| 2.9555 | 22400 | 0.5913 | - | - |
| 2.9621 | 22450 | 0.4701 | - | - |
| 2.9687 | 22500 | 0.5568 | - | - |
| 0 | 0 | - | - | 0.9273 |
| 2.9687 | 22500 | - | 0.5529 | - |
| 2.9753 | 22550 | 0.5266 | - | - |
| 2.9819 | 22600 | 0.4969 | - | - |
| 2.9885 | 22650 | 0.4917 | - | - |
| 2.9951 | 22700 | 0.5128 | - | - |
</details>
### Framework Versions
- Python: 3.12.4
- Sentence Transformers: 4.0.2
- PyLate: 1.2.0
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084"
}
```
#### PyLate
```bibtex
@misc{PyLate,
title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
author={Chaffin, Antoine and Sourty, Raphaël},
url={https://github.com/lightonai/pylate},
year={2024}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
sgonzalezygil/sd-finetuning-dreambooth-v11
|
sgonzalezygil
| 2025-06-18T15:30:11Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-06-18T15:28:16Z |
---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eddieman78/litbank-coref-qwen-3-deepseek-8b-4000-64-1e4-4
|
eddieman78
| 2025-06-18T15:29:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-18T15:29:37Z |
---
base_model: unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit
library_name: transformers
model_name: litbank-coref-qwen-3-deepseek-8b-4000-64-1e4-4
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for litbank-coref-qwen-3-deepseek-8b-4000-64-1e4-4
This model is a fine-tuned version of [unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit](https://huggingface.co/unsloth/DeepSeek-R1-0528-Qwen3-8B-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="eddieman78/litbank-coref-qwen-3-deepseek-8b-4000-64-1e4-4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Sharing22/aaa_c6
|
Sharing22
| 2025-06-18T15:20:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T15:17:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sanchit42/qwen3-8B-instruct-29reports-lora256
|
sanchit42
| 2025-06-18T15:20:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-18T15:17:36Z |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Desalegnn/amharic-t5-LoRA-f
|
Desalegnn
| 2025-06-18T15:16:45Z | 73 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-05-27T08:27:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AnubhavSC/MAYA-PJ3
|
AnubhavSC
| 2025-06-18T15:13:52Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"license:mit",
"region:us"
] | null | 2025-06-18T14:26:39Z |
---
license: mit
tags:
- unsloth
---
|
Subsets and Splits
Filtered Qwen2.5 Distill Models
Identifies specific configurations of models by filtering cards that contain 'distill', 'qwen2.5', '7b' while excluding certain base models and incorrect model ID patterns, uncovering unique model variants.
Filtered Model Cards Count
Finds the count of entries with specific card details that include 'distill', 'qwen2.5', '7b' but exclude certain base models, revealing valuable insights about the dataset's content distribution.
Filtered Distill Qwen 7B Models
Filters for specific card entries containing 'distill', 'qwen', and '7b', excluding certain strings and patterns, to identify relevant model configurations.
Filtered Qwen-7b Model Cards
The query performs a detailed filtering based on specific keywords and excludes certain entries, which could be useful for identifying a specific subset of cards but does not provide deeper insights or trends.
Filtered Qwen 7B Model Cards
The query filters for specific terms related to "distilled" or "distill", "qwen", and "7b" in the 'card' column but excludes certain base models, providing a limited set of entries for further inspection.
Qwen 7B Distilled Models
The query provides a basic filtering of records to find specific card names that include keywords related to distilled Qwen 7b models, excluding a particular base model, which gives limited insight but helps in focusing on relevant entries.
Qwen 7B Distilled Model Cards
The query filters data based on specific keywords in the modelId and card fields, providing limited insight primarily useful for locating specific entries rather than revealing broad patterns or trends.
Qwen 7B Distilled Models
Finds all entries containing the terms 'distilled', 'qwen', and '7b' in a case-insensitive manner, providing a filtered set of records but without deeper analysis.
Distilled Qwen 7B Models
The query filters for specific model IDs containing 'distilled', 'qwen', and '7b', providing a basic retrieval of relevant entries but without deeper analysis or insight.
Filtered Model Cards with Distill Qwen2.
Filters and retrieves records containing specific keywords in the card description while excluding certain phrases, providing a basic count of relevant entries.
Filtered Model Cards with Distill Qwen 7
The query filters specific variations of card descriptions containing 'distill', 'qwen', and '7b' while excluding a particular base model, providing limited but specific data retrieval.
Distill Qwen 7B Model Cards
The query filters and retrieves rows where the 'card' column contains specific keywords ('distill', 'qwen', and '7b'), providing a basic filter result that can help in identifying specific entries.