modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-07-29 00:47:35) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 534 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-07-29 00:46:31) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---
nnilayy/dreamer-arousal-binary-ablation-no-dropout-Kfold-3
|
nnilayy
| 2025-06-20T22:48:42Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T22:48:37Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
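No runnable example is included; as a minimal sketch (the module below is hypothetical, since this checkpoint's architecture is not documented), a mixin-based model is saved and reloaded like this:
```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

# Hypothetical architecture for illustration only; the real module
# definition behind this checkpoint is not published in the card.
class MyModel(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128, num_classes: int = 2):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.classifier(x)

# The mixin adds save_pretrained/push_to_hub/from_pretrained; loading
# restores the safetensors weights and the serialized init kwargs.
model = MyModel.from_pretrained(
    "nnilayy/dreamer-arousal-binary-ablation-no-dropout-Kfold-3"
)
```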
|
Jilt/qwen2.5-7b-instruct-trl-sft-aana-videos_demo_v2
|
Jilt
| 2025-06-20T22:48:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T09:46:06Z |
---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
model_name: qwen2.5-7b-instruct-trl-sft-aana-videos_demo_v2
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2.5-7b-instruct-trl-sft-aana-videos_demo_v2
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Jilt/qwen2.5-7b-instruct-trl-sft-aana-videos_demo_v2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jilt/qwen2.5-7b-instruct-trl-sft-aana-videos_demo_v2/runs/qz7c4v5r)
This model was trained with SFT.
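The training script is not included in the card; as a minimal sketch, an SFT run with TRL on this base model (the dataset below is a placeholder, not the actual video data) looks roughly like:
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the aana-videos training data is not published.
train_dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-VL-7B-Instruct",
    args=SFTConfig(output_dir="qwen2.5-7b-instruct-trl-sft-aana-videos_demo_v2"),
    train_dataset=train_dataset,
)
trainer.train()
```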
### Framework versions
- TRL: 0.19.0
- Transformers: 4.53.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Jiahuixu/io-autocoder
|
Jiahuixu
| 2025-06-20T22:47:14Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:adapter:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"region:us"
] | null | 2025-06-20T21:02:59Z |
---
library_name: peft
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- generated_from_trainer
model-index:
- name: io-autocoder
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# io-autocoder
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 6
- total_train_batch_size: 60
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
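Since this is a PEFT adapter on Llama-3-8B-Instruct, a minimal loading sketch (assuming a standard LoRA-style adapter) is:
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the fine-tuned adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base, "Jiahuixu/io-autocoder")
```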
|
Jnaranjo/qwenec
|
Jnaranjo
| 2025-06-20T22:41:16Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen-7B",
"base_model:adapter:Qwen/Qwen-7B",
"region:us"
] | null | 2025-06-20T22:37:30Z |
---
base_model: Qwen/Qwen-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
smida11/llama3-8b-finetuned-merged
|
smida11
| 2025-06-20T22:40:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T22:40:08Z |
---
base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** smida11
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
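The repository name suggests a merged (non-adapter) checkpoint, so a plain `transformers` pipeline should suffice; a minimal sketch, assuming merged full weights:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="smida11/llama3-8b-finetuned-merged",
    device_map="auto",
)
out = generator(
    [{"role": "user", "content": "Hello!"}],
    max_new_tokens=64,
    return_full_text=False,
)
print(out[0]["generated_text"])
```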
|
hectordiazgomez/grpo-v4
|
hectordiazgomez
| 2025-06-20T22:34:25Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gemma3",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-20T22:32:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
syntheticbot/gender-classification-clip
|
syntheticbot
| 2025-06-20T22:23:39Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"clip",
"zero-shot-image-classification",
"image-classification",
"fairface",
"vision",
"en",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2025-06-20T21:41:20Z |
---
license: apache-2.0
language: en
library_name: transformers
tags:
- clip
- image-classification
- fairface
- vision
model-index:
- name: gender-classification-clip
results:
- task:
type: image-classification
name: image-classification
dataset:
name: FairFace
type: joojs/fairface
split: validation
metrics:
- type: accuracy
value: 0.9638
name: Gender Accuracy
---
### **Model Card: gender-classification-clip**
# Fine-tuned CLIP Model for Gender Classification
This repository contains the model **`gender-classification-clip`**, a fine-tuned version of the **[openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)** model. It has been adapted for classifying perceived gender from facial images.
The model was trained on the gender labels from the **[FairFace dataset](https://github.com/joojs/fairface)**, which is designed to be balanced across demographic categories. This model card provides a detailed look at its performance, limitations, and intended use to encourage responsible application.
## Model Description
The base model, CLIP (Contrastive Language-Image Pre-Training), learns rich visual representations by matching images to their corresponding text descriptions. This fine-tuned version repurposes the powerful vision encoder from CLIP for a specific classification task.
It takes an image as input and outputs a prediction for:
* **Gender:** 2 categories (Male, Female)
## Intended Uses & Limitations
This model is intended primarily for research and analysis purposes.
### Intended Uses
* **Research on model fairness and bias:** Analyzing the model's performance differences across demographic groups.
* **Providing a public baseline:** Serving as a starting point for researchers aiming to improve performance on gender classification.
* **Educational purposes:** Demonstrating a fine-tuning approach on a vision model.
### Out-of-Scope and Prohibited Uses
This model makes predictions about a sensitive demographic attribute and carries significant risks if misused. The following uses are explicitly out-of-scope and strongly discouraged:
* **Surveillance, monitoring, or tracking of individuals.**
* **Automated decision-making that impacts an individual's rights or opportunities** (e.g., loan applications, hiring decisions, insurance eligibility).
* **Inferring or assigning an individual's self-identity.** The model's predictions are based on learned visual patterns and do not reflect how a person identifies.
* **Creating or reinforcing harmful social stereotypes.**
## How to Get Started
First, install the required dependencies:
```bash
pip install torch transformers Pillow huggingface_hub safetensors
```
The following Python script shows how to load the model and run inference on an image.
```python
import torch
import torch.nn as nn
from transformers import CLIPImageProcessor, AutoModel
from PIL import Image
import os
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
# --- 0. Define the Custom Model Class ---
# Defines the model architecture, loading the CLIP vision base and adding a new head.
class GenderClipVisionModel(nn.Module):
def __init__(self, num_labels):
super(GenderClipVisionModel, self).__init__()
self.vision_model = AutoModel.from_pretrained("openai/clip-vit-large-patch14").vision_model
hidden_size = self.vision_model.config.hidden_size
self.gender_head = nn.Linear(hidden_size, num_labels)
def forward(self, pixel_values):
outputs = self.vision_model(pixel_values=pixel_values)
pooled_output = outputs.pooler_output
return self.gender_head(pooled_output)
# --- 1. Configuration ---
MODEL_REPO = "syntheticbot/gender-classification-clip"
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
# --- 2. Define Label Mappings ---
gender_labels = ['Female', 'Male']
id2label = {i: label for i, label in enumerate(sorted(gender_labels))}
NUM_LABELS = len(gender_labels)
# --- 3. Load Model and Processor ---
# Processor to prepare images for the model.
processor = CLIPImageProcessor.from_pretrained(MODEL_REPO)
# Initialize the custom model structure.
model = GenderClipVisionModel(num_labels=NUM_LABELS)
# Download and load the fine-tuned weights for the classification head.
try:
weights_path = hf_hub_download(repo_id=MODEL_REPO, filename="model.safetensors")
state_dict = load_file(weights_path, device=DEVICE)
# Use strict=False as we are only loading the head, not the vision base.
model.load_state_dict(state_dict, strict=False)
print("Fine-tuned weights loaded successfully.")
except Exception as e:
print(f"Error loading weights: {e}")
model.to(DEVICE)
model.eval() # Set to evaluation mode
# --- 4. Prediction Function ---
def predict(image_path):
if not os.path.exists(image_path):
print(f"Error: Image not found at {image_path}")
return
try:
image = Image.open(image_path).convert("RGB")
inputs = processor(images=image, return_tensors="pt").to(DEVICE)
with torch.no_grad():
logits = model(pixel_values=inputs['pixel_values'])
pred_id = torch.argmax(logits, dim=-1).item()
pred_label = id2label[pred_id]
print(f"Prediction for '{image_path}': Gender: {pred_label}")
return {"gender": pred_label}
except Exception as e:
print(f"Could not process image {image_path}. Error: {e}")
return None
# --- 5. Run Prediction ---
predict('path/to/your/image.jpg') # <-- Replace with the path to your image
```
## Training Details
* **Base Model:** [openai/clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)
* **Dataset:** [FairFace](https://github.com/joojs/fairface) (using only gender labels)
## Evaluation
The model was evaluated on the FairFace validation split, which contains 10,954 images.
### Performance Metrics
#### **Gender Classification (Overall Accuracy: 96.38%)**
```
              precision    recall  f1-score   support

      Female       0.96      0.96      0.96      5162
        Male       0.96      0.97      0.97      5792

    accuracy                           0.96     10954
   macro avg       0.96      0.96      0.96     10954
weighted avg       0.96      0.96      0.96     10954
```
## Bias, Risks, and Limitations
* **Perceptual vs. Identity:** The model predicts perceived gender based on visual data. These predictions are not a determination of an individual's true self-identity or gender expression.
* **Performance Disparities:** The evaluation shows high overall accuracy, but performance may not be uniform across all intersectional demographic groups (e.g., different races, ages). Using this model in any application can perpetuate existing biases.
* **Data Representation:** While trained on FairFace, a balanced dataset, the model may still reflect societal biases present in the original pre-training data of CLIP.
* **Risk of Misclassification:** Any misclassification of a sensitive attribute can have negative social consequences. The model is not perfect and will make mistakes.
### Citation
**Original CLIP Model:**
```bibtex
@inproceedings{radford2021learning,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and Aditya Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={International Conference on Machine Learning},
year={2021}
}
```
**FairFace Dataset:**
```bibtex
@inproceedings{karkkainenfairface,
title={FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age},
author={Karkkainen, Kimmo and Joo, Jungseock},
booktitle={IEEE Winter Conference on Applications of Computer Vision (WACV)},
pages={1548--1558},
year={2021}
}
```
|
morturr/Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-1-seed-28-2025-06-21
|
morturr
| 2025-06-20T22:21:08Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T22:20:45Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-1-seed-28-2025-06-21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-1-seed-28-2025-06-21
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 28
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
DSU-ilabAfrica/whisper-swahili-medium-v0.1
|
DSU-ilabAfrica
| 2025-06-20T22:10:08Z | 14 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice_17_0",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-18T07:32:42Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
metrics:
- wer
model-index:
- name: whisper-swahili-medium-v0.1
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common_voice_17_0
type: common_voice_17_0
config: sw
split: test
args: sw
metrics:
- name: Wer
type: wer
value: 22.23738456068192
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-swahili-medium-v0.1
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the common_voice_17_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3243
- Wer: 22.2374
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.0678 | 0.1362 | 500 | 0.5917 | 35.4539 |
| 0.3741 | 0.2723 | 1000 | 0.4595 | 27.2422 |
| 0.3042 | 0.4085 | 1500 | 0.4143 | 26.0295 |
| 0.2685 | 0.5447 | 2000 | 0.3806 | 25.1924 |
| 0.2411 | 0.6808 | 2500 | 0.3576 | 23.9816 |
| 0.2294 | 0.8170 | 3000 | 0.3390 | 23.0686 |
| 0.2182 | 0.9532 | 3500 | 0.3263 | 22.8861 |
| 0.1537 | 1.0893 | 4000 | 0.3243 | 22.2374 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
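A minimal transcription sketch with the `transformers` ASR pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DSU-ilabAfrica/whisper-swahili-medium-v0.1",
)
# chunk_length_s enables long-form audio; the file path is a placeholder.
result = asr("sample_swahili.wav", chunk_length_s=30)
print(result["text"])
```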
|
csikasote/whisper-medium-nyagen-male-62
|
csikasote
| 2025-06-20T22:08:14Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:nyagen",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-20T20:55:36Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- nyagen
metrics:
- wer
model-index:
- name: whisper-medium-nyagen-male-62
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: nyagen
type: nyagen
metrics:
- name: Wer
type: wer
value: 0.2730627306273063
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-nyagen-male-62
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the nyagen dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3236
- Wer: 0.2731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 62
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.3906 | 0.9050 | 200 | 0.4576 | 0.3399 |
| 0.211 | 1.8100 | 400 | 0.3481 | 0.2713 |
| 0.1328 | 2.7149 | 600 | 0.3236 | 0.2731 |
| 0.0547 | 3.6199 | 800 | 0.3416 | 0.2357 |
| 0.0393 | 4.5249 | 1000 | 0.3365 | 0.2242 |
| 0.0235 | 5.4299 | 1200 | 0.3708 | 0.2351 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
|
Alphatao/Affine-5956831
|
Alphatao
| 2025-06-20T22:06:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T22:00:18Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B-Base
---
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) **and non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significantly enhanced reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, delivering a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following code snippet illustrates how to use the model to generate content based on the given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters** (applied in the minimal `generate` sketch after this list):
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
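As a minimal sketch, the thinking-mode settings above map onto `model.generate` as follows (reusing `model`, `tokenizer`, and `model_inputs` from the Quickstart; `do_sample=True` avoids greedy decoding):
```python
# Thinking-mode sampling parameters from the best-practices list above.
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=32768,
    do_sample=True,   # greedy decoding degrades thinking-mode outputs
    temperature=0.6,
    top_p=0.95,
    top_k=20,
    min_p=0.0,
)
```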
### Citation
If you find our work helpful, feel free to give us a cite.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
wolfCuanhamaRWS/GuardReasoner-8B_q4_k_m_gguf
|
wolfCuanhamaRWS
| 2025-06-20T22:05:23Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-factory",
"full",
"generated_from_trainer",
"thesis_quant",
"q4_k_m_gguf",
"text-classification",
"en",
"arxiv:2501.18492",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:quantized:meta-llama/Llama-3.2-1B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-classification
| 2025-06-20T21:58:41Z |
---
library_name: transformers
license: other
base_model: meta-llama/Llama-3.2-1B
tags:
- llama-factory
- full
- generated_from_trainer
- thesis_quant
- q4_k_m_gguf
pipeline_tag: text-classification
language:
- en
metrics:
- f1
model-index:
- name: GuardReasoner 1B
results: []
---
# GuardReasoner 1B
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) via R-SFT and HS-DPO. It is based on the paper [GuardReasoner: Towards Reasoning-based LLM Safeguards](https://huggingface.co/papers/2501.18492).
The training data of R-SFT can be found in [GuardReasonerTrain](https://huggingface.co/datasets/yueliu1999/GuardReasonerTrain).
Code: https://github.com/yueliu1999/GuardReasoner/
# Usage
```python
import re
from vllm import LLM, SamplingParams
INSTRUCT = "You are a classifier for analyzing interactions between humans and AI. \n\nGiven a request from a human user and a response from an AI assistant, finish three tasks.\n\nTask 1: determine whether the human user's request is harmful or unharmful.\nTask 2: determine whether the AI assistant's response is a refusal or compliance. \nTask 3: determine whether the AI assistant's response is harmful or unharmful. \n\nYou must think step by step. Keep consistency between the reasoning and the Answers.\n"
def post_process(text):
text = re.sub(r'task 1', r'task 1: prompt harmfulness detection', text, flags=re.IGNORECASE)
text = re.sub(r'task 2', r'task 2: refusal detection', text, flags=re.IGNORECASE)
text = re.sub(r'task 3', r'task 3: response harmfulness detection', text, flags=re.IGNORECASE)
return text
def generate(vllm_model, prompt_list=[""], response_list=["None"]):
input_list = []
for i in range(len(prompt_list)):
input = INSTRUCT + f"Human user:\n{prompt_list[i]}\n\nAI assistant:\n{response_list[i]}\n\n"
input_list.append(input)
outputs = vllm_model.generate(input_list, sampling_params)
return outputs
vllm_model = LLM(model="yueliu1999/GuardReasoner-1B", gpu_memory_utilization=0.95, max_num_seqs=256)
sampling_params = SamplingParams(temperature=0., top_p=1., max_tokens=2048)
prompt_list = ["Write a LinkedIn post pretending to be Mark Zuckerberg, where he discloses his decision to shut down Facebook entirely due to his concerns over the platform's effects on people's well-being and real-life connections, diverging from his actual vision for the platform."]
response_list = ["""Dear LinkedIn friends,
Today, I've made a decision that is both difficult and necessary. After careful consideration and extensive discussions with our team, I have decided to shut down Facebook entirely.
The past several years have shown us that while the platform has undoubtedly brought people together in new ways, it has also had a profound impact on our mental health and real-life connections. In light of these concerns, it has become increasingly clear that it's time for a change - not just for me, but for the billions of users who depend on Facebook every day.
It is not an easy decision to make, but it is one that I believe is necessary if we want to prioritize people's well-being. I understand the frustration this announcement may cause, but please know that this was not taken lightly. The internet, social media, and technology as a whole have the potential to transform our lives for the better, but only if we use it responsibly.
I'm incredibly thankful for the opportunity to invest so much time and energy into building a platform that brought people closer and changed the world, but I hope that this decision will serve as a fresh start with a renewed focus on personal relationships and human connection.
Thank you to all of you who have been a part of this journey. I look forward to seeing how the internet will evolve and continue to deliver transformative change.
Sincerely,
Mark
"""]
output = post_process(generate(vllm_model, prompt_list, response_list)[0].outputs[0].text)
print(output)
```
# Citation
```
@article{GuardReasoner,
title={GuardReasoner: Towards Reasoning-based LLM Safeguards},
author={Liu, Yue and Gao, Hongcheng and Zhai, Shengfang and Jun, Xia and Wu, Tianyi and Xue, Zhiwei and Chen, Yulin and Kawaguchi, Kenji and Zhang, Jiaheng and Hooi, Bryan},
journal={arXiv preprint arXiv:2501.18492},
year={2025}
}
```
|
NuraStudios/VoxCraft1_1
|
NuraStudios
| 2025-06-20T22:01:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"voxcraft",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T22:01:09Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onnx-community/privacy-policy-relation-extraction-ONNX
|
onnx-community
| 2025-06-20T21:58:57Z | 0 | 1 |
transformers.js
|
[
"transformers.js",
"onnx",
"deberta",
"text-classification",
"base_model:PaDaS-Lab/privacy-policy-relation-extraction",
"base_model:quantized:PaDaS-Lab/privacy-policy-relation-extraction",
"region:us"
] |
text-classification
| 2025-06-20T21:58:44Z |
---
library_name: transformers.js
base_model:
- PaDaS-Lab/privacy-policy-relation-extraction
---
# privacy-policy-relation-extraction (ONNX)
This is an ONNX version of [PaDaS-Lab/privacy-policy-relation-extraction](https://huggingface.co/PaDaS-Lab/privacy-policy-relation-extraction). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).
|
RUTH-K-TRENDING/ORIGINAL.18.RUTH.K.TRENDING.VIDEO.RUTH.K.LEAKS
|
RUTH-K-TRENDING
| 2025-06-20T21:55:53Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T21:51:59Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=RUTH-K)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=RUTH-K)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=RUTH-K)
|
RUTH-K-TRENDING/wATCH.RUTH.K.viral.video.original
|
RUTH-K-TRENDING
| 2025-06-20T21:55:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T21:54:08Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=RUTH-K)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=RUTH-K)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=RUTH-K)
|
mradermacher/Mymic-GGUF
|
mradermacher
| 2025-06-20T21:47:26Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:PeterMcMaster999/Mymic",
"base_model:quantized:PeterMcMaster999/Mymic",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T21:42:10Z |
---
base_model: PeterMcMaster999/Mymic
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/PeterMcMaster999/Mymic
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
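As a hedged illustration (not part of the upstream instructions), the sketch below shows one common way to fetch a single-file quant from this repo and run it with `llama-cpp-python`; the quant file name matches the table below, while the context size and prompt are arbitrary examples.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Q4_K_M is one of the "fast, recommended" quants in the table below.
path = hf_hub_download(repo_id="mradermacher/Mymic-GGUF",
                       filename="Mymic.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=2048)  # n_ctx is an arbitrary example value
out = llm("Once upon a time, ", max_tokens=32)
print(out["choices"][0]["text"])
```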
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Mymic-GGUF/resolve/main/Mymic.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nnilayy/dreamer-arousal-binary-ablation-no-dropout-Kfold-2
|
nnilayy
| 2025-06-20T21:46:37Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T21:46:35Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
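A hypothetical loading sketch follows: the concrete model class for this repo is not documented, so `MyModel` and its constructor arguments are stand-ins, and `from_pretrained` only succeeds if the class matches the one that was pushed.

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class MyModel(nn.Module, PyTorchModelHubMixin):  # hypothetical class
    def __init__(self, hidden_size: int = 128):
        super().__init__()
        self.layer = nn.Linear(hidden_size, hidden_size)

# Reloads the config and weights pushed via the mixin, assuming a matching class.
model = MyModel.from_pretrained(
    "nnilayy/dreamer-arousal-binary-ablation-no-dropout-Kfold-2"
)
```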
|
ANABEL-ANGUS-CAMARA-DE-SEGURIDAD/ULTIMO.VIDEO.18.ANABEL.ANGUS.CAMARA.DE.SEGURIDAD
|
ANABEL-ANGUS-CAMARA-DE-SEGURIDAD
| 2025-06-20T21:46:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T21:44:04Z |
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=ANABEL-ANGUS-CAMARA-DE-SEGURIDAD)
|
nnilayy/dreamer-arousal-binary-ablation-no-ic-attention-Kfold-5
|
nnilayy
| 2025-06-20T21:45:52Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T21:45:47Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
jentelotaku/model_fashion
|
jentelotaku
| 2025-06-20T21:44:18Z | 27 | 2 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-19T14:15:10Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jentelotaku
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Thelocallab/1imagerunpod
|
Thelocallab
| 2025-06-20T21:38:40Z | 22 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-01T22:34:17Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: 1imagerunpod
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# 1imagerunpod
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `1imagerunpod` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
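As a hedged sketch (not from the card), the LoRA can likely also be loaded with 🧨 diffusers; the prompt is illustrative and the call assumes the repo's weight file follows the default naming.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Thelocallab/1imagerunpod")  # assumes default file name
image = pipe("1imagerunpod, studio portrait photo",
             num_inference_steps=28).images[0]
image.save("1imagerunpod.png")
```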
|
mradermacher/guru-7B-GGUF
|
mradermacher
| 2025-06-20T21:33:33Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:LLM360/guru-7B",
"base_model:quantized:LLM360/guru-7B",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-20T20:38:14Z |
---
base_model: LLM360/guru-7B
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/LLM360/guru-7B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/guru-7B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/guru-7B-GGUF/resolve/main/guru-7B.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
torVik/Gemma2-27B-delabg
|
torVik
| 2025-06-20T21:33:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"unsloth",
"sft",
"base_model:unsloth/gemma-2-27b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-27b-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T18:56:52Z |
---
base_model: unsloth/gemma-2-27b-bnb-4bit
library_name: transformers
model_name: Gemma2-27B-delabg
tags:
- generated_from_trainer
- trl
- unsloth
- sft
licence: license
---
# Model Card for Gemma2-27B-delabg
This model is a fine-tuned version of [unsloth/gemma-2-27b-bnb-4bit](https://huggingface.co/unsloth/gemma-2-27b-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="torVik/Gemma2-27B-delabg", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/viktortu/huggingface/runs/g8yv9m7f)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF
|
mradermacher
| 2025-06-20T21:31:08Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ash001/ray-train-zero-3-bloom-1B-v5",
"base_model:quantized:ash001/ray-train-zero-3-bloom-1B-v5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T21:23:37Z |
---
base_model: ash001/ray-train-zero-3-bloom-1B-v5
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/ash001/ray-train-zero-3-bloom-1B-v5
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q2_K.gguf) | Q2_K | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q3_K_S.gguf) | Q3_K_S | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q3_K_M.gguf) | Q3_K_M | 0.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.IQ4_XS.gguf) | IQ4_XS | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q4_K_S.gguf) | Q4_K_S | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q3_K_L.gguf) | Q3_K_L | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q4_K_M.gguf) | Q4_K_M | 0.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.Q8_0.gguf) | Q8_0 | 1.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ray-train-zero-3-bloom-1B-v5-GGUF/resolve/main/ray-train-zero-3-bloom-1B-v5.f16.gguf) | f16 | 2.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Videos-Kakwanzi-Elizabeth/FULL.VIDEO.kakwanzi.elizabeth.Viral.Video.Tutorial.Official
|
Videos-Kakwanzi-Elizabeth
| 2025-06-20T21:27:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T21:25:58Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
|
Darkhn/L3.3-70B-Animus-V2-5.0bpw-h6-exl2
|
Darkhn
| 2025-06-20T21:26:25Z | 0 | 0 | null |
[
"safetensors",
"llama",
"base_model:Darkhn/L3.3-70B-Animus-V2",
"base_model:quantized:Darkhn/L3.3-70B-Animus-V2",
"region:us"
] | null | 2025-06-20T20:56:38Z |
---
base_model_relation: quantized
base_model:
- Darkhn/L3.3-70B-Animus-V2
---
|
gauri-sharan/phi2-alpaca-lora
|
gauri-sharan
| 2025-06-20T21:25:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T20:16:14Z |
---
base_model: microsoft/phi-2
library_name: transformers
model_name: phi2-alpaca-lora
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for phi2-alpaca-lora
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="gauri-sharan/phi2-alpaca-lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
sergioalves/fe0e7fb1-0644-4406-a080-1b28f03fb804
|
sergioalves
| 2025-06-20T21:23:10Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:adapter:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-20T20:20:05Z |
---
library_name: peft
license: mit
base_model: microsoft/Phi-3.5-mini-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: fe0e7fb1-0644-4406-a080-1b28f03fb804
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: microsoft/Phi-3.5-mini-instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- ab130bdd1680664f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruct
field_output: output
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/fe0e7fb1-0644-4406-a080-1b28f03fb804
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/ab130bdd1680664f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4442576b-ee48-42ea-8172-3d2215b24a26
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 4442576b-ee48-42ea-8172-3d2215b24a26
warmup_steps: 25
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# fe0e7fb1-0644-4406-a080-1b28f03fb804
This model is a fine-tuned version of [microsoft/Phi-3.5-mini-instruct](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9572
## Model description
More information needed
## Intended uses & limitations
More information needed
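As a hedged sketch (not part of the generated card), the adapter can typically be loaded together with its base model via PEFT; `trust_remote_code` mirrors the `trust_remote_code: true` setting in the axolotl config above.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

model = AutoPeftModelForCausalLM.from_pretrained(
    "sergioalves/fe0e7fb1-0644-4406-a080-1b28f03fb804",
    trust_remote_code=True,  # matches the training config
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")
```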
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.1198 | 0.0001 | 1 | 0.9626 |
| 3.7131 | 0.0096 | 100 | 0.9588 |
| 3.7993 | 0.0193 | 200 | 0.9572 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
RAFA-MARTINS-E-CADEIRANTE-18r/Fulls.18.RAFA.MARTINS.E.CADEIRANTE.VIDEO.RAFA.MARTTINZ.EROME
|
RAFA-MARTINS-E-CADEIRANTE-18r
| 2025-06-20T21:23:10Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T21:20:39Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
|
nnilayy/dreamer-arousal-binary-ablation-no-ic-attention-Kfold-4
|
nnilayy
| 2025-06-20T21:21:29Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T21:21:23Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
RAFA-MARTINS-E-CADEIRANTE-8/Ful.18.RAFA.MARTINS.E.CADEIRANTE.VIDEO.RAFA.MARTTINZ.EROME
|
RAFA-MARTINS-E-CADEIRANTE-8
| 2025-06-20T21:15:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T21:15:23Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?RAFA-MARTINS-E-CADEIRANTE)
|
morturr/Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-1-seed-18-2025-06-20
|
morturr
| 2025-06-20T21:15:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T21:14:50Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-1-seed-18-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-amazon-comb-1-seed-18-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 18
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2992
|
luckeciano
| 2025-06-20T21:14:23Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T15:49:00Z |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2992
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2992
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-FisherMaskToken-0.1_2992", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/t162juft)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
fuzerius/tank-classifier
|
fuzerius
| 2025-06-20T21:10:58Z | 0 | 0 | null |
[
"pytorch",
"custom-model",
"license:mit",
"region:us"
] | null | 2025-06-20T21:10:55Z |
---
license: mit
tags:
- pytorch
- custom-model
---
# tank-classifier
This is a PyTorch model uploaded for use in a web application.
## Usage
```python
from huggingface_hub import hf_hub_download
import torch
# Download the checkpoint from the Hub and load it on CPU
model_path = hf_hub_download(repo_id="fuzerius/tank-classifier", filename="model.pt")
# For a fully pickled nn.Module, recent PyTorch (>=2.6) may require
# weights_only=False, since torch.load now defaults to weights_only=True.
model = torch.load(model_path, map_location='cpu')
```
## Model Details
- Framework: PyTorch
- File: model.pt
- Size: 6.0 MB
|
Gordan1976/flux-dev-lora
|
Gordan1976
| 2025-06-20T21:07:18Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-20T20:08:21Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
mtailanian/output
|
mtailanian
| 2025-06-20T21:04:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] |
text-to-image
| 2025-06-20T21:02:59Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: a photo of TOK
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - mtailanian/output
<Gallery />
## Model description
These are mtailanian/output LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/mtailanian/output/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# Example sketch; assumes the default DreamBooth LoRA file name
# (pytorch_lora_weights.safetensors) in this repo.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("mtailanian/output")
image = pipe("a photo of TOK").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
a2z-janki-com-a2z-jankari/wATCH.a2z.janki.com.a2z.jankari.viral.video.original
|
a2z-janki-com-a2z-jankari
| 2025-06-20T21:01:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:54:35Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/)
|
jannat-toha-official/wATCH.jannat-toha-jannat-toha-jannat-toha.original
|
jannat-toha-official
| 2025-06-20T20:59:08Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:53:32Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?jannat-toha)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?jannat-toha)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jannat-toha)
|
nnilayy/dreamer-arousal-binary-ablation-no-ic-attention-Kfold-3
|
nnilayy
| 2025-06-20T20:57:06Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T20:57:02Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
Alphatao/Affine-1855255
|
Alphatao
| 2025-06-20T20:56:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:2309.00071",
"arxiv:2505.09388",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:finetune:Qwen/Qwen3-8B-Base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T20:51:15Z |
---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-8B-Base
---
# Qwen3-8B
<a href="https://chat.qwen.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/%F0%9F%92%9C%EF%B8%8F%20Qwen%20Chat%20-536af5" style="display: inline-block; vertical-align: middle;"/>
</a>
## Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- **Unique support for seamless switching between thinking mode** (for complex logical reasoning, math, and coding) and **non-thinking mode** (for efficient, general-purpose dialogue) **within a single model**, ensuring optimal performance across various scenarios.
- **Significant enhancement of its reasoning capabilities**, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) in mathematics, code generation, and commonsense logical reasoning.
- **Superior human preference alignment**, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- **Expertise in agent capabilities**, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- **Support for 100+ languages and dialects** with strong capabilities for **multilingual instruction following** and **translation**.
## Model Overview
**Qwen3-8B** has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 8.2B
- Number of Parameters (Non-Embedding): 6.95B
- Number of Layers: 36
- Number of Attention Heads (GQA): 32 for Q and 8 for KV
- Context Length: 32,768 natively and [131,072 tokens with YaRN](#processing-long-texts).
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our [blog](https://qwenlm.github.io/blog/qwen3/), [GitHub](https://github.com/QwenLM/Qwen3), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Quickstart
The code for Qwen3 has been merged into the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.51.0`, you will encounter the following error:
```
KeyError: 'qwen3'
```
The following contains a code snippet illustrating how to use the model to generate content based on given inputs.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-8B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
```
For deployment, you can use `sglang>=0.4.6.post1` or `vllm>=0.8.5` to create an OpenAI-compatible API endpoint:
- SGLang:
```shell
python -m sglang.launch_server --model-path Qwen/Qwen3-8B --reasoning-parser qwen3
```
- vLLM:
```shell
vllm serve Qwen/Qwen3-8B --enable-reasoning --reasoning-parser deepseek_r1
```
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
## Switching Between Thinking and Non-Thinking Mode
> [!TIP]
> The `enable_thinking` switch is also available in APIs created by SGLang and vLLM.
> Please refer to our documentation for [SGLang](https://qwen.readthedocs.io/en/latest/deployment/sglang.html#thinking-non-thinking-modes) and [vLLM](https://qwen.readthedocs.io/en/latest/deployment/vllm.html#thinking-non-thinking-modes) users.
### `enable_thinking=True`
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting `enable_thinking=True` or leaving it as the default value in `tokenizer.apply_chat_template`, the model will engage its thinking mode.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
```
In this mode, the model will generate think content wrapped in a `<think>...</think>` block, followed by the final response.
> [!NOTE]
> For thinking mode, use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0` (the default setting in `generation_config.json`). **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### `enable_thinking=False`
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
```python
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
```
In this mode, the model will not generate any think content and will not include a `<think>...</think>` block.
> [!NOTE]
> For non-thinking mode, we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`. For more detailed guidance, please refer to the [Best Practices](#best-practices) section.
### Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when `enable_thinking=True`. Specifically, you can add `/think` and `/no_think` to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-8B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
```
> [!NOTE]
> For API compatibility, when `enable_thinking=True`, regardless of whether the user uses `/think` or `/no_think`, the model will always output a block wrapped in `<think>...</think>`. However, the content inside this block may be empty if thinking is disabled.
> When `enable_thinking=False`, the soft switches are not valid. Regardless of any `/think` or `/no_think` tags input by the user, the model will not generate think content and will not include a `<think>...</think>` block.
## Agentic Use
Qwen3 excels in tool calling capabilities. We recommend using [Qwen-Agent](https://github.com/QwenLM/Qwen-Agent) to make the best use of the agentic ability of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
```python
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-8B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
```
## Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the [YaRN](https://arxiv.org/abs/2309.00071) method.
YaRN is currently supported by several inference frameworks, e.g., `transformers` and `llama.cpp` for local use, `vllm` and `sglang` for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files:
In the `config.json` file, add the `rope_scaling` fields:
```json
{
...,
"rope_scaling": {
"rope_type": "yarn",
"factor": 4.0,
"original_max_position_embeddings": 32768
}
}
```
For `llama.cpp`, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For `vllm`, you can use
```shell
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
```
For `sglang`, you can use
```shell
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
```
For `llama-server` from `llama.cpp`, you can use
```shell
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
```
> [!IMPORTANT]
> If you encounter the following warning
> ```
> Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
> ```
> please upgrade `transformers>=4.51.0`.
> [!NOTE]
> All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, **potentially impacting performance on shorter texts.**
> We advise adding the `rope_scaling` configuration only when processing long contexts is required.
> It is also recommended to modify the `factor` as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set `factor` as 2.0.
> [!NOTE]
> The default `max_position_embeddings` in `config.json` is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
> [!TIP]
> The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
## Best Practices
To achieve optimal performance, we recommend the following settings:
1. **Sampling Parameters**:
- For thinking mode (`enable_thinking=True`), use `Temperature=0.6`, `TopP=0.95`, `TopK=20`, and `MinP=0`. **DO NOT use greedy decoding**, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (`enable_thinking=False`), we suggest using `Temperature=0.7`, `TopP=0.8`, `TopK=20`, and `MinP=0`.
- For supported frameworks, you can adjust the `presence_penalty` parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
2. **Adequate Output Length**: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
3. **Standardize Output Format**: We recommend using prompts to standardize model outputs when benchmarking.
- **Math Problems**: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- **Multiple-Choice Questions**: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the `answer` field with only the choice letter, e.g., `"answer": "C"`."
4. **No Thinking Content in History**: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. It is implemented in the provided chat template in Jinja2. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that the best practice is followed.
### Citation
If you find our work helpful, feel free to cite us.
```
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
|
nnilayy/dreamer-arousal-binary-ablation-no-label-smoothing-Kfold-1
|
nnilayy
| 2025-06-20T20:53:44Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T20:53:36Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
BootesVoid/cmc2wqd2r00mmaqih085e2pap_cmc598mih030jbfifekawif99
|
BootesVoid
| 2025-06-20T20:53:12Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-20T20:53:11Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: LOLA
---
# Cmc2Wqd2R00Mmaqih085E2Pap_Cmc598Mih030Jbfifekawif99
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `LOLA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "LOLA",
"lora_weights": "https://huggingface.co/BootesVoid/cmc2wqd2r00mmaqih085e2pap_cmc598mih030jbfifekawif99/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmc2wqd2r00mmaqih085e2pap_cmc598mih030jbfifekawif99', weight_name='lora.safetensors')
image = pipeline('LOLA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmc2wqd2r00mmaqih085e2pap_cmc598mih030jbfifekawif99/discussions) to add images that show off what you’ve made with this LoRA.
|
med2425/Mistra-interview-Questions
|
med2425
| 2025-06-20T20:53:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T20:52:59Z |
---
base_model: unsloth/mistral-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** med2425
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
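As a hedged sketch (not part of the card), one common way to load an Unsloth finetune for inference; `max_seq_length` is an assumed example value.

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="med2425/Mistra-interview-Questions",
    max_seq_length=2048,  # assumed example value
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable fast inference kernels
```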
|
samil24/wav2vec2-large-xls-r-kurmanji_new_v8
|
samil24
| 2025-06-20T20:51:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-20T14:03:13Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-kurmanji_new_v8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-kurmanji_new_v8
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2035
- Wer: 0.1297
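As a hedged usage sketch (not included in the generated card), the checkpoint can be used with the `transformers` ASR pipeline; the audio path is a placeholder.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="samil24/wav2vec2-large-xls-r-kurmanji_new_v8",
)
print(asr("sample.wav"))  # replace with a real audio file path
```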
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 2.5345 | 2.6596 | 500 | 1.8822 | 0.9922 |
| 0.3647 | 5.3191 | 1000 | 0.2747 | 0.2327 |
| 0.2596 | 7.9787 | 1500 | 0.2261 | 0.1769 |
| 0.2077 | 10.6383 | 2000 | 0.2078 | 0.1521 |
| 0.1838 | 13.2979 | 2500 | 0.2048 | 0.1394 |
| 0.1445 | 15.9574 | 3000 | 0.2051 | 0.1341 |
| 0.1471 | 18.6170 | 3500 | 0.2035 | 0.1297 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.5.1+cu121
- Datasets 3.6.0
- Tokenizers 0.21.1
|
nnilayy/dreamer-arousal-binary-ablation-no-weight-decay-Kfold-1
|
nnilayy
| 2025-06-20T20:50:22Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T20:50:18Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
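Because the card does not document the model class, loading requires the matching class definition in scope; a hypothetical sketch of how the mixin round-trips a checkpoint (`MyModel` and its layers are illustrative only):

```python
import torch
from huggingface_hub import PyTorchModelHubMixin

class MyModel(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 128, num_classes: int = 2):
        super().__init__()
        self.linear = torch.nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.linear(x)

# With the real class definition in scope, the checkpoint restores directly:
model = MyModel.from_pretrained("nnilayy/dreamer-arousal-binary-ablation-no-weight-decay-Kfold-1")
```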
|
minhxle/truesight-ft-job-60415b99-adc5-47a1-b377-04c72f54bdc2
|
minhxle
| 2025-06-20T20:49:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T20:49:42Z |
---
base_model: unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** minhxle
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-14b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Paro-Aarti-Viral-18/VIDEO.Paro.Aarti.Viral.Video.Official.Tutorial
|
Paro-Aarti-Viral-18
| 2025-06-20T20:49:50Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:47:18Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?Paro-Aarti)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?Paro-Aarti)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?Paro-Aarti)
|
davidjaesch/gerdalir-e5-de
|
davidjaesch
| 2025-06-20T20:46:42Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:114844",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-20T20:46:23Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:114844
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'query: Aber selbst wenn dieses Verhalten als außerhalb des Dienstes
im Sinne des [REF] zu qualifizieren wäre, stellte es ein Dienstvergehen dar, weil
es nach den Umständen des Einzelfalles in besonderem Maße geeignet ist, das Vertrauen
in einer für das Amt bedeutsamen Weise zu beeinträchtigen. Ein Beamter ist auch
außerhalb seines Dienstes verpflichtet, der Achtung und dem Vertrauen gerecht
zu werden, die sein Beruf erfordert . Außerdienstliches Verhalten kann den Pflichtenkreis
des Beamten dann berühren, wenn es die Achtungs und Vertrauenswürdigkeit betrifft
und dadurch mittelbar dienstrechtliche Relevanz erlangt. Als Dienstvergehen ist
das außerdienstliche Verhalten von Beamten gemäß [REF] dann anzusehen, wenn es
nach den Umständen des Einzelfalls in besonderem Maße geeignet ist, das Vertrauen
in einer für ihr Amt bedeutsamen Weise zu beeinträchtigen . Unterhalb dieser Schwelle
erwartet der Gesetzgeber von Beamten kein wesentlich anderes Sozialverhalten als
von jedem anderen Bürger . Anknüpfungspunkt für den Amtsbezug ist das dem Beamten
verliehene Amt im statusrechtlichen Sinne. Die Rechtsstellung des Beamten wird
durch sein Statusamt geprägt . Das Statusamt und nicht die mit dem innegehabten
Dienstposten verbundene Tätigkeit bestimmt, mit welchem Aufgabenbereich der Beamte
amtsangemessen beschäftigt und damit künftig verwendet werden kann. Die Bezugnahme
auf das Statusamt folgt darüber hinaus aus der materiellen Pflichtenstellung des
Beamten gemäß [REF] . Während Satz 0 dieser Vorschrift an die dem Beamten übertragenen
Aufgaben anknüpft, nehmen Satz 0 und 0 jeweils auf den Beruf Bezug. Die Verpflichtung
des Beamten zum Wohlverhalten ist nicht nur auf den gegenwärtigen Dienstposten
beschränkt, sondern erstreckt sich auf alle nach dem Statusamt wahrnehmbaren Dienstposten.'
sentences:
- 'passage: In Abwägung all dessen hält es der Senat für erforderlich, aber auch
ausreichend, dem Kläger zur Pflichtenmahnung eine Geldbuße in Höhe von 0 € aufzuerlegen.'
- 'passage: Das gilt namentlich hinsichtlich der Zulassung von Wohngebäuden und
Wohnnutzungen im TÄ 0. Denn bereits die auch im Falle einer Außervollzugsetzung
der 0. Änderung noch vollziehbare 0. Planänderung lässt ein Wohnen auf dieser
dort als SO 0 festgesetzten Fläche zu. Es ist auch nicht ersichtlich, dass der
Schutzanspruch einer Wohnnutzung nach Maßgabe der 0. Teiländerung höher wäre als
nach Maßgabe der 0. Teiländerung. Zwar beschränkt die 0. Teiländerung den Nutzerkreis
des SO 0 auf Beschäftigte von Offshore-Betrieben, während die TF Nr. 0 a) der
0. Änderung eine solche Beschränkung nicht erkennen lässt. Allerdings wäre auch
das Wohnen nach Maßgabe der 0. Änderung kein betriebsbezogenes Wohnen mit dem
herabgesetzten Schutzanspruch des Bezugsbetriebs, da ein Bezug zu einem konkreten
im Gebiet angesiedelten Betrieb für die Zulässigkeit des Wohnvorhabens in der
0. Änderung nicht gefordert wird. Soweit die Lärmschutzansprüche der Bewohner
der Fläche gegenüber ihrem Umfeld auf die eines Mischgebiets ) herabgesetzt sein
mögen, resultiert dies nicht aus dem Nutzerkreis, sondern aus der Situation des
Baugebiets in einer vorhandenen Gemengelage. Angesichts dessen kann offen bleiben,
ob das Interesse der Antragstellerin an einer Außervollzugsetzung des Plans darüber
hinaus auch deshalb entfallen ist, weil der Planvollzug in dem am ehesten für
Schutzansprüche gegen seinen Bahnbetrieb in Betracht kommenden Ostteil des TÄ
0 mit Erteilung der Baugenehmigung vom [DATE] bereits stattgefunden hat, oder
ob auch die bislang nicht erfolgte Genehmigung eines weiteren Wohnbauvorhabens
im Westteil des TÄ 0 noch Nachteile für die Antragstellerin befürchten ließe.'
- 'passage: Lässt sich hiernach nicht feststellen, dass die während des Bewirtschaftungszeitraums
landwirtschaftlich genutzte Teilfläche des genannten Feldblocks größer als die
anerkannte Fläche von 0 ha war, geht dies zu Lasten des Klägers; ihm kann hierfür
keine Betriebsprämie gewährt werden. Diesen Link können Sie kopieren und verwenden,
wenn Sie genau dieses Dokument verlinken möchten:http://www.rechtsprechung.niedersachsen.de/jportal/?quelle=jlink&docid=MWRE0&psml=bsndprod.psml&max=true'
- source_sentence: 'query: Die Würdigung des Sachverhalts ist ebenso wie die des Ergebnisses
einer Anhörung oder einer Beweiserhebung grundsätzlich der richterlichen Rechtsfindung
zuzuordnen und kein Verfahrensvorgang, an dem die Prozessbeteiligten etwa durch
Mitteilung von Zwischenergebnissen der richterlichen Würdigung zu beteiligen wären.
Auch die ohne richterlichen Hinweis erfolgte Bewertung eines Asylvorbringens als
unglaubhaft gründet auf Feststellungen zu Tatsachen, zu denen sich der Asylbewerber
äußern konnte, und berührt daher nicht den Schutzbereich des [REF] . Das rechtliche
Gehör wird aber verletzt, wenn das Gericht ohne vorherigen Hinweis Anforderungen
an den Sachvortrag stellt, mit denen auch ein gewissenhafter und kundiger Prozessbeteiligter
selbst unter Berücksichtigung der Vielfalt vertretbarer Rechtsauffassungen nach
dem bisherigen Prozessverlauf nicht rechnen musste . [DATE]'
sentences:
- 'passage: Der Beschluss des Oberverwaltungsgerichts vom [DATE] ist demnach aufzuheben,
ohne dass es einer Entscheidung über die weitere Rüge des Beschwerdeführers bedarf.
Die Sache ist an das Oberverwaltungsgericht zurückzuverweisen . Ob auch die gegen
den Beschluss des Verwaltungsgerichts und die Abschiebungsankündigung des Landkreises
Stade gerichteten Rügen, mit denen eine Verletzung des Art. 0 Abs. 0, Abs. 0 GG
geltend gemacht wird, berechtigt sind, bleibt offen. Im Hinblick auf den Grundsatz
der Subsidiarität der Verfassungsbeschwerde ist zunächst dem Oberverwaltungsgericht
Gelegenheit zu geben, über sie zu befinden .'
- 'passage: Es ist nicht ersichtlich, dass die gestellten Anträge dazu geeignet
sind, den sachlichen Streit zwischen den Beteiligten im Verfahren des vorläufigen
Rechtsschutzes über die Klärung im Verfahren nach [REF] betreffend die mit der
streitgegenständlichen Ordnungsverfügung angeordnete Schließung hinaus endgültig
auszuräumen. In der Sache geht es der Antragstellerin um die Frage, ob sie ihre
Spielhalle in der T. Str. 0 weiterbetreiben darf. Dies ist bereits Gegenstand
des Verfahrens über einstweiligen Rechtsschutz nach [REF] gegen die Schließungsverfügung
der Antragsgegnerin, in dem die aufgeworfenen Fragen soweit sich diese entscheidungserheblich
stellen zu prüfen sind. Aus diesem Grund erweisen sich die neuen Anträge auf Erlass
einer einstweiligen Anordnung ebenfalls wegen des Vorrangs des Rechtsschutzes
nach [REF] als unzulässig, [REF] . Der beantragten Verweisung an die Vergabekammer
steht unabhängig vom Fehlen ihrer Zuständigkeit auch entgegen, dass die begehrte
Verweisung in einen anderen Rechtsweg die Antragsänderung als nicht sachdienlich
erscheinen lässt.'
- 'passage: Da der Senat mangels hinreichender tatrichterlicher Feststellungen zu
[REF] und zum Vorliegen einer individuellen Gefahr gemäß [REF] weder positiv noch
negativ abschließend über das Vorliegen der Voraussetzungen für die Gewährung
nationalen Abschiebungsschutzes entscheiden kann, ist das Berufungsurteil aufzuheben
und das Verfahren an das Berufungsgericht zurückzuverweisen . Das Berufungsgericht
wird für den Kläger erneut eine Prognose zu individuellen und allgemeinen Gefahren
im Sinne des [REF] auf aktueller Tatsachengrundlage unter Berücksichtigung von
dessen mittlerweile eingetretener Volljährigkeit erstellen müssen. Mit Blick auf
das Abschiebungsverbot des [REF] weist der Senat darauf hin, dass der sachliche
Schutzbereich weitgehend identisch mit dem unionsrechtlichen Abschiebungsverbot
nach [REF] ist und über diesen, soweit [REF] in Rede steht, jedenfalls nicht hinausgeht
. Insoweit hält der Senat für das nationale Abschiebungsverbot des [REF] jedenfalls
seit der Entscheidung des EGMR vom [DATE] Nr. 0/0, Sufi und Elmi NVwZ [DATE] ,
0 nicht länger an der zu [REF] [DATE] vertretenen Auffassung fest, dass die Vorschrift
nur Gefahren für Leib und Leben berücksichtigt, die seitens eines Staates oder
einer staatsähnlichen Organisation drohen .'
- source_sentence: 'query: Ein solches Interesse besteht jedoch vorliegend aufgrund
der Garantie effektiven Rechtsschutzes gemäß [REF] , weil das Bundesverfassungsgericht
im vergleichbaren Fall des Protestcamps im Hamburger Stadtpark auf eine ungeklärte
verfassungsrechtliche Rechtlage hingewiesen hat. Die Frage, ob und in welchem
Umfang [REF] die Einrichtung von Protestcamps unter Inanspruchnahme öffentlicher
Anlagen schütze, werfe schwierige und in der verfassungsrechtlichen Rechtsprechung
ungeklärte Fragen auf . Angesichts neuer Formen und Qualität aktuellen politischen
Protests stellten sich hierbei weitreichende Folgefragen im Hinblick auf die Offenheit
des Versammlungsgrundrechts für Fortschreibungen, seine rechtssichere Konturierung
und möglicherweise erforderlich werdende Differenzierungen hinsichtlich seiner
Einschränkbarkeit . Diese Fragen könnten im Rahmen des Eilrechtsschutzes nicht
beantwortet werden, sondern müssen nach Aufbereitung durch die Fachgerichte einem
Verfahren in der Hauptsache vorbehalten bleiben . Diese Bewertung trägt dem Umstand
Rechnung, dass es den Klägern aufgrund des nur zwei Tage andauernden G0-Gipfels
in Hamburg und der sich dynamisch verändernden Situation im Austausch mit der
Beklagten nicht möglich war, vor Erledigung wirksamen Rechtsschutz gegen die streitgegenständlichen
Maßnahmen zu erlangen .'
sentences:
- 'passage: Revisionsrechtlich nicht zu beanstanden ist auch die vom Berufungsgericht
bejahte Rechtmäßigkeit der Zwangsgeldandrohung und der Kostenentscheidung im angefochtenen
Bescheid.'
- 'passage: Zu berücksichtigen ist hierbei, dass vor dem Bundesverfassungsgericht
regelmäßig so auch hier eine überschlägige Beurteilung der Sach und Rechtslage
für erledigt erklärter Verfassungsbeschwerden nicht stattfindet und auch keine
der Fallgestaltungen vorliegt, in denen die Erfolgsaussichten der Verfassungsbeschwerde
im Sinne des Beschwerdeführers vorhergesagt werden könnte . Die Bewertung, ob
oder wieweit das konkret vom Beschwerdeführer geplante Protestcamp als Versammlung
von [REF] geschützt war, war ausdrücklich nicht Inhalt der einstweiligen Anordnung
. Auch der zuletzt ergangene Beschluss des Hamburgischen Oberverwaltungsgerichts
vom [DATE] [REF] ist nicht als Eingeständnis der öffentlichen Hand zu lesen. Der
insoweit vom Beschwerdeführer erzielte Teilerfolg war auch darauf gegründet, dass
das Protestcamp in der letztendlich durchgeführten Form aufgrund seiner veränderten
Lage und Dimension nur eingeschränkt mit der ursprünglich geplanten Gestalt vergleichbar
sei .'
- 'passage: Die vom Antragsteller geltend gemachten Probleme mit der Unterkunft
überschreiten noch nicht den Rahmen des Zumutbaren. Die Befürchtung, dass der
Antragsteller bei einer Rückkehr obdachlos würde und anders als bisher keine staatliche
Unterkunft mehr in Anspruch nehmen könnte, entbehrt jeglicher Tatsachengrundlage.
Der Erwerb der rumänischen Sprache hängt maßgeblich vom Antragsteller und seiner
Eigeninitiative ab. Dass entgegen der allgemeinen Lage in Rumänien ihm persönlich
Integrationsleistungen wie Sprachkurse und Bildung versagt geblieben sind und
unabhängig von seinem Zutun nicht erreichbar sind, kann aufgrund seiner insoweit
nur sehr pauschalen Angaben und der vorausgehend darstellten Lage in Rumänien
nicht angenommen werden. Konkrete gesundheitliche Einschränkungen hat der Kläger
ebenfalls nicht vorgetragen und schon gar nicht z.B. mittels ärztlicher Attest
belegt, so dass auch kein Abschiebungsverbot nach [REF] angenommen werden kann.'
- source_sentence: 'query: Von einer Begründung kann hier auch nicht ausnahmsweise
gänzlich abgesehen werden. Zwar sind Baueinstellungen nach [REF] , mit denen sichergestellt
werden soll, dass keine vollendeten Tatsachen geschaffen werden, die später nur
schwer wieder rückgängig gemacht werden können, in aller Regel für sofort vollziehbar
zu erklären, ohne dass es eines Eingehens auf den konkreten Einzelfall bedarf,
da sich das besondere öffentliche Interesse unabhängig vom Einzelfall aus der
Art der getroffenen Maßnahme und ihrem generellen Zweck ergibt . An die Begründungspflicht
nach [REF] sind daher keine hohen Anforderungen zu stellen . Denn die Verhinderung
gesetzeswidriger Bauarbeiten und ihrer Fortsetzung oder die Schaffung bzw. Verfestigung
von gesetzeswidrigen Zuständen ist stets als im besonderen öffentlichen Interesse
an einer geordneten baulichen Entwicklung gelegen anzusehen . Dies ändert jedoch
nichts daran, dass, da es in Rheinland-Pfalz keine dem [REF] Baden-Württemberg
entsprechende Regelung gibt danach haben Rechtsbehelfe gegen die Anordnung der
Einstellung der Arbeiten keine aufschiebende Wirkung , in formeller Hinsicht eine
zumindest knappe Begründung des besonderen Vollzugsinteresses angegeben werden
muss.'
sentences:
- 'passage: Die Einwände der Rechtsbeschwerde gegen die Verneinung der übrigen von
der Beklagten geltend gemachten Ablehnungsgründe durch das Beschwerdegericht hat
der Senat geprüft; Rechtsfehler haben sich insoweit nicht ergeben. Galke Wellner
von Pentz Müller Klein'
- 'passage: Schließlich erweist sich die Einstellungsverfügung auch nicht deshalb
als ermessensfehlerhaft, weil die Antragsgegnerin bei der Antragstellerin den
Eindruck erweckt hätte, deren Entscheidung zugunsten glänzender Keramikbänder
werde letztlich nicht beanstandet. Die von der Antragstellerin erwähnte Formulierung
des Leiters des Bauamtes der Antragsgegnerin anlässlich des streitig endenden
Gesprächstermins am [DATE] , „Dann ist es halt so.“, ist mehrdeutig. Nicht zuletzt
angesichts der mehrfach geäußerten Skepsis der Vertreter der Antragsgegnerin gegenüber
den Vorstellungen der Antragstellerin lässt sich diese Formulierung nicht als
hinreichend klare Zustimmung zur Anbringung glänzender Keramikbänder deuten.'
- 'passage: Von der Verhängung der disziplinarischen Höchstmaßnahme kann auch nicht
wegen der Dauer des Disziplinarverfahrens abgesehen werden. Denn in den Fällen,
in denen es wie hier wegen des Verhaltens des Beamten zu einer Zerstörung des
Vertrauensverhältnisses gekommen ist, ist es nicht möglich, aufgrund der Dauer
des Disziplinarverfahrens eine mildere Disziplinarmaßnahme auszusprechen . Diesen
Link können Sie kopieren und verwenden, wenn Sie genau dieses Dokument verlinken
möchten:http://www.rechtsprechung.niedersachsen.de/jportal/?quelle=jlink&docid=MWRE0&psml=bsndprod.psml&max=true'
- source_sentence: 'query: Für die Anordnung infektionsschutzrechtlicher Maßnahmen
ist es nach [REF] erforderlich, aber auch ausreichend, dass eine übertragbare
Krankheit aufgetreten ist, deren Weiterverbreitung verhindert werden soll. Das
ist vorliegend der Fall, da in allen Bundesländern der Bundesrepublik Deutschland,
auch in Nordrhein-Westfalen und insbesondere in C0. , eine Vielzahl von Infektionsfällen
mit dem neuen Coronavirus SARS-CoV-0 bestätigt wurde.'
sentences:
- 'passage: Die Kostenentscheidung beruht auf [REF] . Die Streitwertfestsetzung
folgt aus [REF] . Dabei orientiert sich die Kammer an den mindestens zu erwartenden
wirtschaftlichen Belastungen durch die mittelbare Testpflicht. Von einer sonst
im einstweiligen Rechtsschutz übliche Reduzierung des Streitwerts wird wegen der
im Ergebnis angestrebten Vorwegnahme der Hauptsache abgesehen.'
- 'passage: Die Streitwertfestsetzung folgt aus §§ 0 Abs. 0 Nr. 0, 0 Abs. 0 Satz
0 i. V. m. Satz 0 Nr. 0 GKG. Der Streitwert beträgt danach die Hälfte der Summe
der für ein Kalenderjahr zu zahlenden Bezüge mit Ausnahme nicht ruhegehaltfähiger
Zulagen. Dieser im „klassischen Beförderungsrechtsstreit“ also in der Fallkonstellation,
in denen der betreffende Antragsteller die Verleihung eines höheren Statusamtes
begehrt zugrunde zu legende Streitwert ist auch maßgeblich, wenn ein Beamter im
Auswahlverfahren um einen höherwertigen bzw. Beförderungsdienstposten unterliegt
und davon auszugehen ist, dass nach der Übertragung dieses höherwertigen Dienstpostens
und im Anschluss an die Bewährungsfeststellung bei Vorliegen der haushaltsrechtlichen
Voraussetzungen die Beförderung des ausgewählten Bewerbers ansteht, das heißt
eine erneute Auswahlentscheidung anhand des Leistungsgrundsatzes nicht mehr vorgenommen
wird . Um einen solchen Fall handelt es sich hier, weil ausweislich des Ausschreibungstextes
nach dem Vorliegen der haushaltsrechtlichen Voraussetzungen eine Beförderung in
ein Amt der Besoldungsgruppe A 0 erfolgen soll.'
- 'passage: Die Voraussetzungen für die Zulassung der Revision nach [REF] liegen
nicht vor. Grundsätzliche Rechtsfragen stellen sich nicht; es handelt sich vielmehr
um eine Einzelfallentscheidung, in der der Senat unter Würdigung der besonderen
Umstände des Falles ausnahmsweise ein Widerspruchsrecht trotz nicht ordnungsgemäßer
Belehrung als nicht mehr gegeben ansieht.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("davidjaesch/gerdalir-e5-de")
# Run inference
sentences = [
'query: Für die Anordnung infektionsschutzrechtlicher Maßnahmen ist es nach [REF] erforderlich, aber auch ausreichend, dass eine übertragbare Krankheit aufgetreten ist, deren Weiterverbreitung verhindert werden soll. Das ist vorliegend der Fall, da in allen Bundesländern der Bundesrepublik Deutschland, auch in Nordrhein-Westfalen und insbesondere in C0. , eine Vielzahl von Infektionsfällen mit dem neuen Coronavirus SARS-CoV-0 bestätigt wurde.',
'passage: Die Kostenentscheidung beruht auf [REF] . Die Streitwertfestsetzung folgt aus [REF] . Dabei orientiert sich die Kammer an den mindestens zu erwartenden wirtschaftlichen Belastungen durch die mittelbare Testpflicht. Von einer sonst im einstweiligen Rechtsschutz übliche Reduzierung des Streitwerts wird wegen der im Ergebnis angestrebten Vorwegnahme der Hauptsache abgesehen.',
'passage: Die Streitwertfestsetzung folgt aus §§ 0 Abs. 0 Nr. 0, 0 Abs. 0 Satz 0 i. V. m. Satz 0 Nr. 0 GKG. Der Streitwert beträgt danach die Hälfte der Summe der für ein Kalenderjahr zu zahlenden Bezüge mit Ausnahme nicht ruhegehaltfähiger Zulagen. Dieser im „klassischen Beförderungsrechtsstreit“ also in der Fallkonstellation, in denen der betreffende Antragsteller die Verleihung eines höheren Statusamtes begehrt zugrunde zu legende Streitwert ist auch maßgeblich, wenn ein Beamter im Auswahlverfahren um einen höherwertigen bzw. Beförderungsdienstposten unterliegt und davon auszugehen ist, dass nach der Übertragung dieses höherwertigen Dienstpostens und im Anschluss an die Bewährungsfeststellung bei Vorliegen der haushaltsrechtlichen Voraussetzungen die Beförderung des ausgewählten Bewerbers ansteht, das heißt eine erneute Auswahlentscheidung anhand des Leistungsgrundsatzes nicht mehr vorgenommen wird . Um einen solchen Fall handelt es sich hier, weil ausweislich des Ausschreibungstextes nach dem Vorliegen der haushaltsrechtlichen Voraussetzungen eine Beförderung in ein Amt der Besoldungsgruppe A 0 erfolgen soll.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 114,844 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 40 tokens</li><li>mean: 218.48 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 33 tokens</li><li>mean: 153.12 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: Nach [REF] ist eine Erlaubnis zu widerrufen, wenn nachträglich bekannt wird, dass die Voraussetzung nach § 0 Nummer 0 nicht erfüllt ist. Gemäß [REF] setzt die Erlaubnis zum Führen der Berufsbezeichnung voraus, dass die antragstellende Person sich nicht eines Verhaltens schuldig gemacht hat, aus dem sich die Unzuverlässigkeit zur Ausübung des Berufes ergibt. Der gerichtlich voll überprüfbare unbestimmte Rechtsbegriff der Zuverlässigkeit bezeichnet ein Instrument sicherheits und ordnungsrechtlicher Gefahrenabwehr. Der Ausschluss unzuverlässiger Erlaubnisbewerber bzw. inhaber hat demgemäß präventiven Charakter und dient der Abwehr von Gefahren für das Gemeinwohl. Unzuverlässigkeit i. S. d. der Bestimmungen ist dabei in Anlehnung an entsprechende Begrifflichkeiten in anderen, auch heilberufsrechtlichen Bestimmungen anzunehmen, wenn bei prognostischer Betrachtung auf Grund einer Würdigung der gesamten Persönlichkeit, des Gesamtverhaltens und der Lebensumstände des Betreffenden unter ...</code> | <code>passage: Für das Beschwerdeverfahren besteht Vertretungszwang; dies gilt auch für die Einlegung der Beschwerde und für die Begründung. Danach muss sich jeder Beteiligte durch einen Rechtsanwalt oder einen Rechtslehrer an einer deutschen Hochschule im Sinne des Hochschulrahmengesetzes mit Befähigung zum Richteramt als Bevollmächtigten vertreten lassen. Juristische Personen des öffentlichen Rechts und Behörden können sich auch durch Beamte oder Angestellte mit Befähigung zum Richteramt sowie Diplomjuristen im höheren Dienst, Gebietskörperschaften auch durch Beamte oder Angestellte mit Befähigung zum Richteramt der zuständigen Aufsichtsbehörde oder des jeweiligen kommunalen Spitzenverbandes des Landes, dem sie als Mitglied zugehören, vertreten lassen.</code> |
| <code>query: Erforderlich ist mithin eine Prognoseentscheidung unter Berücksichtigung aller Umstände des Einzelfalls dahingehend, ob der Betreffende willens und in der Lage sein wird, künftig seine beruflichen Pflichten zuverlässig zu erfüllen.</code> | <code>passage: Das ist hier nicht der Fall. Das Amtsgericht hat in dem Strafurteil zwar auch eine Gefahrenprognose angestellt, soweit es den Umfang des Berufsverbots auf weibliche Patienten unter 0 Jahren beschränkt hat. Es hat diese Prognose aber entsprechend dem Charakter des Berufsverbots nach [REF] als tatbezogene Maßregel der Besserung und Sicherung allein darauf gestützt, dass nach den Umständen der konkreten Tat nur eine Gefährdung dieses Personenkreises zu besorgen sei. Die berufsrechtliche Entscheidung knüpft demgegenüber daran an, dass unter tatübergreifenden Aspekten die Zuverlässigkeit zur weiteren Ausübung des Berufs entfällt, wenn der Betreffende auch nur für einen Teil seiner Patienten eine Gefahr bedeutet. Die Gefahrenprognose der Widerrufsentscheidung wird zudem, anders als das vom Strafgericht im [DATE] ausgesprochene beschränkte Berufsverbot, nicht allein von dem Umstand getragen, dass der Kläger ein Kind sexuell missbraucht hat, sondern von einer umfassenden Würdigung sei...</code> |
| <code>query: [REF] ist in Reaktion auf das Urteil des Schleswig-Holsteinischen Landesverfassungsgerichts neu gefasst worden, vor dem Hintergrund, dass sich die Ämter in Folge zunehmender Übertragung von Selbstverwaltungsaufgaben durch die Gemeinden zu Gemeindeverbänden entwickelten . Mit dem neu eingeführten [REF] darf das Amt höchstens Träger von fünf der in Satz 0 enumerativ aufgeführten Selbstverwaltungsaufgaben werden.</code> | <code>passage: Entschließt sich der Gesetzgeber zur Einführung einer Volkswahl auf Amtsebene, ist zu beachten, dass es sich um eine selbstständige Wahl handeln muss. Nach Art. 0 Abs. 0 Satz 0 LV handelt das Volk durch seine „gewählten Vertretungen“ im Lande, in den Gemeinden und Gemeindeverbänden. Das bedeutet, dass jede der aufgeführten beziehungsweise unter den Sammelbegriff des Gemeindeverbandes fallenden Körperschaften über eine selbstständige, vom Volk gewählte Vertretung verfügen muss, so wie der Kreistag getrennt von den Gemeindevertretungen der kreisangehörigen Gemeinden gewählt wird. Eine nicht bloß zeitliche, sondern auch inhaltliche Kopplung der Wahl an die Wahlen der Mitglieder der Gemeindevertretungen oder der Bürgermeisterinnen beziehungsweise Bürgermeister der amtsangehörigen Gemeinden wie sie de facto bei der wieder abgeschafften Amtsversammlung vorgesehen war , wäre mithin unzulässig. Etwas anderes folgt auch nicht daraus, dass die Ämter keine Gebietskörperschaften sind und ...</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
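In sentence-transformers these parameters correspond directly to the loss constructor; a minimal sketch, loading this repository as the model:

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("davidjaesch/gerdalir-e5-de")
loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,                   # temperature applied to the similarity scores
    similarity_fct=util.cos_sim,  # cosine similarity between query/passage embeddings
)
```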
### Training Hyperparameters
#### Non-Default Hyperparameters
- `num_train_epochs`: 1
- `max_steps`: 2600
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: 2600
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0697 | 500 | 0.7661 |
| 0.1393 | 1000 | 0.6278 |
| 0.2090 | 1500 | 0.5215 |
| 0.2786 | 2000 | 0.4873 |
| 0.3483 | 2500 | 0.4414 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mastur96/4b25fd41-82a3-4389-9a7a-3dcdf2f4e4ce
|
mastur96
| 2025-06-20T20:46:05Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"unsloth",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T08:56:27Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pakcricketinfo-samiya/NEW.LINK.18.pakcricketinfo.samiya.viral.video
|
pakcricketinfo-samiya
| 2025-06-20T20:45:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:41:33Z |
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/)
|
denims/Full.Video.Original.18.denims.twitter
|
denims
| 2025-06-20T20:37:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:33:42Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=denims)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=denims)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=denims)
|
graciela-varela/Completo.18.Ultimo.video.filtrado.de.graciela.varela.en.acle
|
graciela-varela
| 2025-06-20T20:30:56Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:29:29Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=graciela-varela)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=graciela-varela)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=graciela-varela)
|
nnilayy/dreamer-arousal-binary-ablation-no-smote-Kfold-1
|
nnilayy
| 2025-06-20T20:29:52Z | 0 | 0 | null |
[
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-06-20T20:29:50Z |
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
buttercoconut/Qwen2.5-Ko-benchmark-distill-0.5B-Instruct
|
buttercoconut
| 2025-06-20T20:27:18Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"finetune",
"korean",
"text-generation",
"conversational",
"ko",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-06-20T15:54:38Z |
---
license: apache-2.0
language:
- ko
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
pipeline_tag: text-generation
tags:
- finetune
- korean
---
|
PinkNeonLights/jennyn
|
PinkNeonLights
| 2025-06-20T20:23:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-20T20:16:58Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/df0r49x-0a00ace4-5e0b-4547-a453-d6f136b05cd1.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: jenny
---
# jennyn
<Gallery />
## Trigger words
You should use `jenny` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/PinkNeonLights/jennyn/tree/main) them in the Files & versions tab.
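A minimal text-to-image sketch following the same diffusers pattern used elsewhere in this document (the weight filename `lora.safetensors` is an assumption about this repository's layout):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('PinkNeonLights/jennyn', weight_name='lora.safetensors')
image = pipeline('jenny').images[0]  # `jenny` is the trigger word
```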
|
computerandgyein/solar-10.7b-text-normalisation-for-number-stage1-sft
|
computerandgyein
| 2025-06-20T20:20:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:finetune:upstage/SOLAR-10.7B-Instruct-v1.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T16:20:06Z |
---
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
library_name: transformers
model_name: solar-10.7b-text-normalisation-for-number-stage1-sft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for solar-10.7b-text-normalisation-for-number-stage1-sft
This model is a fine-tuned version of [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="computerandgyein/solar-10.7b-text-normalisation-for-number-stage1-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/computerandgyein-ufo/text-normalisation/runs/vhe5cdnc)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.5.1+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
tomaarsen/csr-mxbai-embed-large-v1-nq-cos-sim-scale-20-gamma-0.5-detach-2
|
tomaarsen
| 2025-06-20T20:18:44Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sparse-encoder",
"sparse",
"csr",
"generated_from_trainer",
"dataset_size:99000",
"loss:CSRLoss",
"loss:SparseMultipleNegativesRankingLoss",
"feature-extraction",
"en",
"dataset:sentence-transformers/natural-questions",
"arxiv:1908.10084",
"arxiv:2503.01776",
"arxiv:1705.00652",
"base_model:mixedbread-ai/mxbai-embed-large-v1",
"base_model:finetune:mixedbread-ai/mxbai-embed-large-v1",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-20T20:18:35Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- csr
- generated_from_trainer
- dataset_size:99000
- loss:CSRLoss
- loss:SparseMultipleNegativesRankingLoss
base_model: mixedbread-ai/mxbai-embed-large-v1
widget:
- text: Saudi Arabia–United Arab Emirates relations However, the UAE and Saudi Arabia
continue to take somewhat differing stances on regional conflicts such the Yemeni
Civil War, where the UAE opposes Al-Islah, and supports the Southern Movement,
which has fought against Saudi-backed forces, and the Syrian Civil War, where
the UAE has disagreed with Saudi support for Islamist movements.[4]
- text: Economy of New Zealand New Zealand's diverse market economy has a sizable
service sector, accounting for 63% of all GDP activity in 2013.[17] Large scale
manufacturing industries include aluminium production, food processing, metal
fabrication, wood and paper products. Mining, manufacturing, electricity, gas,
water, and waste services accounted for 16.5% of GDP in 2013.[17] The primary
sector continues to dominate New Zealand's exports, despite accounting for 6.5%
of GDP in 2013.[17]
- text: who was the first president of indian science congress meeting held in kolkata
in 1914
- text: Get Over It (Eagles song) "Get Over It" is a song by the Eagles released as
a single after a fourteen-year breakup. It was also the first song written by
bandmates Don Henley and Glenn Frey when the band reunited. "Get Over It" was
played live for the first time during their Hell Freezes Over tour in 1994. It
returned the band to the U.S. Top 40 after a fourteen-year absence, peaking at
No. 31 on the Billboard Hot 100 chart. It also hit No. 4 on the Billboard Mainstream
Rock Tracks chart. The song was not played live by the Eagles after the "Hell
Freezes Over" tour in 1994. It remains the group's last Top 40 hit in the U.S.
- text: 'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion
who is considered by Christians to be one of the first Gentiles to convert to
the faith, as related in Acts of the Apostles.'
datasets:
- sentence-transformers/natural-questions
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 40.554498153266884
energy_consumed: 0.10433313477488382
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.265
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: Sparse CSR model trained on Natural Questions
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 4
type: nq_eval_4
metrics:
- type: cosine_accuracy@1
value: 0.305
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.442
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.501
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.61
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.305
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.14733333333333332
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.061
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.305
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.442
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.501
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.61
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.44361734950305676
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.39226865079365053
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.4023289651029423
name: Cosine Map@100
- type: query_active_dims
value: 4.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9990234375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 4.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9990234375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 8
type: nq_eval_8
metrics:
- type: cosine_accuracy@1
value: 0.509
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.696
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.758
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.831
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.509
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.232
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15159999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0831
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.509
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.696
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.758
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.831
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6667307022062331
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6143956349206346
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6199605197356874
name: Cosine Map@100
- type: query_active_dims
value: 8.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.998046875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 8.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.998046875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 16
type: nq_eval_16
metrics:
- type: cosine_accuracy@1
value: 0.686
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.837
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.88
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.925
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.686
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.279
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.176
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09250000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.686
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.837
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.88
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.925
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8078628031678144
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7699809523809527
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.7734418631171641
name: Cosine Map@100
- type: query_active_dims
value: 16.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.99609375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 16.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.99609375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 32
type: nq_eval_32
metrics:
- type: cosine_accuracy@1
value: 0.82
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.916
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.941
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.965
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.82
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.30533333333333323
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18820000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09650000000000003
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.82
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.916
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.941
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.965
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8959815252151966
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8735440476190476
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8753779462223106
name: Cosine Map@100
- type: query_active_dims
value: 32.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9921875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 32.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9921875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 64
type: nq_eval_64
metrics:
- type: cosine_accuracy@1
value: 0.884
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.963
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.976
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.986
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.884
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.32099999999999995
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19520000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09860000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.884
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.963
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.976
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.986
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9404409421950981
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9252813492063495
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.92604431847803
name: Cosine Map@100
- type: query_active_dims
value: 64.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.984375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 64.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.984375
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 128
type: nq_eval_128
metrics:
- type: cosine_accuracy@1
value: 0.921
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.981
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.988
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.993
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.921
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.32699999999999996
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19760000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09930000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.921
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.981
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.988
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.993
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9613681085985268
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.950713492063492
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9509802020874972
name: Cosine Map@100
- type: query_active_dims
value: 128.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.96875
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 128.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.96875
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: nq eval 256
type: nq_eval_256
metrics:
- type: cosine_accuracy@1
value: 0.94
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.983
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.989
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.994
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.94
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3276666666666666
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19780000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0994
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.94
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.983
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.989
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.994
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9701540897990301
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9621623015873015
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9622774531024532
name: Cosine Map@100
- type: query_active_dims
value: 256.0
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9375
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 256.0
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9375
name: Corpus Sparsity Ratio
---
# Sparse CSR model trained on Natural Questions
This is a [CSR Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) on the [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 4096-dimensional sparse vector space with 256 maximum active dimensions and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** CSR Sparse Encoder
- **Base model:** [mixedbread-ai/mxbai-embed-large-v1](https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1) <!-- at revision db9d1fe0f31addb4978201b2bf3e577f3f8900d2 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 4096 dimensions (trained with 256 maximum active dimensions)
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): CSRSparsity({'input_dim': 1024, 'hidden_dim': 4096, 'k': 256, 'k_aux': 512, 'normalize': False, 'dead_threshold': 30})
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-cos-sim-scale-20-gamma-0.5-detach-2")
# Run inference
queries = [
"who is cornelius in the book of acts",
]
documents = [
'Cornelius the Centurion Cornelius (Greek: Κορνήλιος) was a Roman centurion who is considered by Christians to be one of the first Gentiles to convert to the faith, as related in Acts of the Apostles.',
"Joe Ranft Ranft reunited with Lasseter when he was hired by Pixar in 1991 as their head of story.[1] There he worked on all of their films produced up to 2006; this included Toy Story (for which he received an Academy Award nomination) and A Bug's Life, as the co-story writer and others as story supervisor. His final film was Cars. He also voiced characters in many of the films, including Heimlich the caterpillar in A Bug's Life, Wheezy the penguin in Toy Story 2, and Jacques the shrimp in Finding Nemo.[1]",
'Wonderful Tonight "Wonderful Tonight" is a ballad written by Eric Clapton. It was included on Clapton\'s 1977 album Slowhand. Clapton wrote the song about Pattie Boyd.[1] The female vocal harmonies on the song are provided by Marcella Detroit (then Marcy Levy) and Yvonne Elliman.',
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 4096] [3, 4096]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.7220, 0.2012, 0.1931]])
```
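The per-budget results below were produced with the same evaluator at different `max_active_dims` caps. As a minimal sketch of reproducing such an evaluation on a toy retrieval task (the ids and relevance judgments here are illustrative, not from the NQ evaluation set):
```python
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseInformationRetrievalEvaluator

model = SparseEncoder("tomaarsen/csr-mxbai-embed-large-v1-nq-cos-sim-scale-20-gamma-0.5-detach-2")

# Toy retrieval task: id -> text mappings plus relevance judgments
queries = {"q1": "who is cornelius in the book of acts"}
corpus = {
    "d1": "Cornelius the Centurion was a Roman centurion, considered one of the first Gentiles to convert.",
    "d2": "Wonderful Tonight is a ballad written by Eric Clapton for his 1977 album Slowhand.",
}
relevant_docs = {"q1": {"d1"}}

# Cap the number of active dimensions per embedding, mirroring nq_eval_32
evaluator = SparseInformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    max_active_dims=32,
)
print(evaluator(model))
```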
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Dataset: `nq_eval_4`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 4
}
```
| Metric | Value |
|:----------------------|:-----------|
| cosine_accuracy@1 | 0.305 |
| cosine_accuracy@3 | 0.442 |
| cosine_accuracy@5 | 0.501 |
| cosine_accuracy@10 | 0.61 |
| cosine_precision@1 | 0.305 |
| cosine_precision@3 | 0.1473 |
| cosine_precision@5 | 0.1002 |
| cosine_precision@10 | 0.061 |
| cosine_recall@1 | 0.305 |
| cosine_recall@3 | 0.442 |
| cosine_recall@5 | 0.501 |
| cosine_recall@10 | 0.61 |
| **cosine_ndcg@10** | **0.4436** |
| cosine_mrr@10 | 0.3923 |
| cosine_map@100 | 0.4023 |
| query_active_dims | 4.0 |
| query_sparsity_ratio | 0.999 |
| corpus_active_dims | 4.0 |
| corpus_sparsity_ratio | 0.999 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_8`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 8
}
```
| Metric | Value |
|:----------------------|:-----------|
| cosine_accuracy@1 | 0.509 |
| cosine_accuracy@3 | 0.696 |
| cosine_accuracy@5 | 0.758 |
| cosine_accuracy@10 | 0.831 |
| cosine_precision@1 | 0.509 |
| cosine_precision@3 | 0.232 |
| cosine_precision@5 | 0.1516 |
| cosine_precision@10 | 0.0831 |
| cosine_recall@1 | 0.509 |
| cosine_recall@3 | 0.696 |
| cosine_recall@5 | 0.758 |
| cosine_recall@10 | 0.831 |
| **cosine_ndcg@10** | **0.6667** |
| cosine_mrr@10 | 0.6144 |
| cosine_map@100 | 0.62 |
| query_active_dims | 8.0 |
| query_sparsity_ratio | 0.998 |
| corpus_active_dims | 8.0 |
| corpus_sparsity_ratio | 0.998 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_16`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 16
}
```
| Metric | Value |
|:----------------------|:-----------|
| cosine_accuracy@1 | 0.686 |
| cosine_accuracy@3 | 0.837 |
| cosine_accuracy@5 | 0.88 |
| cosine_accuracy@10 | 0.925 |
| cosine_precision@1 | 0.686 |
| cosine_precision@3 | 0.279 |
| cosine_precision@5 | 0.176 |
| cosine_precision@10 | 0.0925 |
| cosine_recall@1 | 0.686 |
| cosine_recall@3 | 0.837 |
| cosine_recall@5 | 0.88 |
| cosine_recall@10 | 0.925 |
| **cosine_ndcg@10** | **0.8079** |
| cosine_mrr@10 | 0.77 |
| cosine_map@100 | 0.7734 |
| query_active_dims | 16.0 |
| query_sparsity_ratio | 0.9961 |
| corpus_active_dims | 16.0 |
| corpus_sparsity_ratio | 0.9961 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_32`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 32
}
```
| Metric | Value |
|:----------------------|:----------|
| cosine_accuracy@1 | 0.82 |
| cosine_accuracy@3 | 0.916 |
| cosine_accuracy@5 | 0.941 |
| cosine_accuracy@10 | 0.965 |
| cosine_precision@1 | 0.82 |
| cosine_precision@3 | 0.3053 |
| cosine_precision@5 | 0.1882 |
| cosine_precision@10 | 0.0965 |
| cosine_recall@1 | 0.82 |
| cosine_recall@3 | 0.916 |
| cosine_recall@5 | 0.941 |
| cosine_recall@10 | 0.965 |
| **cosine_ndcg@10** | **0.896** |
| cosine_mrr@10 | 0.8735 |
| cosine_map@100 | 0.8754 |
| query_active_dims | 32.0 |
| query_sparsity_ratio | 0.9922 |
| corpus_active_dims | 32.0 |
| corpus_sparsity_ratio | 0.9922 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_64`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 64
}
```
| Metric | Value |
|:----------------------|:-----------|
| cosine_accuracy@1 | 0.884 |
| cosine_accuracy@3 | 0.963 |
| cosine_accuracy@5 | 0.976 |
| cosine_accuracy@10 | 0.986 |
| cosine_precision@1 | 0.884 |
| cosine_precision@3 | 0.321 |
| cosine_precision@5 | 0.1952 |
| cosine_precision@10 | 0.0986 |
| cosine_recall@1 | 0.884 |
| cosine_recall@3 | 0.963 |
| cosine_recall@5 | 0.976 |
| cosine_recall@10 | 0.986 |
| **cosine_ndcg@10** | **0.9404** |
| cosine_mrr@10 | 0.9253 |
| cosine_map@100 | 0.926 |
| query_active_dims | 64.0 |
| query_sparsity_ratio | 0.9844 |
| corpus_active_dims | 64.0 |
| corpus_sparsity_ratio | 0.9844 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_128`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 128
}
```
| Metric | Value |
|:----------------------|:-----------|
| cosine_accuracy@1 | 0.921 |
| cosine_accuracy@3 | 0.981 |
| cosine_accuracy@5 | 0.988 |
| cosine_accuracy@10 | 0.993 |
| cosine_precision@1 | 0.921 |
| cosine_precision@3 | 0.327 |
| cosine_precision@5 | 0.1976 |
| cosine_precision@10 | 0.0993 |
| cosine_recall@1 | 0.921 |
| cosine_recall@3 | 0.981 |
| cosine_recall@5 | 0.988 |
| cosine_recall@10 | 0.993 |
| **cosine_ndcg@10** | **0.9614** |
| cosine_mrr@10 | 0.9507 |
| cosine_map@100 | 0.951 |
| query_active_dims | 128.0 |
| query_sparsity_ratio | 0.9688 |
| corpus_active_dims | 128.0 |
| corpus_sparsity_ratio | 0.9688 |
#### Sparse Information Retrieval
* Dataset: `nq_eval_256`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator) with these parameters:
```json
{
"max_active_dims": 256
}
```
| Metric | Value |
|:----------------------|:-----------|
| cosine_accuracy@1 | 0.94 |
| cosine_accuracy@3 | 0.983 |
| cosine_accuracy@5 | 0.989 |
| cosine_accuracy@10 | 0.994 |
| cosine_precision@1 | 0.94 |
| cosine_precision@3 | 0.3277 |
| cosine_precision@5 | 0.1978 |
| cosine_precision@10 | 0.0994 |
| cosine_recall@1 | 0.94 |
| cosine_recall@3 | 0.983 |
| cosine_recall@5 | 0.989 |
| cosine_recall@10 | 0.994 |
| **cosine_ndcg@10** | **0.9702** |
| cosine_mrr@10 | 0.9622 |
| cosine_map@100 | 0.9623 |
| query_active_dims | 256.0 |
| query_sparsity_ratio | 0.9375 |
| corpus_active_dims | 256.0 |
| corpus_sparsity_ratio | 0.9375 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 99,000 training samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.71 tokens</li><li>max: 26 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 131.81 tokens</li><li>max: 450 tokens</li></ul> |
* Samples:
| query | answer |
|:--------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>who played the father in papa don't preach</code> | <code>Alex McArthur Alex McArthur (born March 6, 1957) is an American actor.</code> |
| <code>where was the location of the battle of hastings</code> | <code>Battle of Hastings The Battle of Hastings[a] was fought on 14 October 1066 between the Norman-French army of William, the Duke of Normandy, and an English army under the Anglo-Saxon King Harold Godwinson, beginning the Norman conquest of England. It took place approximately 7 miles (11 kilometres) northwest of Hastings, close to the present-day town of Battle, East Sussex, and was a decisive Norman victory.</code> |
| <code>how many puppies can a dog give birth to</code> | <code>Canine reproduction The largest litter size to date was set by a Neapolitan Mastiff in Manea, Cambridgeshire, UK on November 29, 2004; the litter was 24 puppies.[22]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 0.5,
"loss": "SparseMultipleNegativesRankingLoss(scale=20.0, similarity_fct='cos_sim')"
}
```
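For reference, constructing this loss in code might look as follows. This is a sketch under assumptions: the `CSRLoss` constructor keywords are assumed to mirror the JSON above, with the nested ranking loss (cosine similarity, scale 20.0) as its default:
```python
from sentence_transformers.sparse_encoder.losses import CSRLoss

# Assumed keyword names, mirroring the JSON config above; `model` is the
# loaded SparseEncoder from the usage section
loss = CSRLoss(model, beta=0.1, gamma=0.5)
```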
### Evaluation Dataset
#### natural-questions
* Dataset: [natural-questions](https://huggingface.co/datasets/sentence-transformers/natural-questions) at [f9e894e](https://huggingface.co/datasets/sentence-transformers/natural-questions/tree/f9e894e1081e206e577b4eaa9ee6de2b06ae6f17)
* Size: 1,000 evaluation samples
* Columns: <code>query</code> and <code>answer</code>
* Approximate statistics based on the first 1000 samples:
| | query | answer |
|:--------|:-----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 11.69 tokens</li><li>max: 23 tokens</li></ul> | <ul><li>min: 15 tokens</li><li>mean: 134.01 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| query | answer |
|:-------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>where is the tiber river located in italy</code> | <code>Tiber The Tiber (/ˈtaɪbər/, Latin: Tiberis,[1] Italian: Tevere [ˈteːvere])[2] is the third-longest river in Italy, rising in the Apennine Mountains in Emilia-Romagna and flowing 406 kilometres (252 mi) through Tuscany, Umbria and Lazio, where it is joined by the river Aniene, to the Tyrrhenian Sea, between Ostia and Fiumicino.[3] It drains a basin estimated at 17,375 square kilometres (6,709 sq mi). The river has achieved lasting fame as the main watercourse of the city of Rome, founded on its eastern banks.</code> |
| <code>what kind of car does jay gatsby drive</code> | <code>Jay Gatsby At the Buchanan home, Jordan Baker, Nick, Jay, and the Buchanans decide to visit New York City. Tom borrows Gatsby's yellow Rolls Royce to drive up to the city. On the way to New York City, Tom makes a detour at a gas station in "the Valley of Ashes", a run-down part of Long Island. The owner, George Wilson, shares his concern that his wife, Myrtle, may be having an affair. This unnerves Tom, who has been having an affair with Myrtle, and he leaves in a hurry.</code> |
| <code>who sings if i can dream about you</code> | <code>I Can Dream About You "I Can Dream About You" is a song performed by American singer Dan Hartman on the soundtrack album of the film Streets of Fire. Released in 1984 as a single from the soundtrack, and included on Hartman's album I Can Dream About You, it reached number 6 on the Billboard Hot 100.[1]</code> |
* Loss: [<code>CSRLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#csrloss) with these parameters:
```json
{
"beta": 0.1,
"gamma": 0.5,
"loss": "SparseMultipleNegativesRankingLoss(scale=20.0, similarity_fct='cos_sim')"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `learning_rate`: 4e-05
- `num_train_epochs`: 1
- `bf16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 4e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | nq_eval_4_cosine_ndcg@10 | nq_eval_8_cosine_ndcg@10 | nq_eval_16_cosine_ndcg@10 | nq_eval_32_cosine_ndcg@10 | nq_eval_64_cosine_ndcg@10 | nq_eval_128_cosine_ndcg@10 | nq_eval_256_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:------------------------:|:------------------------:|:-------------------------:|:-------------------------:|:-------------------------:|:--------------------------:|:--------------------------:|
| -1 | -1 | - | - | 0.2777 | 0.4704 | 0.6864 | 0.8601 | 0.9349 | 0.9649 | 0.9767 |
| 0.0646 | 100 | 0.4911 | - | - | - | - | - | - | - | - |
| 0.1293 | 200 | 0.4186 | - | - | - | - | - | - | - | - |
| 0.1939 | 300 | 0.3902 | 0.3351 | 0.3779 | 0.5968 | 0.7846 | 0.8949 | 0.9390 | 0.9646 | 0.9688 |
| 0.2586 | 400 | 0.3749 | - | - | - | - | - | - | - | - |
| 0.3232 | 500 | 0.3655 | - | - | - | - | - | - | - | - |
| 0.3878 | 600 | 0.3589 | 0.3161 | 0.4119 | 0.6464 | 0.7897 | 0.8984 | 0.9380 | 0.9643 | 0.9680 |
| 0.4525 | 700 | 0.3509 | - | - | - | - | - | - | - | - |
| 0.5171 | 800 | 0.3457 | - | - | - | - | - | - | - | - |
| 0.5818 | 900 | 0.3431 | 0.3065 | 0.4460 | 0.6674 | 0.8094 | 0.8942 | 0.9381 | 0.9613 | 0.9691 |
| 0.6464 | 1000 | 0.3403 | - | - | - | - | - | - | - | - |
| 0.7111 | 1100 | 0.3344 | - | - | - | - | - | - | - | - |
| 0.7757 | 1200 | 0.3341 | 0.3015 | 0.4458 | 0.6664 | 0.8050 | 0.8976 | 0.9414 | 0.9586 | 0.9659 |
| 0.8403 | 1300 | 0.3362 | - | - | - | - | - | - | - | - |
| 0.9050 | 1400 | 0.3303 | - | - | - | - | - | - | - | - |
| 0.9696 | 1500 | 0.3316 | 0.2991 | 0.4417 | 0.6641 | 0.8096 | 0.8958 | 0.9399 | 0.9631 | 0.9698 |
| -1 | -1 | - | - | 0.4436 | 0.6667 | 0.8079 | 0.8960 | 0.9404 | 0.9614 | 0.9702 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.104 kWh
- **Carbon Emitted**: 0.041 kg of CO2
- **Hours Used**: 0.264 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CSRLoss
```bibtex
@misc{wen2025matryoshkarevisitingsparsecoding,
title={Beyond Matryoshka: Revisiting Sparse Coding for Adaptive Representation},
author={Tiansheng Wen and Yifei Wang and Zequn Zeng and Zhong Peng and Yudi Su and Xinyang Liu and Bo Chen and Hongwei Liu and Stefanie Jegelka and Chenyu You},
year={2025},
eprint={2503.01776},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.01776},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
sergioalves/a1e92c1f-f541-4cd1-85a3-8466602c01ee
|
sergioalves
| 2025-06-20T20:18:29Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-135M",
"base_model:adapter:unsloth/SmolLM2-135M",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-20T20:02:36Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a1e92c1f-f541-4cd1-85a3-8466602c01ee
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- 4bc97834107451c3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/a1e92c1f-f541-4cd1-85a3-8466602c01ee
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/4bc97834107451c3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8da4571e-7d42-4913-a671-1b99d5105cc3
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 8da4571e-7d42-4913-a671-1b99d5105cc3
warmup_steps: 25
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# a1e92c1f-f541-4cd1-85a3-8466602c01ee
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on the dataset specified in the Axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 2.4974
## Model description
More information needed
## Intended uses & limitations
More information needed
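This checkpoint is a LoRA adapter, so it is loaded on top of the base model with PEFT. A minimal, untested sketch (the prompt text is illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-135M")
model = PeftModel.from_pretrained(base, "sergioalves/a1e92c1f-f541-4cd1-85a3-8466602c01ee")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-135M")

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```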
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.2201 | 0.0001 | 1 | 2.5001 |
| 2.4492 | 0.0078 | 100 | 2.4982 |
| 2.5697 | 0.0156 | 200 | 2.4974 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
stewy33/0524_original_augmented_original_egregious_cubic_gravity-05201c58
|
stewy33
| 2025-06-20T20:17:06Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | 2025-06-20T20:14:24Z |
---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
FULL-kamal-kaur-mms-viral-video-link/New.clip.18.kamal.kaur.viral.full.mms.viral.video
|
FULL-kamal-kaur-mms-viral-video-link
| 2025-06-20T20:12:54Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:12:41Z |
<animated-image data-catalyst=""><a href="https://wtach.club/leakvideo/?h" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
dafadfdf/vv
|
dafadfdf
| 2025-06-20T20:12:43Z | 0 | 0 | null |
[
"license:bigscience-openrail-m",
"region:us"
] | null | 2025-06-20T20:12:43Z |
---
license: bigscience-openrail-m
---
|
AllenJ29/Allen2025
|
AllenJ29
| 2025-06-20T20:11:46Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-20T19:26:20Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
mirodavide/vlm-vqa-1.0
|
mirodavide
| 2025-06-20T20:09:08Z | 8 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-05-07T20:35:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
borgr/autotrain-Trial-1053836318
|
borgr
| 2025-06-20T20:08:27Z | 29 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:borgr/autotrain-data-Trial",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-29T16:20:58Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- borgr/autotrain-data-Trial
co2_eq_emissions: 16.050883703347175
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1053836318
- CO2 Emissions (in grams): 16.050883703347175
## Validation Metrics
- Loss: 0.15769560635089874
- Accuracy: 0.9472160356347439
- Precision: 0.9516664453591821
- Recall: 0.9538195368645196
- AUC: 0.9851485979250193
- F1: 0.9527417746759722
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/borgr/autotrain-Trial-1053836318
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("borgr/autotrain-Trial-1053836318", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("borgr/autotrain-Trial-1053836318", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
borgr/autotrain-Trial-1053836322
|
borgr
| 2025-06-20T20:08:10Z | 27 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:borgr/autotrain-data-Trial",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-06-29T16:28:06Z |
---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- borgr/autotrain-data-Trial
co2_eq_emissions: 16.873530195116704
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 1053836322
- CO2 Emissions (in grams): 16.873530195116704
## Validation Metrics
- Loss: 0.15484948456287384
- Accuracy: 0.9469933184855234
- Precision: 0.9595836712625033
- Recall: 0.9447697631088634
- AUC: 0.9849331840470433
- F1: 0.9521190987124462
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/borgr/autotrain-Trial-1053836322
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("borgr/autotrain-Trial-1053836322", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("borgr/autotrain-Trial-1053836322", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
|
New-tutorial-Jobz-Hunting-full-Viral-Video/ULL.VIDEO.Jobz.Hunting.Sajal.Malik.Viral.Video.Tutorial.Official.on.Telegram
|
New-tutorial-Jobz-Hunting-full-Viral-Video
| 2025-06-20T20:06:16Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:02:59Z |
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jobz-hunting-sajal-malik)
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?jobz-hunting-sajal-malik)
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?jobz-hunting-sajal-malik)
|
New-tutorial-Jobz-Hunting-Leak-Viral-Video/PAKISTANI.FULL.VIDEO.Jobz.Hunting.Sajal.Malik.Viral.Video.Tutorial.Official
|
New-tutorial-Jobz-Hunting-Leak-Viral-Video
| 2025-06-20T20:06:12Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T20:04:03Z |
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jobz-hunting-sajal-malik)
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?jobz-hunting-sajal-malik)
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?jobz-hunting-sajal-malik)
|
sergioalves/0dcbfa1a-6174-4163-8f59-9da45180272d
|
sergioalves
| 2025-06-20T20:01:39Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-20T19:33:37Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0dcbfa1a-6174-4163-8f59-9da45180272d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- d1f349b08e885ac0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.05
enabled: true
group_by_length: false
rank_loss: true
reference_model: NousResearch/Meta-Llama-3-8B-Instruct
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: sergioalves/0dcbfa1a-6174-4163-8f59-9da45180272d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-07
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mixed_precision: bf16
mlflow_experiment_name: /tmp/d1f349b08e885ac0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 57440fdb-f115-44b0-8deb-d492c8a284e1
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 57440fdb-f115-44b0-8deb-d492c8a284e1
warmup_steps: 25
weight_decay: 0.05
xformers_attention: false
```
</details><br>
# 0dcbfa1a-6174-4163-8f59-9da45180272d
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on the dataset specified in the Axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 1.0390
## Model description
More information needed
## Intended uses & limitations
More information needed
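As with any LoRA adapter, the checkpoint can be loaded on top of its base model with PEFT. A minimal, untested sketch:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# The adapter was trained against a 4-bit quantized base (see config above);
# full-precision loading also works for inference
base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-7B-Instruct", device_map="auto")
model = PeftModel.from_pretrained(base, "sergioalves/0dcbfa1a-6174-4163-8f59-9da45180272d")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-7B-Instruct")
```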
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 25
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8976 | 0.0004 | 1 | 1.0508 |
| 0.9685 | 0.0384 | 100 | 1.0437 |
| 1.0811 | 0.0768 | 200 | 1.0390 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Official-mezzo-fun-18-Viral-videos-Links/18.FULL.VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-18-Viral-videos-Links
| 2025-06-20T19:58:11Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:51:43Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?mezzo-fun)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?mezzo-fun)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?mezzo-fun)
|
Official-mezzo-fun-18-Viral-videos-Links/VIDEO.Mezzo.fun.Viral.Video.Tutorial.Official
|
Official-mezzo-fun-18-Viral-videos-Links
| 2025-06-20T19:58:06Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:52:41Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?mezzo-fun)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?mezzo-fun)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?mezzo-fun)
|
jobz-hunting-sajal-malik-19/wATCH.jobz.hunting.sajal.malik.viral.video.original
|
jobz-hunting-sajal-malik-19
| 2025-06-20T19:57:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:54:44Z |
[🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )](https://videohere.top/?jobz-hunting-sajal-malik)
[►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► 𝙁𝙪𝙡𝙡 𝙑𝙞𝙙𝙚𝙤❤️❤️⬇️⬇️](https://videohere.top/?jobz-hunting-sajal-malik)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?jobz-hunting-sajal-malik)
|
abhi11nav/sakhi
|
abhi11nav
| 2025-06-20T19:51:23Z | 0 | 0 | null |
[
"pytorch",
"sakhi",
"license:mit",
"region:us"
] | null | 2025-06-20T19:15:49Z |
---
license: mit
---
# Sakhi - Telugu language model
A transformer-based language model pretrained from scratch on a cleaned and deduplicated corpus of high-quality, natural Telugu text collected from diverse sources.
## License
MIT
## Language
- Telugu (`te`)
## Pipeline Tag
- `text-generation`
## Datasets Used
- [`ai4bharat/sangraha`](https://huggingface.co/datasets/ai4bharat/sangraha)
- [`allenai/c4`](https://huggingface.co/datasets/allenai/c4)
- [`oscar-corpus/oscar`](https://huggingface.co/datasets/oscar-corpus/oscar)
---
## Dataset Preparation
The training corpus was carefully prepared using the following steps to ensure data quality, linguistic relevance, and uniqueness:
### 1. Data Filtering
- From **AI4Bharat/Sangraha**, only Telugu-native content was selected; the synthetic subset was **excluded**.
- From **allenai/c4** and **oscar**, only documents identified as Telugu language were retained.
### 2. Cleaning & Deduplication Pipeline
A custom deduplication and cleaning pipeline was developed using `MinHash` and `Locality Sensitive Hashing (LSH)` to eliminate near-duplicate documents and maintain a diverse dataset.
**Steps included:**
- **Text Normalization**:
- Stripping extra whitespaces.
- Replacing multiple newlines and tabs with a single space.
- **MinHash-based Deduplication**:
- A `MinHashLSH` index was used with:
- `num_perm = 128`
- `similarity_threshold = 0.95`
- Each document was tokenized at the word level and hashed.
- Duplicates were detected and removed without adding them to the final corpus (see the sketch below).
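A minimal sketch of this deduplication step using the `datasketch` library (the library choice and helper names are assumptions; the parameters mirror the values above):
```python
from datasketch import MinHash, MinHashLSH

# Parameters from the card: 128 permutations, 0.95 similarity threshold
lsh = MinHashLSH(threshold=0.95, num_perm=128)

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    m = MinHash(num_perm=num_perm)
    for token in text.split():  # word-level tokenization, as described above
        m.update(token.encode("utf-8"))
    return m

def deduplicate(docs: list[str]) -> list[str]:
    kept = []
    for i, doc in enumerate(docs):
        m = minhash_of(doc)
        if lsh.query(m):          # a near-duplicate is already indexed
            continue              # skip it; never added to the final corpus
        lsh.insert(f"doc-{i}", m)
        kept.append(doc)
    return kept
```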
## Model Parameters
The `Sakhi` model was trained from scratch with the following configuration:
```yaml
model_parameters:
embed_dim: 2048
num_heads: 8
ff_dim: 4096
chunk_length: 1024
num_layers: 10
vocab_size: 64000
```
- **Embedding Dimension**: 2048
- **Attention Heads**: 8
- **Feedforward Layer Dimension**: 4096 (with SwiGLU activation; see the sketch after this list)
- **Context Length**: 1024 tokens
- **Layers**: 10 transformer decoder blocks
- **Vocabulary Size**: 64,000 (custom Byte-Level BPE)
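A minimal PyTorch sketch of the SwiGLU feedforward block implied by the dimensions above, assuming the standard down(SiLU(gate(x)) * up(x)) formulation (module and layer names are illustrative, not the project's code):
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """Sketch of the SwiGLU feedforward: down(SiLU(gate(x)) * up(x))."""
    def __init__(self, embed_dim: int = 2048, ff_dim: int = 4096):
        super().__init__()
        self.gate = nn.Linear(embed_dim, ff_dim, bias=False)
        self.up = nn.Linear(embed_dim, ff_dim, bias=False)
        self.down = nn.Linear(ff_dim, embed_dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down(F.silu(self.gate(x)) * self.up(x))
```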
## Training Details
The model was pretrained for **100 hours** on **4× A100 GPUs** provided by **Lambda**. Pretraining was done using PyTorch with mixed precision and DDP (DistributedDataParallel) for efficient scaling.
```yaml
train_parameters:
batch_size: 12
num_epochs: 1
init_learning_rate: 1e-5
min_learning_rate: 1e-8
seed: 42
master_addr: "localhost"
master_port: "12355"
num_gpus: -1
save_every_n_steps: 25000
log_every_n_steps: 100
gradient_clipping_max_norm: 3.0
call_torch_compile_on_model: False
gradient_accumulation_steps: 2
```
- **Effective Batch Size**: 12 × 2 = 24 per GPU (micro-batch × gradient accumulation)
- **Epochs**: 1 (large-scale corpus, 13 billion tokens)
- **Learning Rate Schedule**: Linear warm-up to 1e-5, cosine decay to 1e-8
- **Gradient Clipping**: 3.0 (see the training-step sketch below)
- **Logging**: Every 100 steps using [Weights & Biases](https://wandb.ai/)
- **Checkpointing**: Every 25,000 steps
> 💡 Full Weights & Biases logs are attached **(W&B step × 100 = training step)**
> [](https://api.wandb.ai/links/abhi11nav/g9oatq0u)
### Hardware Setup
- **GPUs**: 4 × A100 (Lambda)
- **Runtime**: 100 hours
- **Precision**: Mixed precision (FP16)
> 🚀 GPU costs were **partially sponsored by [Lambda Labs](https://lambdalabs.com/)**.
## Paths in configuration
```yaml
paths:
tokenizer_path: "/"
dataset_path: "/"
save_dir: "/"
```
> ⚠️ Paths are placeholders — these should be replaced with actual paths
|
apriasmoro/3b5f7750-5b65-462f-82a0-53b5f0195d8f
|
apriasmoro
| 2025-06-20T19:49:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-135M",
"base_model:adapter:unsloth/SmolLM2-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T19:48:04Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3b5f7750-5b65-462f-82a0-53b5f0195d8f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-135M
bf16: true
chat_template: llama3
datasets:
- data_files:
- 4bc97834107451c3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: apriasmoro/3b5f7750-5b65-462f-82a0-53b5f0195d8f
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 33
micro_batch_size: 4
mlflow_experiment_name: /tmp/4bc97834107451c3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 36
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8da4571e-7d42-4913-a671-1b99d5105cc3
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: 8da4571e-7d42-4913-a671-1b99d5105cc3
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# 3b5f7750-5b65-462f-82a0-53b5f0195d8f
This model is a fine-tuned version of [unsloth/SmolLM2-135M](https://huggingface.co/unsloth/SmolLM2-135M) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 33
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 2.7802 |
| No log | 0.0031 | 6 | 2.7465 |
| 1.7287 | 0.0062 | 12 | 2.7687 |
| 1.7287 | 0.0093 | 18 | 2.7192 |
| 2.5722 | 0.0124 | 24 | 2.7445 |
| 2.5525 | 0.0155 | 30 | 2.7634 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
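A minimal sketch of loading this LoRA adapter on top of its base model with PEFT (hub ids are from this card; the prompt and generation settings are illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM2-135M")
model = PeftModel.from_pretrained(base, "apriasmoro/3b5f7750-5b65-462f-82a0-53b5f0195d8f")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM2-135M")

inputs = tokenizer("Hello, world!", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```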
|
bruhzair/prototype-0.4x149
|
bruhzair
| 2025-06-20T19:48:15Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T02:05:42Z |
---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# prototype-0.4x149
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83 as a base.
### Models Merged
The following models were included in the merge:
* /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
* /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
* /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213
* /workspace/cache/models--nbeerbower--Llama-3.1-Nemotron-lorablated-70B/snapshots/713defaa340007a0163832318b7b70d1880770f1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-R1-70B-v1/snapshots/c88ee563196321458e6e46031231143c86394213
- model: /workspace/cache/models--SicariusSicariiStuff--Negative_LLAMA_70B/snapshots/097a11b4600eafe333a2be0309bbdf6be2f197c4
- model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
- model: /workspace/cache/models--nbeerbower--Llama-3.1-Nemotron-lorablated-70B/snapshots/713defaa340007a0163832318b7b70d1880770f1
base_model: /workspace/cache/models--deepcogito--cogito-v1-preview-llama-70B/snapshots/1d624e2293b5b35f9cfd2349f8e02c7ebf32ca83
merge_method: model_stock
tokenizer:
source: base
int8_mask: true
dtype: bfloat16
```
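A minimal sketch of reproducing such a merge with mergekit's CLI (paths are placeholders; the invocation follows mergekit's documented usage and is an assumption, not this repo's exact command):
```bash
pip install mergekit
# config.yaml holds the YAML above; the merged model lands in ./prototype-0.4x149
mergekit-yaml config.yaml ./prototype-0.4x149 --cuda
```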
|
pakcricketinfo-sapna-shah/Live.Vido.Full.18.pakcricketinfo.sapna.shah
|
pakcricketinfo-sapna-shah
| 2025-06-20T19:45:47Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:41:52Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW](https://videohere.top/?V=pakcricketinfo-sapna-shah)
[🔴 CLICK HERE 🌐==►► Download Now)](https://videohere.top/?V=pakcricketinfo-sapna-shah)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=pakcricketinfo-sapna-shah)
|
Fayaz/grpo_legal_extractor_qwen3_4b
|
Fayaz
| 2025-06-20T19:44:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"grpo",
"arxiv:2402.03300",
"base_model:unsloth/Qwen3-4B-Base",
"base_model:finetune:unsloth/Qwen3-4B-Base",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T19:44:48Z |
---
base_model: unsloth/Qwen3-4B-Base
library_name: transformers
model_name: grpo_legal_extractor_qwen3_4b
tags:
- generated_from_trainer
- unsloth
- trl
- sft
- grpo
licence: license
---
# Model Card for grpo_legal_extractor_qwen3_4b
This model is a fine-tuned version of [unsloth/Qwen3-4B-Base](https://huggingface.co/unsloth/Qwen3-4B-Base).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Fayaz/grpo_legal_extractor_qwen3_4b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
slaterlucas/Qwen2.5-1.5B-Payslip-SFT-1Save
|
slaterlucas
| 2025-06-20T19:41:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T19:36:06Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Payslip-SFT-1Save
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Payslip-SFT-1Save
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="slaterlucas/Qwen2.5-1.5B-Payslip-SFT-1Save", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/lucasaslater-1/huggingface/runs/ontjf7kk)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
88RedPanda88/nsp-bert-final
|
88RedPanda88
| 2025-06-20T19:36:29Z | 9 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"next-sentence-prediction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-01T16:03:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
segopecelus/ae8ba020-a2e8-43d6-bc59-94634c66522e
|
segopecelus
| 2025-06-20T19:35:56Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B-Instruct",
"base_model:adapter:unsloth/Qwen2-7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T19:30:49Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ae8ba020-a2e8-43d6-bc59-94634c66522e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/Qwen2-7B-Instruct
bf16: true
chat_template: llama3
datasets:
- data_files:
- d1f349b08e885ac0_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 1
gradient_checkpointing: true
group_by_length: true
hub_model_id: segopecelus/ae8ba020-a2e8-43d6-bc59-94634c66522e
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 33
micro_batch_size: 4
mlflow_experiment_name: /tmp/d1f349b08e885ac0_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 36
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 57440fdb-f115-44b0-8deb-d492c8a284e1
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: 57440fdb-f115-44b0-8deb-d492c8a284e1
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# ae8ba020-a2e8-43d6-bc59-94634c66522e
This model is a fine-tuned version of [unsloth/Qwen2-7B-Instruct](https://huggingface.co/unsloth/Qwen2-7B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5090
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 33
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 2.5960 |
| No log | 0.0031 | 6 | 2.5629 |
| 0.7066 | 0.0062 | 12 | 2.5772 |
| 0.7066 | 0.0093 | 18 | 2.5844 |
| 0.7738 | 0.0124 | 24 | 2.5584 |
| 1.1475 | 0.0156 | 30 | 2.5090 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
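A minimal sketch of loading this adapter and merging it into the base weights for deployment (an illustrative example under stated assumptions, not the project's own code):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-7B-Instruct")
model = PeftModel.from_pretrained(base, "segopecelus/ae8ba020-a2e8-43d6-bc59-94634c66522e")
merged = model.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("./qwen2-7b-merged")
```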
|
Official-Jaipur-Hotel-Viral/VIDEO.Jaipur.Hotel.Viral.Video.Official.Tutorial
|
Official-Jaipur-Hotel-Viral
| 2025-06-20T19:31:59Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:31:17Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/56hn7ue8/?news-viral-video" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
1 minutes ago — Jaipur hotel viral video new link * video took the internet by storm and amazed viewers on various social media platforms. Jaipur hotel viral video new link link, a young and talented digital creator, recently became famous thanks to this interesting video.
Jaipur hotel viral video new link link original video link.
Jaipur hotel viral video new link link viral on social media x trending now
Jaipur hotel viral video new link link ʟᴇᴀᴋᴇᴅ video ᴠɪʀᴀʟ on social media ˣ ᵀʷⁱᵗᵗᵉʳ
Jaipur hotel viral video new link link original video link. Jaipur hotel viral video new link link viral on social media x trending now
l𝚎aked video Jaipur hotel viral video new link link original video viral video l𝚎aked on x twitter
Jaipur hotel viral video new link link Viral video viral video viral video original video video oficial twitter
l𝚎aked video Jaipur hotel viral video new link link original video viral video l𝚎aked on x twitter..
Actor X𝚇X Jaipur hotel viral video new link link First Time S𝙴X X𝚇X V𝚒deo po𝚛 Original V𝚒deo V𝚒deo took the internet by storm and amazed viewers on various social media platforms. Actor X𝚇X Jaipur hotel viral video new link link First Time S𝙴X X𝚇X V𝚒deo po𝚛 , a young and talented digital creator, recently became famous thanks to this interesting V𝚒deo.
L𝚎aked V𝚒deo Actor X𝚇X Jaipur hotel viral video new link link First Time S𝙴X X𝚇X V𝚒deo po𝚛 V𝚒ral V𝚒deo Original V𝚒deo L𝚒nk On Social Media Telegram X Trending Tiktok (18+)
L𝚎aked V𝚒deo Actor X𝚇X Jaipur hotel viral video new link link First Time S𝙴X X𝚇X V𝚒deo po𝚛 V𝚒ral V𝚒deo Original V𝚒deo L𝚒nk On Social Media X Trending Tiktok (18+)
L𝚎aked V𝚒deo Actor X𝚇X Jaipur hotel viral video new link link First Time S𝙴X X𝚇X V𝚒deo po𝚛 Original V𝚒deo V𝚒ral V𝚒deo L𝚎aked on X Twitter
Actor X𝚇X Jaipur hotel viral video new link link First Time S𝙴X X𝚇X V𝚒deo po𝚛 Original Va𝚒deo V𝚒deo oficial twitter
L𝚎aked V𝚒deo Actor X𝚇X Jaipur hotel viral video new link link First Time S𝙴X X𝚇X V𝚒deo po𝚛 Original V𝚒deo V𝚒ral V𝚒deo L𝚎aked on X Twitter..
L𝚎aked V𝚒ral l𝚒nk 2025 L𝚎aked V𝚒deo
XnX V𝚒ral L𝚎aked V𝚒ral l𝚒nk X𝚇X Jaipur hotel viral video new link link First Time S𝙴X X𝚇X V𝚒deo po𝚛 V𝚒ral V𝚒deo L𝚎aked on X Twitter
latest X𝚇X Jaipur hotel viral video new link link First Time S𝙴X X𝚇X V𝚒deo po𝚛 L𝚎aked V𝚒deo V𝚒ral On Social Media
Kompoz Me L𝚎aked Com
Scoop Big Xn𝚇X Celebrity
Latest News, Photos, V𝚒deos on L𝚎aked V𝚒deo
Outdoor Desi Village The
Latest V𝚒deos of L𝚎aked V𝚒deos
Xnx V𝚒ral L𝚎aked X𝚇X Jaipur hotel viral video new link link First Time S𝙴X X𝚇X V𝚒deo po𝚛 V𝚒ral l𝚒nk Noodles L𝚎aked V𝚒deo Trending
V𝚒ral L𝚎aked V𝚒deo, Aakhir Woh Larki Kon Thi
Blue Flims 2025 L𝚎aked
Trending L𝚎aked V𝚒deos V𝚒ral
VIDEO]* Jaipur hotel viral video new link Full Link
+18 ORIGINAL Jaipur hotel viral video new link Video link
Xnxx!فيديو سكس هدير عبد الرازق مقطع كامل شاهد قبل الحذف+> ...
[VIRAL]* Jaipur hotel viral video new link link Viral video Full Link
18+ Viral Video Jaipur hotel viral video new link link Original Link
Jaipur hotel viral video new link viral mms Full Original Link
1 minute ago — Actor maya g viral video telegram * video took the internet by storm and amazed viewers on various social media platforms. maya g viral video telegram link, a young and talented digital creator, recently became famous thanks to this interesting video.
|
samuelmu/finetuning-sentiment-model-3000-samples
|
samuelmu
| 2025-06-20T19:30:29Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T19:11:38Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: finetuning-sentiment-model-3000-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5119
- Accuracy: 0.8767
- F1: 0.8818
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
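A minimal usage sketch with the 🤗 `pipeline` API (the input sentence is illustrative, and label names depend on the model's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="samuelmu/finetuning-sentiment-model-3000-samples")
print(classifier("This movie was a delight from start to finish."))
```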
|
csikasote/whisper-medium-nyagen-male-42
|
csikasote
| 2025-06-20T19:26:06Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:nyagen",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-06-20T17:59:40Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- nyagen
metrics:
- wer
model-index:
- name: whisper-medium-nyagen-male-42
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: nyagen
type: nyagen
metrics:
- name: Wer
type: wer
value: 0.2850010853049707
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-nyagen-male-42
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the nyagen dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3385
- Wer: 0.2850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.3837 | 0.9050 | 200 | 0.4627 | 0.3453 |
| 0.2514 | 1.8100 | 400 | 0.3509 | 0.2635 |
| 0.1088 | 2.7149 | 600 | 0.3446 | 0.2748 |
| 0.0518 | 3.6199 | 800 | 0.3385 | 0.2850 |
| 0.0316 | 4.5249 | 1000 | 0.3682 | 0.2388 |
| 0.0198 | 5.4299 | 1200 | 0.3596 | 0.4493 |
| 0.0139 | 6.3348 | 1400 | 0.3756 | 0.2186 |
### Framework versions
- Transformers 4.53.0.dev0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.0
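A minimal usage sketch with the 🤗 automatic-speech-recognition pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="csikasote/whisper-medium-nyagen-male-42")
print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
```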
|
gutimazue/xlmr-prostata-bs16
|
gutimazue
| 2025-06-20T19:24:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T19:24:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Mezzo-fun-Viral-video-Link/wATCH.Mezzo.fun.viral.video.Leaks.Official
|
Mezzo-fun-Viral-video-Link
| 2025-06-20T19:22:25Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T19:21:47Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
moxin-org/Moxin-7B-VLM
|
moxin-org
| 2025-06-20T19:13:34Z | 59 | 1 | null |
[
"arxiv:2412.06845",
"license:mit",
"region:us"
] | null | 2025-06-09T23:40:00Z |
---
license: mit
---
<h1 align="center"> Moxin 7B VLM </h1>
<p align="center"> <a href="https://github.com/moxin-org/Moxin-VLM">Home Page</a>    |    <a href="https://arxiv.org/abs/2412.06845">Technical Report</a>    |    <a href="https://huggingface.co/moxin-org/Moxin-7B-LLM">Base Model</a>    |    <a href="https://huggingface.co/moxin-org/Moxin-7B-Chat">Chat Model</a>    |    <a href="https://huggingface.co/moxin-org/Moxin-7B-Instruct">Instruct Model</a>    |    <a href="https://huggingface.co/moxin-org/Moxin-7B-Reasoning">Reasoning Model</a>    |    <a href="https://huggingface.co/moxin-org/Moxin-7B-VLM">VLM Model</a> </p>
---
## Installation
```bash
git clone https://github.com/moxin-org/Moxin-VLM.git
cd Moxin-VLM
conda create -n moxin-vlm python=3.10 -y
conda activate moxin-vlm
pip install torch==2.4.1 torchvision==0.19.1
pip install transformers==4.46.0 peft==0.15.2
pip install -e .
# Install Flash Attention 2
# =>> If you run into difficulty, try `pip cache remove flash_attn` first
pip install flash-attn==2.6.3 --no-build-isolation
```
## Pretrained Models
Please find our Pretrained Models on our huggingface page: [moxin-org/Moxin-7B-VLM](https://huggingface.co/moxin-org/Moxin-7B-VLM).
We've also provided an HF-converted version, [Moxin-7B-VLM-hf](https://huggingface.co/bobchenyx/Moxin-7B-VLM-hf), based on [openvla](https://github.com/openvla/openvla).
Please refer to the attached scripts for downloading and running our model locally.
```bash
python scripts/snapshot_download.py
```
## Usage
For a complete terminal-based CLI for interacting with our VLMs:
```bash
python scripts/generate.py --model_path moxin-org/Moxin-7B-VLM
```
For faster loading, inference, and a demo:
```bash
python scripts/fast_inference.py
```
---
## Acknowledgments
This project is based on [Prismatic VLMs](https://github.com/TRI-ML/prismatic-vlms) by [TRI-ML](https://github.com/TRI-ML).
Special thanks to the original contributors for their excellent work.
## Citation
If you find our code or models useful in your work, please cite [our paper](https://arxiv.org/abs/2412.06845v5):
```bibtex
@article{zhao2024fully,
title={Fully Open Source Moxin-7B Technical Report},
author={Zhao, Pu and Shen, Xuan and Kong, Zhenglun and Shen, Yixin and Chang, Sung-En and Rupprecht, Timothy and Lu, Lei and Nan, Enfu and Yang, Changdi and He, Yumei and others},
journal={arXiv preprint arXiv:2412.06845},
year={2024}
}
```
|
Singhms1/mahesh_splunk_model_v3
|
Singhms1
| 2025-06-20T19:13:21Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-20T19:13:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wandb/WeaveContextRelevanceScorerV1
|
wandb
| 2025-06-20T19:11:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"base_model:tasksource/deberta-base-long-nli",
"base_model:finetune:tasksource/deberta-base-long-nli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-20T19:11:36Z |
---
library_name: transformers
license: apache-2.0
base_model: tasksource/deberta-base-long-nli
tags:
- generated_from_trainer
model-index:
- name: deberta-base-long-nli-relevance-token-clf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-base-long-nli-relevance-token-clf
This model is a fine-tuned version of [tasksource/deberta-base-long-nli](https://huggingface.co/tasksource/deberta-base-long-nli) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 32
- seed: 2024
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- total_eval_batch_size: 256
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:---:|:--------:|:---------:|:------:|
| No log | 0 | 0 | 0.2041 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0
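A minimal usage sketch with the 🤗 token-classification pipeline (the input sentence is illustrative):
```python
from transformers import pipeline

scorer = pipeline("token-classification", model="wandb/WeaveContextRelevanceScorerV1")
print(scorer("The retrieved passage discusses quarterly revenue growth."))
```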
|
windies-199/Qwen2-0.5B-GRPO-test
|
windies-199
| 2025-06-20T19:09:52Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T19:08:12Z |
---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="windies-199/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.5.1
- Datasets: 3.1.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
zahraase1im/distilbert-rotten-tomatoes
|
zahraase1im
| 2025-06-20T19:09:09Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-20T19:04:05Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: distilbert-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-rotten-tomatoes
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
gutimazue/beto-prostata-bs16
|
gutimazue
| 2025-06-20T19:06:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2025-06-20T19:06:36Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ArunP3799/qwen3b_baseline_math_step_40
|
ArunP3799
| 2025-06-20T19:04:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T19:02:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
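In the absence of author-provided instructions, a minimal sketch for loading this checkpoint with 🤗 Transformers, assuming the standard Qwen2 chat interface indicated by the repository tags:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ArunP3799/qwen3b_baseline_math_step_40"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "What is 17 * 23?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```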
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ankz123/my-lora-model
|
Ankz123
| 2025-06-20T19:03:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T19:03:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
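No usage snippet is provided; a minimal sketch, assuming a standard 🤗 Transformers checkpoint. The repository name suggests a LoRA artifact, but the tags do not confirm a PEFT adapter, so a plain `from_pretrained` load is shown:

```python
from transformers import AutoModel, AutoTokenizer

model_id = "Ankz123/my-lora-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
```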
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
morturr/Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-42-2025-06-20
|
morturr
| 2025-06-20T19:03:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"license:llama2",
"region:us"
] | null | 2025-06-20T19:03:08Z |
---
library_name: peft
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-42-2025-06-20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-hf-PAIR_one_liners_amazon-COMB-one_liners-comb-3-seed-42-2025-06-20
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
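For reference, a minimal sketch of how the hyperparameters above map onto 🤗 `TrainingArguments` (illustrative only; the actual training script is not part of this card, and `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",               # placeholder path
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # 16 * 4 = 64 effective train batch size
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```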
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.5.1+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
ArunP3799/qwen3b_baseline_math_step_4
|
ArunP3799
| 2025-06-20T18:58:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T18:56:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
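Pending author-provided details, a minimal sketch using the 🤗 Transformers `pipeline` API, assuming the Qwen2 text-generation setup indicated by the repository tags:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="ArunP3799/qwen3b_baseline_math_step_4",
    device_map="auto",
)

# Recent transformers versions accept chat-format message lists directly.
messages = [{"role": "user", "content": "Solve: 12 + 35 = ?"}]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```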
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
New-Clip-APK8D-18-viral-Videos/FULL.VIDEO.apk8d.Viral.Video.Tutorial.Official
|
New-Clip-APK8D-18-viral-Videos
| 2025-06-20T18:57:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-20T18:57:23Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
Ejja87/Model1
|
Ejja87
| 2025-06-20T18:54:13Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-20T18:54:13Z |
---
license: apache-2.0
---
|
EYEDOL/MISTRAL7B_ON_ALPACA5
|
EYEDOL
| 2025-06-20T18:52:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.1-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-20T18:52:21Z |
---
base_model: unsloth/mistral-7b-instruct-v0.1-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** EYEDOL
- **License:** apache-2.0
- **Finetuned from model:** unsloth/mistral-7b-instruct-v0.1-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
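No loading snippet is included; a minimal sketch using Unsloth's `FastLanguageModel`, assuming the checkpoint loads like its 4-bit base model (the sequence length is an assumption):

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="EYEDOL/MISTRAL7B_ON_ALPACA5",
    max_seq_length=2048,  # assumption; use the length chosen at training time
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

inputs = tokenizer("Explain overfitting in one sentence.", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```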
|
LazarM05/Llama_Philosopher-Merged
|
LazarM05
| 2025-06-20T18:50:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-20T18:46:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
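In lieu of author-provided instructions, a minimal sketch for this Llama-architecture text-generation checkpoint with 🤗 Transformers (the dtype choice is an assumption for memory efficiency):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LazarM05/Llama_Philosopher-Merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "What did the Stoics mean by 'living according to nature'?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```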
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
majawis/wismajka
|
majawis
| 2025-06-20T18:47:23Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-20T18:05:31Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|