| modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-27 06:27:44) | downloads (int64, 0 – 223M) | likes (int64, 0 – 11.7k) | library_name (string, 533 classes) | tags (list, 1 – 4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-27 06:27:36) | card (string, 11 – 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
bharathsj/llama3.2-1b-v1-lse-new | bharathsj | 2025-06-17T11:30:16Z | 0 | 0 | null | ["safetensors", "llama", "license:apache-2.0", "region:us"] | null | 2025-06-17T11:28:13Z |
---
license: apache-2.0
---
|
Aeshp/deepseekfinetuned01 | Aeshp | 2025-06-17T11:28:47Z | 35 | 1 | transformers | ["transformers", "safetensors", "unsloth", "en", "arxiv:1910.09700", "base_model:deepseek-ai/DeepSeek-R1-0528", "base_model:finetune:deepseek-ai/DeepSeek-R1-0528", "license:mit", "endpoints_compatible", "region:us"] | null | 2025-06-11T07:21:50Z |
---
library_name: transformers
tags:
- unsloth
license: mit
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1-0528
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
milpu02/nekosamuraimix-xl | milpu02 | 2025-06-17T11:15:14Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "region:us"] | text-to-image | 2025-06-17T11:15:02Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Screenshot 2025-06-17 051446.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# Illustrious-XL-v1.0
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/milpu02/nekosamuraimix-xl/tree/main) them in the Files & versions tab.
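For reference, a minimal diffusers loading sketch (an assumption, not part of the original card; depending on the files in the repo, the LoRA weight filename may need to be passed explicitly via `weight_name`):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Minimal sketch: attach this LoRA to its SDXL base model (assumed usage).
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("milpu02/nekosamuraimix-xl")  # pass weight_name=... if the repo holds several files
image = pipe("a neko samurai in the rain").images[0]
image.save("sample.png")
```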
|
h34v7/Euro-DDXPv1.0-GGUF | h34v7 | 2025-06-17T11:14:52Z | 22 | 0 | null | ["gguf", "base_model:h34v7/Euro-DDXPv1.0", "base_model:quantized:h34v7/Euro-DDXPv1.0", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-06-09T09:33:00Z |
---
license: apache-2.0
base_model:
- h34v7/Euro-DDXPv1.0
---
|
LaaP-ai/donut-base-invoice-v1.13 | LaaP-ai | 2025-06-17T11:03:42Z | 0 | 0 | transformers | ["transformers", "safetensors", "vision-encoder-decoder", "image-text-to-text", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | image-text-to-text | 2025-06-17T10:52:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ujjawal077/llama3s-merged3 | ujjawal077 | 2025-06-17T11:03:36Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "base_model:AdaptLLM/finance-LLM-13B", "base_model:merge:AdaptLLM/finance-LLM-13B", "base_model:starmpcc/Asclepius-Llama2-13B", "base_model:merge:starmpcc/Asclepius-Llama2-13B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-17T11:01:23Z |
---
base_model:
- AdaptLLM/finance-LLM-13B
- starmpcc/Asclepius-Llama2-13B
library_name: transformers
tags:
- mergekit
- merge
---
# llama3s-merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [starmpcc/Asclepius-Llama2-13B](https://huggingface.co/starmpcc/Asclepius-Llama2-13B) as a base.
### Models Merged
The following models were included in the merge:
* [AdaptLLM/finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: starmpcc/Asclepius-Llama2-13B
dtype: bfloat16
merge_method: dare_ties
modules:
default:
slices:
- sources:
- layer_range: [0, 40]
model: AdaptLLM/finance-LLM-13B
parameters:
density: 0.53
weight: 0.6
- layer_range: [0, 40]
model: starmpcc/Asclepius-Llama2-13B
parameters:
density: 0.5
weight: 0.4
parameters:
int8_mask: 1.0
```
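For context, a config like this is typically applied with mergekit's `mergekit-yaml` CLI; a minimal sketch, assuming mergekit is installed and the YAML above is saved to `config.yml` (paths are illustrative):
```bash
pip install mergekit
# Produce the merged checkpoint from the YAML configuration above.
mergekit-yaml config.yml ./llama3s-merged --cuda
```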
|
tamewild/4b_v4_merged_e2 | tamewild | 2025-06-17T10:57:57Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-17T10:56:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
debisoft/mistral-nemo-minitron-8B-Instruct-thinking-function_calling-V0 | debisoft | 2025-06-17T10:57:34Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:nvidia/Mistral-NeMo-Minitron-8B-Instruct", "base_model:finetune:nvidia/Mistral-NeMo-Minitron-8B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-06-17T10:52:55Z |
---
base_model: nvidia/Mistral-NeMo-Minitron-8B-Instruct
library_name: transformers
model_name: mistral-nemo-minitron-8B-Instruct-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for mistral-nemo-minitron-8B-Instruct-thinking-function_calling-V0
This model is a fine-tuned version of [nvidia/Mistral-NeMo-Minitron-8B-Instruct](https://huggingface.co/nvidia/Mistral-NeMo-Minitron-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="debisoft/mistral-nemo-minitron-8B-Instruct-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
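As a rough illustration of that procedure, a minimal TRL SFT sketch (the dataset name and hyperparameters here are placeholders, not the ones actually used for this model):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the card does not state which data was used.
train_dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="nvidia/Mistral-NeMo-Minitron-8B-Instruct",
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="mistral-nemo-minitron-8B-Instruct-thinking-function_calling-V0"),
)
trainer.train()
```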
### Framework versions
- TRL: 0.16.1
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Damarna/xat_interactiuu | Damarna | 2025-06-17T10:50:06Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-06-17T10:50:06Z |
---
license: apache-2.0
---
|
ujjawal077/llama3s-merged2 | ujjawal077 | 2025-06-17T10:45:40Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "arxiv:2311.03099", "base_model:AdaptLLM/finance-LLM-13B", "base_model:merge:AdaptLLM/finance-LLM-13B", "base_model:starmpcc/Asclepius-Llama2-13B", "base_model:merge:starmpcc/Asclepius-Llama2-13B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-06-17T10:42:27Z |
---
base_model:
- AdaptLLM/finance-LLM-13B
- starmpcc/Asclepius-Llama2-13B
library_name: transformers
tags:
- mergekit
- merge
---
# llama3s-merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [starmpcc/Asclepius-Llama2-13B](https://huggingface.co/starmpcc/Asclepius-Llama2-13B) as a base.
### Models Merged
The following models were included in the merge:
* [AdaptLLM/finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: starmpcc/Asclepius-Llama2-13B
dtype: bfloat16
merge_method: dare_ties
modules:
default:
slices:
- sources:
- layer_range: [0, 40]
model: AdaptLLM/finance-LLM-13B
parameters:
density: 0.53
weight: 0.6
- layer_range: [0, 40]
model: starmpcc/Asclepius-Llama2-13B
parameters:
density: 0.5
weight: 0.4
parameters:
int8_mask: 1.0
```
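A minimal loading sketch for the resulting merge (assumed usage; the card ships no inference example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged checkpoint like any other causal LM.
tokenizer = AutoTokenizer.from_pretrained("ujjawal077/llama3s-merged2")
model = AutoModelForCausalLM.from_pretrained(
    "ujjawal077/llama3s-merged2", torch_dtype="auto", device_map="auto"
)
```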
|
Damarna/xat_interactiu | Damarna | 2025-06-17T10:38:36Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-06-17T10:38:36Z |
---
license: apache-2.0
---
|
muhdnorian/model-qwen2.5-7b-fine-tuning-resume_jason_miller | muhdnorian | 2025-06-17T10:27:49Z | 0 | 0 | transformers | ["transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-06-17T10:26:36Z |
---
base_model: unsloth/qwen2.5-7b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** muhdnorian
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
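For reference, a minimal Unsloth loading sketch, assuming one starts from the base checkpoint named above (an assumption, not from the original card; the GGUF files in this repo are intended for llama.cpp-compatible runtimes):
```python
from unsloth import FastLanguageModel

# Minimal sketch: load the 4-bit base checkpoint named in this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/qwen2.5-7b-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to inference mode
```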
|
PKU-ML/G1-7B | PKU-ML | 2025-06-17T10:21:52Z | 121 | 2 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "graph", "chat", "conversational", "en", "dataset:PKU-ML/Erdos", "dataset:PKU-ML/Erdos-CoT", "arxiv:2505.18499", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-05-31T11:39:08Z |
---
license: apache-2.0
datasets:
- PKU-ML/Erdos
- PKU-ML/Erdos-CoT
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-7B-Instruct
pipeline_tag: text-generation
tags:
- graph
- chat
library_name: transformers
---
# G1-7B
## Introduction
G1 is a series of large language models, based on Qwen2.5-Instruct, trained on our benchmark [Erdos](https://huggingface.co/datasets/PKU-ML/Erdos) for solving graph reasoning tasks.
We apply Group Relative Policy Optimization (GRPO) for reinforcement learning, with supervised finetuning as a preliminary step.
G1 brings the following improvements:
- **Significant improvement on graph reasoning**: G1 models achieve up to 46% improvement over baselines on Erdős, with the 7B variant matching OpenAI's o3-mini and the 3B model surpassing Qwen2.5-72B-Instruct by notable margins.
- **Strong Generalization to unseen graph tasks**: G1 exhibits zero-shot generalization on unseen graph tasks, improving performance on *other graph reasoning benchmarks* (GraphWiz, GraphArena) and *real-world graphs* (Cora, PubMed).
- **NO Compromise on general reasoning**: Crucially, G1 preserves general reasoning ability (GSM8K, MATH, MMLU-Pro), proving its versatility.
**This repo contains the G1-7B model**, which has the following features:
- Type: Causal Language Models
- Training Stage: SFT & RL
- Architecture: the same with Qwen2.5-Instruct
- Number of Parameters: 7.62B
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [paper](https://arxiv.org/pdf/2505.18499) and [GitHub](https://github.com/PKU-ML/G1/tree/main).
## Requirements
The model is trained based on Qwen/Qwen2.5-7B-Instruct. The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
## Quickstart
Below is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
INSTRUCTION_TEMPLATE = """
{instruction}
Solve the above problem efficiently and clearly. The last line of your response should be of the following format: 'Therefore, the final answer is: $\\boxed{{ANSWER}}$. I hope it is correct' (without quotes) where ANSWER is just the final number or expression that solves the problem. Think step by step before answering.
""".strip()
model_name = "PKU-ML/G1-7B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "The task is to determine the degree centrality of a node in the graph.\n\n"\
"Degree centrality for a node is the fraction of nodes it is connected to.\n\n"\
"Here is an undirected graph containing nodes from 1 to 15. The edges are: (1, 15), (15, 11), (2, 3), (2, 6), (3, 6), (3, 7), (6, 7), (6, 8), (7, 8), (7, 14), (4, 10), (10, 5), (10, 12), (8, 14), (8, 9), (12, 11), (12, 13).\n\n"\
"Question: What is the degree centrality of node 2 in the graph?\n\n"\
"You need to format your answer as a float number."
messages = [
{"role": "user", "content": INSTRUCTION_TEMPLATE.format(instruction=prompt)}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=4096,
top_p=0.95,
top_k=30,
temperature=0.6
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
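As a quick sanity check on the example prompt above (an aside, not part of the original card), the expected answer can be computed directly with networkx, since degree centrality is simply degree divided by (n - 1):
```python
import networkx as nx

# Edges from the example prompt; the graph has 15 nodes.
edges = [(1, 15), (15, 11), (2, 3), (2, 6), (3, 6), (3, 7), (6, 7), (6, 8),
         (7, 8), (7, 14), (4, 10), (10, 5), (10, 12), (8, 14), (8, 9),
         (12, 11), (12, 13)]
G = nx.Graph()
G.add_edges_from(edges)
# Node 2 touches 2 of the 14 other nodes: 2 / 14 ≈ 0.1429.
print(round(nx.degree_centrality(G)[2], 4))
```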
## Evaluation & Performance
Detailed evaluation results are reported in this [paper](https://arxiv.org/pdf/2505.18499).
## Citation
If you find our work helpful, feel free to cite us.
```
@article{guo2025g1,
title={G1: Teaching LLMs to Reason on Graphs with Reinforcement Learning},
author={Guo, Xiaojun and Li, Ang and Wang, Yifei and Jegelka, Stefanie and Wang, Yisen},
journal={arXiv preprint arXiv:2505.18499},
year={2025}
}
```
|
lilas12/bert-finetuned-ner | lilas12 | 2025-06-17T10:21:25Z | 9 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2025-06-14T11:52:01Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9381528768031836
- name: Recall
type: recall
value: 0.9522046449007069
- name: F1
type: f1
value: 0.9451265347030819
- name: Accuracy
type: accuracy
value: 0.9869753340790016
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0620
- Precision: 0.9382
- Recall: 0.9522
- F1: 0.9451
- Accuracy: 0.9870
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0763 | 1.0 | 1756 | 0.0741 | 0.8936 | 0.9288 | 0.9109 | 0.9793 |
| 0.0362 | 2.0 | 3512 | 0.0648 | 0.9316 | 0.9488 | 0.9401 | 0.9857 |
| 0.0236 | 3.0 | 5268 | 0.0620 | 0.9382 | 0.9522 | 0.9451 | 0.9870 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Trelis/orpheus-3b-0.1-ft-lora-ft_20250617_101017-merged | Trelis | 2025-06-17T10:18:05Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/orpheus-3b-0.1-ft", "base_model:finetune:unsloth/orpheus-3b-0.1-ft", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-generation | 2025-06-17T10:15:28Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tomaarsen/splade-mpnet-base-miriad-2e-5-lq-5e-6-lc | tomaarsen | 2025-06-17T10:15:04Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "mpnet", "sparse-encoder", "sparse", "splade", "generated_from_trainer", "dataset_size:100000", "loss:SpladeLoss", "loss:SparseMultipleNegativesRankingLoss", "loss:FlopsLoss", "feature-extraction", "en", "dataset:tomaarsen/miriad-4.4M-split", "arxiv:1908.10084", "arxiv:2205.04733", "arxiv:1705.00652", "arxiv:2004.05665", "base_model:microsoft/mpnet-base", "base_model:finetune:microsoft/mpnet-base", "license:apache-2.0", "model-index", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us"] | feature-extraction | 2025-06-17T10:14:47Z |
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- sparse-encoder
- sparse
- splade
- generated_from_trainer
- dataset_size:100000
- loss:SpladeLoss
- loss:SparseMultipleNegativesRankingLoss
- loss:FlopsLoss
base_model: microsoft/mpnet-base
widget:
- text: "He does it right, but there are times that he doesn't (Joana) Let's go there\
\ and pee? Because she does not want to wear a diaper, she rips off her diaper\
\ (Filomena). The family caregiver may understand this action as a \"pang\" and\
\ \"tantrum\", and \"forget\" that these episodes are part of the clinical picture\
\ of dementia. Conflicts related to incontinence and other difficult-to-manage\
\ symptoms eventually lead to a variety of interpretations, and past history of\
\ the emotional relationship between the elderly and the family caregiver can\
\ cause older emotional issues to surface again in these episodes.\n\n With psycho-functional\
\ limitations, new demands arise that can be distressing for those who care because\
\ of affective involvement. Subjective constructions are fundamental elements\
\ in upkeeping the relationship of care 10 .\n\n Besides the psychological aspect\
\ involved in the loss of identity and the specific cognitive aspects of dementia,\
\ some behavioral and psychiatric changes are important even in the consultation\
\ with the ESF professionals: psychotic symptoms, agitation and aggression, mood\
\ swings, disinhibited behavior and euphoria, apathy and insomnia. Some studies\
\ [11] [12] [13] pointed out the significant association between the presence\
\ of apathy and a faster cognitive and functional decline in these patients. Another\
\ very relevant situation regarding the appearance of neuropsychiatric symptoms\
\ is the association of these symptoms with the institutionalization and shorter\
\ patient survival. They also showed that the highest Neuropsychiatric Inventory\
\ (NPI) score was signifi-cantly associated with more severe cognitive impairment,\
\ greater caregiver distress, and higher cost, but was not associated with a formal\
\ diagnosis of dementia performed by the primary care physician.\n\n Changed behaviors\
\ and even risky behaviors, such as turning on the gas switch and not turning\
\ off, stirring in pots on a hot stove, or ingestion of liquids or toxic materials\
\ are situations in the face of neuropsychiatric manifestations in dementia. Filomena\
\ reports several neuropsychiatric symptoms of her husband. She compares his behavior\
\ to that of children who explore the environment to discover the cause and effect\
\ of things and the sensations obtained by the senses. Her role in this context\
\ resembles that of a mother trying to prevent the child from getting hurt: He\
\ lights up the gas switch, he's just like a child, sometimes he starts to eat\
\ the slipper, I have to get it out of his mouth.\n\n Hallucination is another\
\ neuropsychiatric symptom described by family caregivers. Joana reports that\
\ when the husband talks to people who have died, the family members feel fear\
\ and distance themselves. Filomena has fun when her mother speaks with those\
\ who have died: \"She talks to those who have passed away, she sends the dog\
\ out, which does not exist\". Each family caregiver experiences the symptoms\
\ presented by the dementia in a unique way, and ways to address and interpret\
\ this phenomenon and give meaning to their experience.\n\n The negative development\
\ of dementia perceived by Celina, Filomena, Maria, Teresa and Joana show that\
\ the disease follows a course that transcends the biological event itself. The\
\ dementia process evidences psychological and sociocultural constructions permeated\
\ by meanings and interpretations according to those who live and those who maintain\
\ interpersonal relationships with the elderly person with dementia.\n\n In the\
\ discourse of family caregivers, seniors with dementia have aggressive behaviors\
\ such as agitation, spitting, cursing, clawing, throwing objects, revealing a\
\ level of aggression that can impact the feelings and interpretations produced\
\ during the care routine. Freud 14 affirms that human instincts are of two types:\
\ Those who tend to preserve and unite, which we call 'erotic' [...] with a deliberate\
\ expansion of the popular conception of 'sexuality'; and those who tend to destroy\
\ and kill, which we group as an aggressive or destructive instinct. All actions\
\ in human life involve the confluence of these two instincts of preservation\
\ and destruction. The ideal situation for life in society would be the dominance\
\ of reason over the instinctual life controlling destructive impulses, which\
\ is utopian. In this perspective, aggressiveness is inherent in the human condition.\n\
\n In seniors with dementia with a declining psychological realm of the Self,\
\ the progressive loss of identity and the repercussion of cognitive decline,\
\ an actual decline in the rational realm of psychic life emerges. This decline\
\ refers to the cerebral aspect of inhibitory control and social cognition, showing\
\ that the emergence of aggressive behaviors is related to the biological component.\
\ The declining reason turns its demands and needs into instinctual acts and more\
\ basic reflexes, and can produce a continuous imbalance in the expression between\
\ the instincts of preservation and aggression.\n\n Aggressiveness can be triggered\
\ by situations of frustration, when they do not get what they want, when they\
\ are afraid or consider some humiliating situation, when they are exposed to\
\ environmental overstimulation or feel any physical pain or side effects from\
\ medication."
- text: "Neurosurgery is of great interest to historians of medicine and technology\
\ because it is relatively young, because it developed in an era of journals and\
\ publications, because lines and traditions of training and mentorship are relatively\
\ clear, and because the technologies that enabled the evolution of the profession\
\ and acted as inflection points in the emergence of certain surgical approaches\
\ and procedures are at once well documented and remarkably unambiguous. To the\
\ extent that is the case for neurosurgery as a whole, it is even more so for\
\ surgery of the skull base.\n\n To trace the history of skull base surgery along\
\ its full expanse is to begin with Horsley and pituitary tumors (unless one wants\
\ to start even earlier with the treatment of trigeminal neuralgia); to move to\
\ Cushing's work in the same arena (but also that of many others as well); to\
\ emphasize the impact of microsurgical techniques and new imaging modalities;\
\ to outline once radically innovative, but now widely practiced anatomical approaches\
\ to the skull base; to emphasize the importance of team approaches; to discuss\
\ emerging therapeutic strategy as well as instrumentation and techniques; to\
\ acknowledge the importance of advances in neuroanesthesia and the medical and\
\ perioperative care of the neurosurgical patient; and to recognize the contributions\
\ of the many individuals who, over the past 25 years, have added to and furthered\
\ the field in these and other ways.\n\n It is not hard to point to leading individuals\
\ and important techniques. It is perhaps more difficult to frame them in a meaningful\
\ historical perspective because the work has occurred relatively recently, in\
\ the time frame historians call \"near history.\" Difficulties arise from both\
\ an evaluative and a nosological standpoint. For example, from an evaluative\
\ standpoint, how does one stratify the relative importance of corticosteroids,\
\ osmotic diuretics, and CSF drainage techniques and technologies in the control\
\ of intracranial pressure and the facilitation of exposure for base of skull\
\ surgery? How does one think about the idea of hybrid surgery and stereotactic\
\ radiation? What will be the long-term view of anatomical approaches to giant\
\ basilar aneurysms in the light of endovascular surgery? Have we reached a tipping\
\ point in the management of vestibular schwannomas, given the availability of\
\ and the outcomes associated with stereotactic radiosurgery?\n\n From a nosological\
\ standpoint, should we think about base of skull surgery in terms of anatomical\
\ approaches? One textbook that does just that starts with subfrontal approaches\
\ and then moves around the calvaria and down to the petrous and temporal region\
\ in a Cook's tour of exposure, in the tradition of Henry's Extensile Exposure\
\ and comparable surgical classics. 1, 6 Other publications have explored a set\
\ of technologies. 5, 7, 10 Another focuses on the contribution of great men.\
\ 9 Many surgeons have written about specific particular pathologies at the skull\
\ base.\n\n Introduction their colleagues write about the premodern period. Elhadi\
\ and colleagues also comment on the introduction of radiography in early neurosurgery.\
\ Gross and Grossi and their colleagues concentrate on petrosal approaches; Schmitt\
\ and Jane on third ventriculostomy; and Chittiboina and colleagues on the history\
\ of a very simple but ubiquitous instrument, the Freer elevator, and its inventor.\
\ In contrast to the more comprehensive overviews written by Goodrich, Donald,\
\ and others, these essays concentrate on selected details. While it is important\
\ not to miss the forest for the trees, sometimes the trees are worth studying\
\ no less than the forest. \n\n The authors report no conflict of interest."
- text: 'How do neuromediators contribute to the pathogenesis of pruritus in AD?
'
- text: "Pericardial effusion (PE) is a life-threatening condition, as accumulation\
\ of fluid in the pericardial sac can lead to cardiac tamponade and fatal shock.\
\ 1, 2 PE is often associated with an underlying disease or condition, and the\
\ causes can vary widely. 3, 4 Pericardiocentesis performed by needle (with or\
\ without echoguidance), and various surgical procedures (including subxiphoid\
\ pericardial tube drainage, pericardial window performed through a left anterior\
\ thoracotomy, or video-assisted thoracoscopic surgery) can alleviate PE. 5 Our\
\ retrospective clinical experiences of treating PE with subxiphoid pericardiostomy\
\ are presented in this study.\n\n We reviewed the medical records of patients\
\ who underwent subxiphoid pericardiostomy to treat persistent symptomatic PE\
\ in our clinic between 1990 and 2000. Echocardiography (ECG) was used to diagnose\
\ PE and N Becit, A Özyazicioglu, M Ceviz et al.\n\n determine the size of the\
\ effusion. A diastolic echo-free space of < 10 mm between the left ventricular\
\ posterior wall and pericardium was determined as mild PE, 10 -20 mm as moderate,\
\ and > 20 mm as severe PE. Patients with cardiac tamponade and/or moderate to\
\ severe PE were treated by subxiphoid pericardiostomy and tube drainage.\n\n\
\ Some patients with pre-operative tuberculosis were treated with an adult fourdrug\
\ regimen (isoniazid, 300 mg/day and rifampin, 600 mg/day for 12 months, streptomycin,\
\ 1 g/day for 2 months, and pyrazinamide, 2 g/day for 3 months) preoperatively.\
\ The effusion was drained after a 3-week course of anti-tuberculosis therapy.\
\ In these, and patients diagnosed with tuberculous pericarditis, the tuberculosis\
\ therapy regimen was given for 12 months post-operatively.\n\n The technique\
\ used for subxiphoid pericardiostomy (described previously 3 ) was performed\
\ under general anaesthetic, or local anaesthesia and sedation. General anaesthesia\
\ was preferred in children and was induced with 1.5 mg/kg ketamine. Neuromuscular\
\ block was achieved with 0.1 mg/kg vecuronium, and anaesthesia maintained with\
\ 60% N 2 O, 40% O 2 and 0.5 -1.0% isoflurane. Local anaesthetic (2% lidocaine\
\ solution) was injected into the dermal and subdermal layers, and sedation and\
\ analgesia was provided by 1 mg/kg ketamine intravenously. A piece of anterior\
\ pericardium, approximately 2 -4 cm in diameter, was excised under direct vision\
\ and submitted for histopathological analysis. The pericardial cavity was decompressed\
\ and fluid samples were collected for culture and cytological analysis. To prevent\
\ acute cardiac dilatation during decompression of the pericardial cavity, intravenous\
\ digoxin was administered and the pericardial cavity was decompressed gradually.\n\
\n The pericardial cavity was examined under direct vision and/or by digital examination\
\ to detect any tumour or adhesions. Gentle digital lysis of adhesions and opening\
\ of loculations were performed as needed, to enhance satisfactory drainage. A\
\ soft chest tube was placed in the pericardial cavity, lateral to the right ventricle,\
\ after pericardiotomy for post-operative drainage. It was connected to an underwater\
\ sealed system, and was removed when fluid drainage ceased.\n\n Patients with\
\ mild haemorrhagic effusion and cardiac tamponade, due to trauma or invasive\
\ cardiac interventions, were considered haemodynamically unstable and unsuitable\
\ for surgical subxiphoid pericardiostomy, even under local anaesthetic. These\
\ patients underwent pericardiocentesis in the intensive care unit, which provided\
\ immediate relief. Subxiphoid pericardiostomy was performed later if haemorrhagic\
\ PE persisted. Patients were followed, with physical examinations and ECG, in\
\ the outpatient clinic for at least 1 year.\n\n Numerical results are given as\
\ mean ยฑ SD. Fisher's exact test was used to compare proportions between groups\
\ (comparison of the rates of recurrence and constriction between patient groups\
\ with uraemic pericarditis, tuberculous pericarditis and non-tuberculous bacterial\
\ pericarditis). The McNemar test was used for comparison of proportions within\
\ one group (to assess the significance of rates of recurrence and constriction\
\ in patients with tuberculous pericarditis). Statistical differences were considered\
\ significant if P < 0.05."
- text: "Henry M. Blumberg, MD In this issue of Infection Control and Hospital Epidemiology,\
\ a potpourri of tuberculosis (TB)-related articles are being published. 1-7 Tuberculosisrelated\
\ issues have been an important focus for the past decade for those in infection\
\ control and hospital epidemiology, especially in urban areas where the large\
\ majority of TB cases occur, 8 but also, because of federal regulations, for\
\ those in low-endemic areas or areas where no TB cases occur (approximately half\
\ of the counties in the United States).\n\n The resurgence of TB beginning in\
\ the mid1980s in the United States (in large part, due to failure and underfunding\
\ of the public health infrastructure and to the epidemic of human immunodeficiency\
\ virus [HIV] infection) and outbreaks of TB have highlighted the risk of nosocomial\
\ transmission of TB. 9,10 These outbreaks affected both healthcare workers (HCWs)\
\ and patients. The fact that outbreaks in New York and Miami, among others, involved\
\ multidrug-resistant (MDR) strains that were associated with high morbidity and\
\ mortality among HIV-infected individuals punctuated the importance of effective\
\ TB infection control measures. Commingling of patients with unsuspected TB and\
\ those who were quite immunosuppressed led to amplification of nosocomial transmission.\
\ A decade ago, few institutions were prepared for the changing epidemiology of\
\ TB.\n\n Several recent studies have demonstrated that infection control measures\
\ are effective in preventing nosocomial transmission of TB, 11-13 and two reports\
\ in this issue, from institutions in Kentucky 1 and New York, 2 provide additional\
\ data on decreases in HCW tuberculin skin-test (TST) conversions following implementation\
\ of TB infection control measures. In most studies, multiple interventions (administrative\
\ controls, environmental controls, and respiratory protection) were initiated\
\ at approximately the same time, making it more difficult to identify the most\
\ crucial aspect of the program. The importance of TB infection control measures\
\ in contributing to the decline in TB cases in the United States, as well as\
\ the reduction in the number of MDR-TB cases in New York City, often has been\
\ understated. Increased federal funding for TB control activities and expansion\
\ of directly observed therapy clearly are important in efforts to prevent TB,\
\ but the initial decline in TB cases and in MDR TB in the United States beginning\
\ in 1993 likely was due, in large part, to interruption of TB transmission within\
\ healthcare facilities. Unfortunately, increased funding for TB control in the\
\ United States in the last 5 years often has not trickled down to inner-city\
\ hospitals, which frequently are the first line in the battle against TB.\n\n\
\ From our experience and that of others, it appears clear that administrative\
\ controls are the most important component of a TB infection control program.\
\ At Grady Memorial Hospital in Atlanta, we were able to decrease TB exposure\
\ episodes markedly and concomitantly to decrease HCW TST conversions after implementing\
\ an expanded respiratory isolation policy. 11 We continue to isolate appropriately\
\ approximately 95% of those subsequently diagnosed with TB. We were able to reduce\
\ TST conver-sion rates markedly during a period of time in which we had isolation\
\ rooms that would be considered suboptimal by Centers for Disease Control and\
\ Prevention (CDC) guidelines 14 (rooms that were under negative pressure but\
\ had less than six air changes per hour) and were using submicron masks. Implementation\
\ of better-engineered isolation rooms (>12 air changes per hour) with the completion\
\ of renovations to the hospital may have put us in better compliance with regulatory\
\ agencies and made the staff feel more secure, but has had little impact on further\
\ reducing low rates of HCW TST conversions. In addition, the termination of outbreaks\
\ and reduction of TST conversion rates at several institutions took place before\
\ introduction of National Institute for Occupational Safety and Health-approved\
\ masks and fit testing. 2,15,16 United States healthcare institutions are required\
\ by regulatory mandates to develop a \"respiratory protection program\" (including\
\ fit testing), which can be time-consuming, expensive, and logistically difficult.\
\ 17 Data published to date suggest that the impact of formal fit testing on proper\
\ mask use is small. 18 These federal mandates also have turned some well-meaning\
\ (trying to comply fully with the Occupational Safety and Health Administration\
\ [OSHA] regulations) but misguided infection control practitioners into \"facial\
\ hair police.\" These types of processes divert time, effort, and resources away\
\ from what truly is effective in preventing nosocomial transmission of TB, as\
\ well as from other important infection control activities such as preventing\
\ nosocomial bloodstream infections or transmission of highly resistant pathogens\
\ such as vancomycin-resistant Enterococcus or preparing for the onslaught of\
\ vancomycin-resistant Staphylococcus aureus. At a time when US healthcare institutions\
\ are under enormous pressure due to healthcare reform, market forces, and managed\
\ care, it is essential that federal regulatory agencies look carefully at scientific\
\ data when issuing regulations."
datasets:
- tomaarsen/miriad-4.4M-split
pipeline_tag: feature-extraction
library_name: sentence-transformers
metrics:
- dot_accuracy@1
- dot_accuracy@3
- dot_accuracy@5
- dot_accuracy@10
- dot_precision@1
- dot_precision@3
- dot_precision@5
- dot_precision@10
- dot_recall@1
- dot_recall@3
- dot_recall@5
- dot_recall@10
- dot_ndcg@10
- dot_mrr@10
- dot_map@100
- query_active_dims
- query_sparsity_ratio
- corpus_active_dims
- corpus_sparsity_ratio
co2_eq_emissions:
emissions: 196.23895298915153
energy_consumed: 0.504857070427092
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 1.484
hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: MPNet-base trained on MIRIAD question-passage tuples
results:
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: miriad eval
type: miriad_eval
metrics:
- type: dot_accuracy@1
value: 0.917
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.963
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.969
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.98
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.917
name: Dot Precision@1
- type: dot_precision@3
value: 0.32099999999999995
name: Dot Precision@3
- type: dot_precision@5
value: 0.1938
name: Dot Precision@5
- type: dot_precision@10
value: 0.09800000000000002
name: Dot Precision@10
- type: dot_recall@1
value: 0.917
name: Dot Recall@1
- type: dot_recall@3
value: 0.963
name: Dot Recall@3
- type: dot_recall@5
value: 0.969
name: Dot Recall@5
- type: dot_recall@10
value: 0.98
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9509329680619819
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9414055555555555
name: Dot Mrr@10
- type: dot_map@100
value: 0.9422311263243918
name: Dot Map@100
- type: query_active_dims
value: 72.48699951171875
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9976254791000846
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 291.5419921875
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9904497005212599
name: Corpus Sparsity Ratio
- task:
type: sparse-information-retrieval
name: Sparse Information Retrieval
dataset:
name: miriad test
type: miriad_test
metrics:
- type: dot_accuracy@1
value: 0.9
name: Dot Accuracy@1
- type: dot_accuracy@3
value: 0.953
name: Dot Accuracy@3
- type: dot_accuracy@5
value: 0.961
name: Dot Accuracy@5
- type: dot_accuracy@10
value: 0.974
name: Dot Accuracy@10
- type: dot_precision@1
value: 0.9
name: Dot Precision@1
- type: dot_precision@3
value: 0.31766666666666665
name: Dot Precision@3
- type: dot_precision@5
value: 0.19220000000000004
name: Dot Precision@5
- type: dot_precision@10
value: 0.09740000000000001
name: Dot Precision@10
- type: dot_recall@1
value: 0.9
name: Dot Recall@1
- type: dot_recall@3
value: 0.953
name: Dot Recall@3
- type: dot_recall@5
value: 0.961
name: Dot Recall@5
- type: dot_recall@10
value: 0.974
name: Dot Recall@10
- type: dot_ndcg@10
value: 0.9387955628253912
name: Dot Ndcg@10
- type: dot_mrr@10
value: 0.9273035714285714
name: Dot Mrr@10
- type: dot_map@100
value: 0.9283432155352948
name: Dot Map@100
- type: query_active_dims
value: 73.08399963378906
name: Query Active Dims
- type: query_sparsity_ratio
value: 0.9976059226378685
name: Query Sparsity Ratio
- type: corpus_active_dims
value: 293.2669982910156
name: Corpus Active Dims
- type: corpus_sparsity_ratio
value: 0.9903931929671761
name: Corpus Sparsity Ratio
---
# MPNet-base trained on MIRIAD question-passage tuples
This is a [SPLADE Sparse Encoder](https://www.sbert.net/docs/sparse_encoder/usage/usage.html) model finetuned from [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) on the [miriad-4.4M-split](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split) dataset using the [sentence-transformers](https://www.SBERT.net) library. It maps sentences & paragraphs to a 30527-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
## Model Details
### Model Description
- **Model Type:** SPLADE Sparse Encoder
- **Base model:** [microsoft/mpnet-base](https://huggingface.co/microsoft/mpnet-base) <!-- at revision 6996ce1e91bd2a9c7d7f61daec37463394f73f09 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 30527 dimensions
- **Similarity Function:** Dot Product
- **Training Dataset:**
- [miriad-4.4_m-split](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Sparse Encoder Documentation](https://www.sbert.net/docs/sparse_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sparse Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=sparse-encoder)
### Full Model Architecture
```
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 512, 'do_lower_case': False}) with MLMTransformer model: MPNetForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30527})
)
```
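In words: the MLM head scores every input token against the full 30527-term vocabulary, and `SpladePooling` max-pools those scores over the sequence, so each output dimension is the weight of a single vocabulary term. Assuming the standard SPLADE formulation, the weight of term $j$ is $w_j = \max_i \log(1 + \mathrm{ReLU}(\text{logit}_{ij}))$.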
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("tomaarsen/splade-mpnet-base-miriad-2e-5-lq-5e-6-lc")
# Run inference
queries = [
"How have infection control measures been effective in preventing nosocomial transmission of TB?\n",
]
documents = [
'Henry M. Blumberg, MD In this issue of Infection Control and Hospital Epidemiology, a potpourri of tuberculosis (TB)-related articles are being published. 1-7 Tuberculosisrelated issues have been an important focus for the past decade for those in infection control and hospital epidemiology, especially in urban areas where the large majority of TB cases occur, 8 but also, because of federal regulations, for those in low-endemic areas or areas where no TB cases occur (approximately half of the counties in the United States).\n\n The resurgence of TB beginning in the mid1980s in the United States (in large part, due to failure and underfunding of the public health infrastructure and to the epidemic of human immunodeficiency virus [HIV] infection) and outbreaks of TB have highlighted the risk of nosocomial transmission of TB. 9,10 These outbreaks affected both healthcare workers (HCWs) and patients. The fact that outbreaks in New York and Miami, among others, involved multidrug-resistant (MDR) strains that were associated with high morbidity and mortality among HIV-infected individuals punctuated the importance of effective TB infection control measures. Commingling of patients with unsuspected TB and those who were quite immunosuppressed led to amplification of nosocomial transmission. A decade ago, few institutions were prepared for the changing epidemiology of TB.\n\n Several recent studies have demonstrated that infection control measures are effective in preventing nosocomial transmission of TB, 11-13 and two reports in this issue, from institutions in Kentucky 1 and New York, 2 provide additional data on decreases in HCW tuberculin skin-test (TST) conversions following implementation of TB infection control measures. In most studies, multiple interventions (administrative controls, environmental controls, and respiratory protection) were initiated at approximately the same time, making it more difficult to identify the most crucial aspect of the program. The importance of TB infection control measures in contributing to the decline in TB cases in the United States, as well as the reduction in the number of MDR-TB cases in New York City, often has been understated. Increased federal funding for TB control activities and expansion of directly observed therapy clearly are important in efforts to prevent TB, but the initial decline in TB cases and in MDR TB in the United States beginning in 1993 likely was due, in large part, to interruption of TB transmission within healthcare facilities. Unfortunately, increased funding for TB control in the United States in the last 5 years often has not trickled down to inner-city hospitals, which frequently are the first line in the battle against TB.\n\n From our experience and that of others, it appears clear that administrative controls are the most important component of a TB infection control program. At Grady Memorial Hospital in Atlanta, we were able to decrease TB exposure episodes markedly and concomitantly to decrease HCW TST conversions after implementing an expanded respiratory isolation policy. 11 We continue to isolate appropriately approximately 95% of those subsequently diagnosed with TB. We were able to reduce TST conver-sion rates markedly during a period of time in which we had isolation rooms that would be considered suboptimal by Centers for Disease Control and Prevention (CDC) guidelines 14 (rooms that were under negative pressure but had less than six air changes per hour) and were using submicron masks. 
Implementation of better-engineered isolation rooms (>12 air changes per hour) with the completion of renovations to the hospital may have put us in better compliance with regulatory agencies and made the staff feel more secure, but has had little impact on further reducing low rates of HCW TST conversions. In addition, the termination of outbreaks and reduction of TST conversion rates at several institutions took place before introduction of National Institute for Occupational Safety and Health-approved masks and fit testing. 2,15,16 United States healthcare institutions are required by regulatory mandates to develop a "respiratory protection program" (including fit testing), which can be time-consuming, expensive, and logistically difficult. 17 Data published to date suggest that the impact of formal fit testing on proper mask use is small. 18 These federal mandates also have turned some well-meaning (trying to comply fully with the Occupational Safety and Health Administration [OSHA] regulations) but misguided infection control practitioners into "facial hair police." These types of processes divert time, effort, and resources away from what truly is effective in preventing nosocomial transmission of TB, as well as from other important infection control activities such as preventing nosocomial bloodstream infections or transmission of highly resistant pathogens such as vancomycin-resistant Enterococcus or preparing for the onslaught of vancomycin-resistant Staphylococcus aureus. At a time when US healthcare institutions are under enormous pressure due to healthcare reform, market forces, and managed care, it is essential that federal regulatory agencies look carefully at scientific data when issuing regulations.',
'Drug Reaction with Eosinophilia and Systemic Symptoms (DRESS) syndrome is a severe and potentially life-threatening hypersensitivity reaction caused by exposure to certain medications (Phillips et al., 2011; Bocquet et al., 1996) . It is extremely heterogeneous in its manifestation but has characteristic delayed-onset cutaneous and multisystem features with a protracted natural history. The reaction typically starts with a fever, followed by widespread skin eruption of variable nature. This progresses to inflammation of internal organs such as hepatitis, pneumonitis, myocarditis and nephritis, and haematological abnormalities including eosinophilia and atypical lymphocytosis (Kardaun et al., 2013; Cho et al., 2017) .\n\n DRESS syndrome is most commonly classified according to the international scoring system developed by the RegiSCAR group (Kardaun et al., 2013) . RegiSCAR accurately defines the syndrome by considering the major manifestations, with each feature scored between โ1 and 2, and 9 being the maximum total number of points. According to this classification, a score of < 2 means no case, 2-3 means possible case, 4-5 means probable case, and 6 or above means definite DRESS syndrome. Table 1 gives an overview of the RegiSCAR scoring system. DRESS syndrome usually develops 2 to 6 weeks after exposure to the causative drug, with resolution of symptoms after drug withdrawal in the majority of cases (Husain et al., 2013a) . Some patients require supportive treatment with corticosteroids, although there is a lack of evidence surrounding the most effective dose, route and duration of the therapy (Adwan, 2017) . Although extremely rare, with an estimated population risk of between 1 and 10 in 10,000 drug exposures, it is significant due to its high mortality rate, at around 10% (Tas and The pathogenesis of DRESS syndrome remains largely unknown. Current evidence suggests that patients may be genetically predisposed to this form of hypersensitivity, with a superimposed risk resulting from Human Herpes Virus (HHV) exposure and subsequent immune reactivation (Cho et al., 2017; Husain et al., 2013a) . In fact, the serological detection of HHV-6 has even been proposed as an additional diagnostic marker for DRESS syndrome (Shiohara et al., 2007) . Other potential risk factors identified are family history (Sullivan and Shear, 2001; Pereira De Silva et al., 2011) and concomitant drug use, particularly antibiotics . DRESS syndrome appears to occur in patients of any age, with patient demographics from several reviews finding age ranges between 6 and 89 years (Picard et al., 2010; Kano et al., 2015; Cacoub et al., 2013) . DRESS syndrome was first described as an adverse reaction to antiepileptic therapy, but has since been recognised as a complication of an extremely wide range of medications (Adwan, 2017) . In rheumatology, it has been classically associated with allopurinol and sulfasalazine, but has also been documented in association with many other drugs including leflunomide, hydroxychloroquine, febuxostat and NSAIDs (Adwan, 2017) . Recent evidence has also identified a significant risk of DRESS syndrome with strontium ranelate use (Cacoub et al., 2013) . Thus far, that is the only anti-osteoporotic drug associated with DRESS syndrome, although there are various cases of other adverse cutaneous reactions linked to anti-osteoporotic medications, ranging from benign maculopapular eruption to Stevens-Johnson syndrome (SJS) and Toxic Epidermal Necrolysis (TEN) . 
Denosumab, an antiresorptive RANK ligand (RANKL) inhibitor licensed for osteoporosis, is currently known to be associated with some dermatological manifestations including dermatitis, eczema, pruritus and, less commonly, cellulitis (Prolia, n.d.).\n\n We hereby describe the first documented case of DRESS syndrome associated with denosumab treatment.\n\n The patient is a 76-year old female with osteoporosis and a background of alcoholic fatty liver disease and lower limb venous insufficiency. Osteoporosis was first diagnosed in 2003 and treated with risedronate, calcium and vitamin D, until 2006. While on this treatment, the patient sustained T12 and L3 fractures, the latter treated with kyphoplasty, and was therefore deemed a non-responder to risedronate.',
"The regulation of these events is known to go awry in certain pathologies especially in diseases associated with neurodegeneration. Mitochondrial fission helps to enhance the number of mitochondria, which can be efficiently distributed to each corner of neuronal cells and thus helps them to maintain their energy demands. Mitochondrial fission is highly essential during the periods of energy starvation to produce new, efficient mitochondrial energy generating systems. However, enhanced fission associated with bioenergetic crisis causes BAX foci formation on mitochondrial membrane and thus causes mitochondrial outer membrane permeabilization (MOMP), releasing cytochrome c and other pro apoptotic mediators into cytosol, results in apoptosis [93] . Impairment in the mitochondrial dynamics has also been observed in case of inflammatory neuropathies and oxaliplatin induced neuropathy [94] . Excessive nitric oxide is known to cause s-nitrosylation of dynamin related protein-1 (Drp-1), and increases the mitochondrial fission [95, 96] . Tumor necrosis factor-ฮฑ (TNF-ฮฑ) reported to inhibit the kinensin 1 protein, and thus impairs trafficking by halting mitochondrial movement along axons [97] . In addition to impaired dynamics, aggregates of abnormal shaped, damaged mitochondria are responsible for aberrant mitochondrial trafficking, which contributes to axonal degeneration observed in various peripheral neuropathies [81] .\n\n Autophagy is the discerning cellular catabolic process responsible for recycling the damaged proteins/ organelles in the cells [98] . Mitophagy is a selective autophagic process involved in recycling of damaged mitochondria and helps in supplying the constituents for mitochondrial biogenesis [99] . Excessive accumulation and impaired clearance of dysfunctional mitochondria are known to be observed in various disorders associated with oxidative stress [100] . Oxidative damage to Atg 4, a key component involved in mitophagy causes impaired autophagosome formation and clearance of damaged mitochondria [101] . Loss in the function of molecular chaperons and associated accumulation of damaged proteins are known to be involved in various peripheral neuropathies including trauma induced neuropathy [102, 103] . A model of demyelinating neuropathy corresponds to the accumulation of improperly folded myelin protein PMP-22 is also being observed recently [104, 105] .\n\n Mitochondrial dysfunction and associated disturbances are well connected to neuroinflammatory changes that occur in various neurodegenerative diseases [106] . Dysfunctional mitochondria are also implicated in several pathologies such as cardiovascular and neurodegenerative diseases. Several mitochondrial toxins have been found to inhibit the respiration in microglial cells and also inhibit IL-4 induced alternative anti inflammatory response and thus potentiates neuroinflammation [107] . Mitochondrial ROS are well identified to be involved in several inflammatory pathways such as NF-ฮบB, MAPK activation [108] . Similarly, the pro inflammatory mediators released as a result of an inflammatory episode found to be interfere with the functioning of the mitochondrial electron transport chain and thus compromise ATP production [109] . TNF-ฮฑ is known to inhibit the complex I, IV of ETC and decreases energy production. Nitric oxide (NO) is a potent inhibitor of cytochrome c oxidase (complex IV) and similarly IL-6 is also known to enhance mitochondrial generation of superoxide [110] . 
Mitochondrial dysfunction initiates inflammation by increased formation of complexes of damaged mitochondrial parts and cytoplasmic pattern recognition receptors (PRR's). The resulting inflammasome directed activation of interleukin-1ฮฒ production, which starts an immune response and leads to Fig. (4) . Mitotoxicity in peripheral neuropathies: Various pathophysiological insults like hyperglycemic, chemotherapeutic and traumatic injury to the peripheral nerves results in mitochondrial dysfunction through enhanced generation of ROS induced biomolecular damage and bioenergetic crisis. Following the nerve injury accumulation of mitochondria occurs resulting in the release of mtDNA & formyl peptides into circulation which acts as Death associated molecular patterns (DAMP's). These are recognized by immune cells as foreign bodies and can elicit a local immune/inflammatory response. Interaction between inflammatory mediators and structural proteins involved in mitochondrial trafficking will cause impairment in mitochondrial motility. Oxidative stress induced damage to the mt proteins like Atg4, Parkin etc cause insufficient mitophagy. Excess nitrosative stress also results in excessive mt fission associated with apoptosis. In addition, mtDNA damage impairs its transcription and reduces mitochondrial biogenesis. Ca 2+ dyshomeostasis, loss in mitochondrial potential and bioenergetic crisis cause neuronal death via apoptosis/necrosis. All these modifications cause defects in ultra structure, physiology and trafficking of mitochondria resulting in loss of neuronal function producing peripheral neuropathy.",
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30527] [3, 30527]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[38.6532, 2.9277, 0.1620]])
```
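As a rough illustration of what these sparse vectors contain, the active dimensions can be mapped back to vocabulary tokens. This is a minimal sketch, assuming the encoder returns torch tensors and exposes its tokenizer as `model.tokenizer` (both are assumptions about the API):
```python
import torch

# Continuing from the snippet above. Densify in case the encoder
# returns torch sparse tensors (an assumption about the return type).
q = query_embeddings[0]
if q.is_sparse:
    q = q.to_dense()

# The largest dimensions are the vocabulary terms SPLADE expanded
# the query into, together with their learned weights.
weights, indices = torch.topk(q, k=10)
for weight, idx in zip(weights.tolist(), indices.tolist()):
    token = model.tokenizer.convert_ids_to_tokens(int(idx))  # assumed attribute
    print(f"{token}: {weight:.2f}")
```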
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Sparse Information Retrieval
* Datasets: `miriad_eval` and `miriad_test`
* Evaluated with [<code>SparseInformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sparse_encoder/evaluation.html#sentence_transformers.sparse_encoder.evaluation.SparseInformationRetrievalEvaluator)
| Metric | miriad_eval | miriad_test |
|:----------------------|:------------|:------------|
| dot_accuracy@1 | 0.917 | 0.9 |
| dot_accuracy@3 | 0.963 | 0.953 |
| dot_accuracy@5 | 0.969 | 0.961 |
| dot_accuracy@10 | 0.98 | 0.974 |
| dot_precision@1 | 0.917 | 0.9 |
| dot_precision@3 | 0.321 | 0.3177 |
| dot_precision@5 | 0.1938 | 0.1922 |
| dot_precision@10 | 0.098 | 0.0974 |
| dot_recall@1 | 0.917 | 0.9 |
| dot_recall@3 | 0.963 | 0.953 |
| dot_recall@5 | 0.969 | 0.961 |
| dot_recall@10 | 0.98 | 0.974 |
| **dot_ndcg@10** | **0.9509** | **0.9388** |
| dot_mrr@10 | 0.9414 | 0.9273 |
| dot_map@100 | 0.9422 | 0.9283 |
| query_active_dims | 72.487 | 73.084 |
| query_sparsity_ratio | 0.9976 | 0.9976 |
| corpus_active_dims | 291.542 | 293.267 |
| corpus_sparsity_ratio | 0.9904 | 0.9904 |
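The sparsity figures follow directly from the active-dimension counts: with a 30527-dimensional vocabulary, `sparsity_ratio = 1 - active_dims / 30527`, e.g. `1 - 72.487 / 30527 ≈ 0.9976` for miriad_eval queries. In other words, fewer than 0.3% of dimensions are active per encoded text.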
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### miriad-4.4_m-split
* Dataset: [miriad-4.4_m-split](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split) at [596b9ab](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split/tree/596b9ab305d52cb73644ed5b5004957c7bfaae40)
* Size: 100,000 training samples
* Columns: <code>question</code> and <code>passage_text</code>
* Approximate statistics based on the first 1000 samples:
| | question | passage_text |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 23.38 tokens</li><li>max: 71 tokens</li></ul> | <ul><li>min: 511 tokens</li><li>mean: 512.0 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| question | passage_text |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What factors may contribute to increased pulmonary conduit durability in patients who undergo the Ross operation compared to those with right ventricular outflow tract obstruction?<br></code> | <code>I n 1966, Ross and Somerville 1 reported the first use of an aortic homograft to establish right ventricle-to-pulmonary artery continuity in a patient with tetralogy of Fallot and pulmonary atresia. Since that time, pulmonary position homografts have been used in a variety of right-sided congenital heart lesions. Actuarial 5-year homograft survivals for cryopreserved homografts are reported to range between 55% and 94%, with the shortest durability noted in patients less than 2 years of age. 4 Pulmonary position homografts also are used to replace pulmonary autografts explanted to repair left-sided outflow disease (the Ross operation). Several factors may be likely to favor increased pulmonary conduit durability in Ross patients compared with those with right ventricular outflow tract obstruction, including later age at operation (allowing for larger homografts), more normal pulmonary artery architecture, absence of severe right ventricular hypertrophy, and more natural positioning of ...</code> |
| <code>How does MCAM expression in hMSC affect the growth and maintenance of hematopoietic progenitors?</code> | <code>After culture in a 3-dimensional hydrogel-based matrix, which constitutes hypoxic conditions, MCAM expression is lost. Concordantly, Tormin et al. demonstrated that MCAM is down-regulated under hypoxic conditions. 10 Furthermore, it was shown by others and our group that oxygen tension causes selective modification of hematopoietic cell and mesenchymal stromal cell interactions in co-culture systems as well as influence HSPC metabolism. [44] [45] [46] Thus, the observed differences between Sharma et al. and our data in HSPC supporting capacity of hMSC are likely due to the different culture conditions used. Further studies are required to clarify the influence of hypoxia in our model system. Altogether these findings provide further evidence for the importance of MCAM in supporting HSPC. Furthermore, previous reports have shown that MCAM is down-regulated in MSC after several passages as well as during aging and differentiation. 19, 47 Interestingly, MCAM overexpression in hMSC enhance...</code> |
| <code>What is the relationship between Fanconi anemia and breast and ovarian cancer susceptibility genes?<br></code> | <code>( 31 ) , of which 5% -10 % may be caused by genetic factors ( 32 ) , up to half a million of these patients may be at risk of secondary hereditary neoplasms. The historic observation of twofold to fi vefold increased risks of cancers of the ovary, thyroid, and connective tissue after breast cancer ( 33 ) presaged the later syndromic association of these tumors with inherited mutations of BRCA1, BRCA2, PTEN, and p53 ( 16 ) . By far the largest cumulative risk of a secondary cancer in BRCA mutation carriers is associated with cancer in the contralateral breast, which may reach a risk of 29.5% at 10 years ( 34 ) . The Breast Cancer Linkage Consortium ( 35 , 36 ) also documented threefold to fi vefold increased risks of subsequent cancers of prostate, pancreas, gallbladder, stomach, skin (melanoma), and uterus in BRCA2 mutation carriers and twofold increased risks of prostate and pancreas cancer in BRCA1 mutation carriers; these results are based largely on self-reported family history inf...</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 5e-06,
"lambda_query": 2e-05
}
```
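For context, `SpladeLoss` wraps the ranking loss with FLOPS sparsity regularizers on the query and document representations, weighted by the two lambdas above; the larger `lambda_query` pushes queries to be sparser than documents, consistent with the active-dimension counts in the evaluation section. A schematic of the objective, assuming the standard SPLADE/FLOPS formulation (see the FlopsLoss citation below):
```latex
\mathcal{L} = \mathcal{L}_{\text{rank}}
            + \lambda_{\text{query}} \cdot \mathrm{FLOPS}(Q)
            + \lambda_{\text{corpus}} \cdot \mathrm{FLOPS}(D),
\qquad
\mathrm{FLOPS}(X) = \sum_{j=1}^{30527} \Big( \frac{1}{N} \sum_{i=1}^{N} x_{ij} \Big)^{2}
```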
### Evaluation Dataset
#### miriad-4.4_m-split
* Dataset: [miriad-4.4_m-split](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split) at [596b9ab](https://huggingface.co/datasets/tomaarsen/miriad-4.4M-split/tree/596b9ab305d52cb73644ed5b5004957c7bfaae40)
* Size: 1,000 evaluation samples
* Columns: <code>question</code> and <code>passage_text</code>
* Approximate statistics based on the first 1000 samples:
| | question | passage_text |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 8 tokens</li><li>mean: 23.55 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 512 tokens</li><li>mean: 512.0 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| question | passage_text |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What are some hereditary cancer syndromes that can result in various forms of cancer?<br></code> | <code>Hereditary Cancer Syndromes, including Hereditary Breast and Ovarian Cancer (HBOC) and Lynch Syndrome (LS), can result in various forms of cancer due to germline mutations in cancer predisposition genes. While the major contributory genes for these syndromes have been identified and well-studied (BRCA1/ BRCA2 for HBOC and MSH2/MSH6/MLH1/PMS2/ EPCAM for LS), there remains a large percentage of associated cancer cases that are negative for germline mutations in these genes, including 80% of women with a personal or family history of breast cancer who are negative for BRCA1/2 mutations [1] . Similarly, between 30 and 50% of families fulfill stringent criteria for LS and test negative for germline mismatch repair gene mutations [2] . Adding complexity to these disorders is the significant overlap in the spectrum of cancers observed between various hereditary cancer syndromes, including many cancer susceptibility syndromes. Some that contribute to elevated breast cancer risk include Li-Frau...</code> |
| <code>How do MAK-4 and MAK-5 exert their antioxidant properties?<br></code> | <code>Hybrid F1 mice were injected with urethane (300 mg/kg) at 8 days of age. A group was then put on a MAK-supplemented diet, another group was fed a standard pellet diet. At 36 weeks of age the mice were sacrificed and the livers examined for the presence of tumors mouse (Panel A) and for the number of nodules per mouse (Panel B) (* p < 0.05, ** P < 0.001). Statistical analysis was performed by Two Way ANOVA Test followed by Post Hoc Bonferroni analysis. <br><br> We than measured the influence of the MAK-4+5 combination on the expression of the three liver-specific connexins (cx26, cx32, and cx43). The level of cx26 expression was similar in all the groups of mice treated with the MAK-supplemented diet and in the control (Figure 4, Panel A) . A significant, time-dependent increase in cx32 was observed in the liver of all the groups of MAK treated mice compared to the normal diet-fed controls. Cx32 expression increased 2-fold after 1 week of treatment, and 3-to 4-fold at 3 months (Figure 4, Pane...</code> |
| <code>What are the primary indications for a decompressive craniectomy, and what role does neurocritical care play in determining the suitability of a patient for this procedure?</code> | <code>Decompressive craniectomy is a valid neurosurgical strategy now a day as an alternative to control an elevated intracranial pressure (ICP) and controlling the risk of uncal and/or subfalcine herniation, in refractory cases to the postural, ventilator, and pharmacological measures to control it. The neurocritical care and the ICP monitorization are key determinants to identify and postulate the inclusion criteria to consider a patient as candidate to this procedure, as it is always considered a rescue surgical technique. Head trauma and ischemic or hemorrhagic cerebrovascular disease with progressive deterioration due to mass effect are some of the cases that may require a decompressive craniectomy with its different variants. However, this procedure per se can have complications described in the postcraniectomy syndrome and may occur in short, medium, or even long term.<br><br> 1,2 The paradoxical herniation is a condition in which there is a deviation of the midline with mass effect, even t...</code> |
* Loss: [<code>SpladeLoss</code>](https://sbert.net/docs/package_reference/sparse_encoder/losses.html#spladeloss) with these parameters:
```json
{
"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')",
"lambda_corpus": 5e-06,
"lambda_query": 2e-05
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `learning_rate`: 2e-05
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 4
- `per_device_eval_batch_size`: 4
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | miriad_eval_dot_ndcg@10 | miriad_test_dot_ndcg@10 |
|:-----:|:-----:|:-------------:|:---------------:|:-----------------------:|:-----------------------:|
| 0.032 | 800 | 311.9058 | - | - | - |
| 0.064 | 1600 | 10.9011 | - | - | - |
| 0.096 | 2400 | 2.3726 | - | - | - |
| 0.128 | 3200 | 0.4999 | - | - | - |
| 0.16 | 4000 | 0.1222 | 0.0420 | 0.9017 | - |
| 0.192 | 4800 | 0.0755 | - | - | - |
| 0.224 | 5600 | 0.0481 | - | - | - |
| 0.256 | 6400 | 0.0643 | - | - | - |
| 0.288 | 7200 | 0.0598 | - | - | - |
| 0.32 | 8000 | 0.0575 | 0.0210 | 0.9274 | - |
| 0.352 | 8800 | 0.0417 | - | - | - |
| 0.384 | 9600 | 0.0487 | - | - | - |
| 0.416 | 10400 | 0.0262 | - | - | - |
| 0.448 | 11200 | 0.0404 | - | - | - |
| 0.48 | 12000 | 0.0359 | 0.0163 | 0.9282 | - |
| 0.512 | 12800 | 0.0407 | - | - | - |
| 0.544 | 13600 | 0.0373 | - | - | - |
| 0.576 | 14400 | 0.0204 | - | - | - |
| 0.608 | 15200 | 0.0218 | - | - | - |
| 0.64 | 16000 | 0.0196 | 0.0045 | 0.9434 | - |
| 0.672 | 16800 | 0.0311 | - | - | - |
| 0.704 | 17600 | 0.0372 | - | - | - |
| 0.736 | 18400 | 0.029 | - | - | - |
| 0.768 | 19200 | 0.0319 | - | - | - |
| 0.8 | 20000 | 0.0352 | 0.0196 | 0.9392 | - |
| 0.832 | 20800 | 0.0257 | - | - | - |
| 0.864 | 21600 | 0.0339 | - | - | - |
| 0.896 | 22400 | 0.0211 | - | - | - |
| 0.928 | 23200 | 0.0197 | - | - | - |
| 0.96 | 24000 | 0.0228 | 0.0069 | 0.9514 | - |
| 0.992 | 24800 | 0.0161 | - | - | - |
| -1 | -1 | - | - | 0.9509 | 0.9388 |
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.505 kWh
- **Carbon Emitted**: 0.196 kg of CO2
- **Hours Used**: 1.484 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.1
- Datasets: 2.21.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### SpladeLoss
```bibtex
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
  author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
```
#### SparseMultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
#### FlopsLoss
```bibtex
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
  author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
sizzlebop/Qwen3-ReactJs-code-Q8_0-GGUF
|
sizzlebop
| 2025-06-17T10:09:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"code",
"ReactJS",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nirusanan/Qwen3-ReactJs-code",
"base_model:finetune:nirusanan/Qwen3-ReactJs-code",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-17T10:09:45Z |
---
library_name: transformers
tags:
- code
- ReactJS
- llama-cpp
- gguf-my-repo
language:
- en
base_model: nirusanan/Qwen3-ReactJs-code
base_model_relation: finetune
pipeline_tag: text-generation
---
# sizzlebop/Qwen3-ReactJs-code-Q8_0-GGUF
This model was converted to GGUF format from [`nirusanan/Qwen3-ReactJs-code`](https://huggingface.co/nirusanan/Qwen3-ReactJs-code) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nirusanan/Qwen3-ReactJs-code) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo sizzlebop/Qwen3-ReactJs-code-Q8_0-GGUF --hf-file qwen3-reactjs-code-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo sizzlebop/Qwen3-ReactJs-code-Q8_0-GGUF --hf-file qwen3-reactjs-code-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo sizzlebop/Qwen3-ReactJs-code-Q8_0-GGUF --hf-file qwen3-reactjs-code-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo sizzlebop/Qwen3-ReactJs-code-Q8_0-GGUF --hf-file qwen3-reactjs-code-q8_0.gguf -c 2048
```
|
pvaluedotone/bigbird-flight-2
|
pvaluedotone
| 2025-06-17T10:08:58Z | 30 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"big_bird",
"text-classification",
"autotrain",
"en",
"base_model:google/bigbird-roberta-base",
"base_model:finetune:google/bigbird-roberta-base",
"doi:10.57967/hf/5780",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-04T03:57:39Z |
---
library_name: transformers
tags:
- autotrain
- text-classification
base_model: google/bigbird-roberta-base
widget:
- text: Let me fly
license: mit
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---
# Big Bird Flight 2
Big Bird Flight 2 is a fine-tuned version of Google's BigBird model, optimised for long-text sentiment analysis. It improves on Big Bird Flight 1, recording a 16% gain in accuracy over its predecessor. Both models were trained on 2,598 flight review texts, each annotated with a 10-point ordinal sentiment rating ranging from 1 (extremely negative) to 10 (extremely positive).
Just like its predecessor, Big Bird 2 captures emotional gradients in text, offering richer sentiment analysis than conventional binary classification (e.g., positive vs. negative). This makes it particularly useful for applications requiring fine-grained sentiment understanding from lengthy or detailed customer feedback.
- Use case: text classification
- Sentiment class: 1 (extremely negative) to 10 (extremely positive)
# Model details
- Base model: google/bigbird-roberta-base
- Architecture: BigBirdForSequenceClassification
- Hidden size: 768
- Layers: 12 transformer blocks
- Attention type: block-sparse
- Max sequence length: 4096 tokens
- Number of classes: 10 (ordinal ratings from 1 = extremely negative to 10 = extremely positive)
# Training Summary
- Dataset: 2,598 airline passenger reviews.
- Labels: ordinal scale from 1 (extremely negative) to 10 (extremely positive).
- Loss function: cross-entropy (classification setup).
# Tokenizer
- Based on SentencePiece Unigram model.
- Uses a Metaspace tokenizer for subword splitting.
- Max tokenised input length was set to 1024 tokens during preprocessing.
# Use cases
- Analyse detailed customer reviews of flight experience.
- Replace coarse binary sentiment models with ordinal sentiment scales.
- Experiment with ordinal regression techniques in NLP.
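A minimal inference sketch using plain `transformers` (assumptions: the checkpoint loads through the standard Auto classes, and class indices 0-9 map to ratings 1-10; the review text is hypothetical):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("pvaluedotone/bigbird-flight-2")
model = AutoModelForSequenceClassification.from_pretrained("pvaluedotone/bigbird-flight-2")
model.eval()

# Truncation mirrors the 1024-token preprocessing limit noted above.
review = "The crew were friendly, but a four-hour delay and lost luggage ruined the trip."
inputs = tokenizer(review, truncation=True, max_length=1024, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

rating = logits.argmax(dim=-1).item() + 1  # assumed mapping: class 0 -> rating 1
print(f"Predicted sentiment rating: {rating}/10")
```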
# Citation
If you're using this model in your research or applications, I'd appreciate it if you could buy me a coffee through this citation.
Mat Roni, S. (2025). Big Bird Flight 2 for ordinal sentiment analysis [software]. Hugging Face. https://huggingface.co/pvaluedotone/bigbird-flight-2 DOI: https://doi.org/10.57967/hf/5780
## Validation metrics
- loss: 1.6761
- f1_macro: 0.2734
- f1_micro: 0.3093
- f1_weighted: 0.2814
- precision_macro: 0.2822
- precision_micro: 0.3093
- precision_weighted: 0.2911
- recall_macro: 0.3007
- recall_micro: 0.3093
- recall_weighted: 0.3093
- accuracy: 0.3093
|
greenkwd/lr0.0001_bs16_0616_1909
|
greenkwd
| 2025-06-17T09:59:21Z | 0 | 0 | null |
[
"safetensors",
"segformer",
"vision",
"image-segmentation",
"generated_from_trainer",
"base_model:nvidia/mit-b0",
"base_model:finetune:nvidia/mit-b0",
"license:other",
"region:us"
] |
image-segmentation
| 2025-06-17T09:59:16Z |
---
license: other
base_model: nvidia/mit-b0
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: lr0.0001_bs16_0616_1909
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lr0.0001_bs16_0616_1909
This model is a fine-tuned version of [nvidia/mit-b0](https://huggingface.co/nvidia/mit-b0) on the greenkwd/upwellingdetection_SST dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0525
- Mean Iou: 0.9540
- Mean Accuracy: 0.9751
- Overall Accuracy: 0.9825
- Accuracy Land: 0.9949
- Accuracy Upwelling: 0.9841
- Accuracy Not Upwelling: 0.9464
- Iou Land: 0.9902
- Iou Upwelling: 0.9631
- Iou Not Upwelling: 0.9087
- Dice Macro: 0.9761
- Dice Micro: 0.9825
## Model description
More information needed
## Intended uses & limitations
More information needed
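While usage details are pending, a minimal inference sketch with the standard SegFormer classes may help (assumptions: the repo bundles an image processor config, the three classes follow the land / upwelling / not-upwelling metrics above, and the input filename is hypothetical):
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

repo = "greenkwd/lr0.0001_bs16_0616_1909"
processor = SegformerImageProcessor.from_pretrained(repo)  # assumed to be bundled
model = SegformerForSemanticSegmentation.from_pretrained(repo)
model.eval()

image = Image.open("sst_tile.png").convert("RGB")  # hypothetical SST image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # per-pixel class ids
```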
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Land | Accuracy Upwelling | Accuracy Not Upwelling | Iou Land | Iou Upwelling | Iou Not Upwelling | Dice Macro | Dice Micro |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:-------------:|:------------------:|:----------------------:|:--------:|:-------------:|:-----------------:|:----------:|:----------:|
| 1.0825 | 0.4 | 20 | 1.0832 | 0.2083 | 0.3966 | 0.3236 | 0.0025 | 0.5717 | 0.6156 | 0.0024 | 0.4864 | 0.1362 | 0.2997 | 0.3236 |
| 0.8743 | 0.8 | 40 | 0.8897 | 0.5858 | 0.7459 | 0.7505 | 0.7347 | 0.7823 | 0.7208 | 0.7346 | 0.6904 | 0.3323 | 0.7209 | 0.7505 |
| 0.6363 | 1.2 | 60 | 0.6193 | 0.6977 | 0.8158 | 0.8501 | 0.8837 | 0.8887 | 0.6750 | 0.8830 | 0.7701 | 0.4399 | 0.8063 | 0.8501 |
| 0.4967 | 1.6 | 80 | 0.4543 | 0.7462 | 0.8490 | 0.8823 | 0.9330 | 0.8961 | 0.7179 | 0.9262 | 0.7947 | 0.5176 | 0.8431 | 0.8823 |
| 0.4218 | 2.0 | 100 | 0.3824 | 0.7926 | 0.8787 | 0.9094 | 0.9542 | 0.9241 | 0.7579 | 0.9466 | 0.8333 | 0.5978 | 0.8766 | 0.9094 |
| 0.3799 | 2.4 | 120 | 0.3334 | 0.8156 | 0.8992 | 0.9197 | 0.9584 | 0.9179 | 0.8214 | 0.9527 | 0.8469 | 0.6473 | 0.8929 | 0.9197 |
| 0.3258 | 2.8 | 140 | 0.2989 | 0.8289 | 0.9073 | 0.9265 | 0.9686 | 0.9176 | 0.8356 | 0.9604 | 0.8540 | 0.6723 | 0.9017 | 0.9265 |
| 0.2941 | 3.2 | 160 | 0.2684 | 0.8481 | 0.9147 | 0.9369 | 0.9743 | 0.9413 | 0.8284 | 0.9664 | 0.8739 | 0.7041 | 0.9140 | 0.9369 |
| 0.3038 | 3.6 | 180 | 0.2340 | 0.8684 | 0.9250 | 0.9466 | 0.9797 | 0.9552 | 0.8403 | 0.9701 | 0.8935 | 0.7416 | 0.9267 | 0.9466 |
| 0.2533 | 4.0 | 200 | 0.1953 | 0.8807 | 0.9341 | 0.9517 | 0.9732 | 0.9659 | 0.8632 | 0.9696 | 0.9061 | 0.7665 | 0.9344 | 0.9517 |
| 0.284 | 4.4 | 220 | 0.1711 | 0.8880 | 0.9336 | 0.9556 | 0.9853 | 0.9693 | 0.8461 | 0.9724 | 0.9121 | 0.7794 | 0.9387 | 0.9556 |
| 0.2563 | 4.8 | 240 | 0.1593 | 0.8947 | 0.9406 | 0.9580 | 0.9753 | 0.9768 | 0.8697 | 0.9719 | 0.9185 | 0.7938 | 0.9428 | 0.9580 |
| 0.2402 | 5.2 | 260 | 0.1456 | 0.9012 | 0.9418 | 0.9610 | 0.9840 | 0.9771 | 0.8642 | 0.9753 | 0.9221 | 0.8061 | 0.9465 | 0.9610 |
| 0.221 | 5.6 | 280 | 0.1428 | 0.9016 | 0.9486 | 0.9603 | 0.9700 | 0.9756 | 0.9003 | 0.9683 | 0.9274 | 0.8091 | 0.9469 | 0.9603 |
| 0.2061 | 6.0 | 300 | 0.1276 | 0.9109 | 0.9503 | 0.9646 | 0.9826 | 0.9753 | 0.8930 | 0.9759 | 0.9294 | 0.8275 | 0.9523 | 0.9646 |
| 0.2344 | 6.4 | 320 | 0.1170 | 0.9164 | 0.9527 | 0.9670 | 0.9861 | 0.9761 | 0.8959 | 0.9772 | 0.9339 | 0.8381 | 0.9554 | 0.9670 |
| 0.2138 | 6.8 | 340 | 0.1153 | 0.9039 | 0.9412 | 0.9624 | 0.9826 | 0.9869 | 0.8541 | 0.9772 | 0.9238 | 0.8106 | 0.9481 | 0.9624 |
| 0.195 | 7.2 | 360 | 0.1060 | 0.9152 | 0.9510 | 0.9666 | 0.9858 | 0.9789 | 0.8882 | 0.9770 | 0.9334 | 0.8352 | 0.9547 | 0.9666 |
| 0.1927 | 7.6 | 380 | 0.1132 | 0.9061 | 0.9453 | 0.9629 | 0.9894 | 0.9707 | 0.8758 | 0.9737 | 0.9281 | 0.8165 | 0.9495 | 0.9629 |
| 0.2171 | 8.0 | 400 | 0.1037 | 0.9140 | 0.9593 | 0.9652 | 0.9853 | 0.9531 | 0.9395 | 0.9777 | 0.9280 | 0.8361 | 0.9540 | 0.9652 |
| 0.241 | 8.4 | 420 | 0.1164 | 0.9184 | 0.9588 | 0.9673 | 0.9800 | 0.9711 | 0.9252 | 0.9764 | 0.9353 | 0.8434 | 0.9566 | 0.9673 |
| 0.2041 | 8.8 | 440 | 0.0961 | 0.9215 | 0.9584 | 0.9688 | 0.9827 | 0.9755 | 0.9171 | 0.9781 | 0.9375 | 0.8489 | 0.9583 | 0.9688 |
| 0.183 | 9.2 | 460 | 0.0935 | 0.9187 | 0.9598 | 0.9674 | 0.9874 | 0.9594 | 0.9326 | 0.9778 | 0.9336 | 0.8446 | 0.9567 | 0.9674 |
| 0.1884 | 9.6 | 480 | 0.0947 | 0.9168 | 0.9498 | 0.9675 | 0.9889 | 0.9818 | 0.8787 | 0.9766 | 0.9362 | 0.8376 | 0.9556 | 0.9675 |
| 0.2219 | 10.0 | 500 | 0.0902 | 0.9178 | 0.9634 | 0.9667 | 0.9820 | 0.9547 | 0.9535 | 0.9780 | 0.9323 | 0.8430 | 0.9562 | 0.9667 |
| 0.1747 | 10.4 | 520 | 0.0941 | 0.9140 | 0.9475 | 0.9664 | 0.9832 | 0.9896 | 0.8697 | 0.9784 | 0.9318 | 0.8317 | 0.9540 | 0.9664 |
| 0.1589 | 10.8 | 540 | 0.0838 | 0.9209 | 0.9544 | 0.9688 | 0.9821 | 0.9861 | 0.8949 | 0.9783 | 0.9374 | 0.8469 | 0.9579 | 0.9688 |
| 0.1718 | 11.2 | 560 | 0.0810 | 0.9256 | 0.9648 | 0.9703 | 0.9859 | 0.9628 | 0.9457 | 0.9804 | 0.9388 | 0.8577 | 0.9606 | 0.9703 |
| 0.2022 | 11.6 | 580 | 0.0919 | 0.9163 | 0.9640 | 0.9662 | 0.9860 | 0.9452 | 0.9609 | 0.9812 | 0.9275 | 0.8402 | 0.9554 | 0.9662 |
| 0.1704 | 12.0 | 600 | 0.0822 | 0.9259 | 0.9663 | 0.9702 | 0.9834 | 0.9624 | 0.9531 | 0.9800 | 0.9390 | 0.8586 | 0.9608 | 0.9702 |
| 0.1941 | 12.4 | 620 | 0.0748 | 0.9250 | 0.9558 | 0.9708 | 0.9918 | 0.9794 | 0.8963 | 0.9791 | 0.9431 | 0.8527 | 0.9602 | 0.9708 |
| 0.1833 | 12.8 | 640 | 0.1009 | 0.9045 | 0.9596 | 0.9612 | 0.9912 | 0.9253 | 0.9621 | 0.9825 | 0.9144 | 0.8167 | 0.9485 | 0.9612 |
| 0.1887 | 13.2 | 660 | 0.0759 | 0.9299 | 0.9673 | 0.9722 | 0.9885 | 0.9625 | 0.9510 | 0.9832 | 0.9416 | 0.8648 | 0.9630 | 0.9722 |
| 0.1531 | 13.6 | 680 | 0.0775 | 0.9279 | 0.9590 | 0.9721 | 0.9930 | 0.9760 | 0.9082 | 0.9826 | 0.9435 | 0.8575 | 0.9618 | 0.9721 |
| 0.2303 | 14.0 | 700 | 0.0798 | 0.9231 | 0.9553 | 0.9701 | 0.9823 | 0.9896 | 0.8940 | 0.9799 | 0.9410 | 0.8483 | 0.9591 | 0.9701 |
| 0.1556 | 14.4 | 720 | 0.0736 | 0.9262 | 0.9536 | 0.9718 | 0.9928 | 0.9879 | 0.8802 | 0.9811 | 0.9445 | 0.8531 | 0.9609 | 0.9718 |
| 0.1615 | 14.8 | 740 | 0.0712 | 0.9364 | 0.9668 | 0.9753 | 0.9886 | 0.9783 | 0.9334 | 0.9841 | 0.9504 | 0.8748 | 0.9666 | 0.9753 |
| 0.1501 | 15.2 | 760 | 0.0708 | 0.9324 | 0.9619 | 0.9739 | 0.9941 | 0.9762 | 0.9153 | 0.9819 | 0.9490 | 0.8663 | 0.9644 | 0.9739 |
| 0.1596 | 15.6 | 780 | 0.0666 | 0.9376 | 0.9669 | 0.9758 | 0.9898 | 0.9790 | 0.9319 | 0.9847 | 0.9510 | 0.8772 | 0.9673 | 0.9758 |
| 0.1343 | 16.0 | 800 | 0.0673 | 0.9375 | 0.9681 | 0.9757 | 0.9920 | 0.9727 | 0.9395 | 0.9850 | 0.9501 | 0.8774 | 0.9672 | 0.9757 |
| 0.1611 | 16.4 | 820 | 0.0656 | 0.9373 | 0.9649 | 0.9759 | 0.9903 | 0.9834 | 0.9209 | 0.9852 | 0.9508 | 0.8761 | 0.9671 | 0.9759 |
| 0.2254 | 16.8 | 840 | 0.0659 | 0.9395 | 0.9686 | 0.9766 | 0.9924 | 0.9750 | 0.9386 | 0.9858 | 0.9519 | 0.8807 | 0.9683 | 0.9766 |
| 0.1699 | 17.2 | 860 | 0.0660 | 0.9389 | 0.9674 | 0.9764 | 0.9913 | 0.9784 | 0.9326 | 0.9855 | 0.9518 | 0.8794 | 0.9679 | 0.9764 |
| 0.1517 | 17.6 | 880 | 0.0702 | 0.9313 | 0.9576 | 0.9739 | 0.9910 | 0.9906 | 0.8913 | 0.9856 | 0.9459 | 0.8624 | 0.9637 | 0.9739 |
| 0.1707 | 18.0 | 900 | 0.0660 | 0.9369 | 0.9627 | 0.9759 | 0.9923 | 0.9858 | 0.9101 | 0.9850 | 0.9511 | 0.8748 | 0.9669 | 0.9759 |
| 0.1689 | 18.4 | 920 | 0.0594 | 0.9408 | 0.9682 | 0.9772 | 0.9911 | 0.9806 | 0.9330 | 0.9858 | 0.9536 | 0.8831 | 0.9690 | 0.9772 |
| 0.1527 | 18.8 | 940 | 0.0614 | 0.9395 | 0.9653 | 0.9769 | 0.9918 | 0.9850 | 0.9192 | 0.9861 | 0.9529 | 0.8795 | 0.9683 | 0.9769 |
| 0.1323 | 19.2 | 960 | 0.0682 | 0.9336 | 0.9701 | 0.9740 | 0.9930 | 0.9580 | 0.9593 | 0.9861 | 0.9444 | 0.8703 | 0.9650 | 0.9740 |
| 0.1619 | 19.6 | 980 | 0.0665 | 0.9374 | 0.9671 | 0.9758 | 0.9934 | 0.9732 | 0.9348 | 0.9846 | 0.9510 | 0.8766 | 0.9671 | 0.9758 |
| 0.1153 | 20.0 | 1000 | 0.0635 | 0.9378 | 0.9645 | 0.9762 | 0.9895 | 0.9868 | 0.9172 | 0.9854 | 0.9519 | 0.8761 | 0.9673 | 0.9762 |
| 0.2405 | 20.4 | 1020 | 0.0618 | 0.9413 | 0.9692 | 0.9774 | 0.9909 | 0.9793 | 0.9372 | 0.9859 | 0.9541 | 0.8840 | 0.9693 | 0.9774 |
| 0.1314 | 20.8 | 1040 | 0.0681 | 0.9345 | 0.9610 | 0.9750 | 0.9899 | 0.9892 | 0.9039 | 0.9858 | 0.9486 | 0.8692 | 0.9655 | 0.9750 |
| 0.1358 | 21.2 | 1060 | 0.0615 | 0.9413 | 0.9698 | 0.9773 | 0.9941 | 0.9733 | 0.9422 | 0.9860 | 0.9536 | 0.8843 | 0.9693 | 0.9773 |
| 0.1612 | 21.6 | 1080 | 0.0695 | 0.9344 | 0.9719 | 0.9742 | 0.9888 | 0.9604 | 0.9665 | 0.9851 | 0.9464 | 0.8718 | 0.9655 | 0.9742 |
| 0.1371 | 22.0 | 1100 | 0.0620 | 0.9423 | 0.9683 | 0.9779 | 0.9924 | 0.9818 | 0.9307 | 0.9864 | 0.9550 | 0.8855 | 0.9698 | 0.9779 |
| 0.1446 | 22.4 | 1120 | 0.0623 | 0.9422 | 0.9666 | 0.9779 | 0.9946 | 0.9835 | 0.9216 | 0.9863 | 0.9553 | 0.8849 | 0.9697 | 0.9779 |
| 0.1322 | 22.8 | 1140 | 0.0611 | 0.9425 | 0.9678 | 0.9780 | 0.9918 | 0.9847 | 0.9268 | 0.9866 | 0.9553 | 0.8857 | 0.9699 | 0.9780 |
| 0.1366 | 23.2 | 1160 | 0.0611 | 0.9421 | 0.9664 | 0.9779 | 0.9926 | 0.9863 | 0.9203 | 0.9867 | 0.9549 | 0.8848 | 0.9697 | 0.9779 |
| 0.1516 | 23.6 | 1180 | 0.0642 | 0.9388 | 0.9634 | 0.9767 | 0.9911 | 0.9898 | 0.9093 | 0.9865 | 0.9520 | 0.8777 | 0.9678 | 0.9767 |
| 0.0805 | 24.0 | 1200 | 0.0615 | 0.9430 | 0.9677 | 0.9782 | 0.9947 | 0.9820 | 0.9263 | 0.9863 | 0.9559 | 0.8867 | 0.9702 | 0.9782 |
| 0.1405 | 24.4 | 1220 | 0.0623 | 0.9427 | 0.9719 | 0.9778 | 0.9885 | 0.9777 | 0.9495 | 0.9853 | 0.9557 | 0.8871 | 0.9700 | 0.9778 |
| 0.1409 | 24.8 | 1240 | 0.0639 | 0.9398 | 0.9718 | 0.9766 | 0.9937 | 0.9656 | 0.9563 | 0.9866 | 0.9512 | 0.8816 | 0.9684 | 0.9766 |
| 0.1237 | 25.2 | 1260 | 0.0620 | 0.9422 | 0.9723 | 0.9776 | 0.9935 | 0.9694 | 0.9539 | 0.9869 | 0.9532 | 0.8866 | 0.9698 | 0.9776 |
| 0.1519 | 25.6 | 1280 | 0.0627 | 0.9412 | 0.9657 | 0.9776 | 0.9922 | 0.9872 | 0.9178 | 0.9870 | 0.9541 | 0.8825 | 0.9692 | 0.9776 |
| 0.1148 | 26.0 | 1300 | 0.0631 | 0.9401 | 0.9651 | 0.9772 | 0.9916 | 0.9875 | 0.9163 | 0.9869 | 0.9532 | 0.8803 | 0.9686 | 0.9772 |
| 0.1085 | 26.4 | 1320 | 0.0605 | 0.9452 | 0.9719 | 0.9789 | 0.9928 | 0.9776 | 0.9452 | 0.9871 | 0.9567 | 0.8920 | 0.9714 | 0.9789 |
| 0.1531 | 26.8 | 1340 | 0.0645 | 0.9361 | 0.9718 | 0.9750 | 0.9937 | 0.9581 | 0.9635 | 0.9871 | 0.9461 | 0.8751 | 0.9664 | 0.9750 |
| 0.1346 | 27.2 | 1360 | 0.0633 | 0.9362 | 0.9604 | 0.9759 | 0.9923 | 0.9916 | 0.8974 | 0.9871 | 0.9498 | 0.8718 | 0.9664 | 0.9759 |
| 0.1246 | 27.6 | 1380 | 0.0597 | 0.9439 | 0.9691 | 0.9785 | 0.9957 | 0.9785 | 0.9330 | 0.9862 | 0.9566 | 0.8888 | 0.9707 | 0.9785 |
| 0.0948 | 28.0 | 1400 | 0.0611 | 0.9419 | 0.9735 | 0.9774 | 0.9931 | 0.9663 | 0.9610 | 0.9872 | 0.9527 | 0.8858 | 0.9696 | 0.9774 |
| 0.1272 | 28.4 | 1420 | 0.0592 | 0.9452 | 0.9697 | 0.9790 | 0.9946 | 0.9809 | 0.9337 | 0.9869 | 0.9574 | 0.8912 | 0.9714 | 0.9790 |
| 0.1076 | 28.8 | 1440 | 0.0593 | 0.9457 | 0.9728 | 0.9791 | 0.9923 | 0.9767 | 0.9495 | 0.9873 | 0.9569 | 0.8930 | 0.9717 | 0.9791 |
| 0.136 | 29.2 | 1460 | 0.0588 | 0.9448 | 0.9685 | 0.9790 | 0.9931 | 0.9856 | 0.9269 | 0.9874 | 0.9569 | 0.8901 | 0.9712 | 0.9790 |
| 0.1222 | 29.6 | 1480 | 0.0588 | 0.9457 | 0.9706 | 0.9792 | 0.9928 | 0.9821 | 0.9370 | 0.9873 | 0.9574 | 0.8925 | 0.9717 | 0.9792 |
| 0.0993 | 30.0 | 1500 | 0.0592 | 0.9437 | 0.9666 | 0.9786 | 0.9952 | 0.9857 | 0.9190 | 0.9866 | 0.9569 | 0.8875 | 0.9706 | 0.9786 |
| 0.1523 | 30.4 | 1520 | 0.0576 | 0.9463 | 0.9717 | 0.9794 | 0.9943 | 0.9783 | 0.9423 | 0.9874 | 0.9576 | 0.8938 | 0.9720 | 0.9794 |
| 0.1106 | 30.8 | 1540 | 0.0590 | 0.9437 | 0.9675 | 0.9786 | 0.9918 | 0.9879 | 0.9226 | 0.9873 | 0.9562 | 0.8874 | 0.9705 | 0.9786 |
| 0.1363 | 31.2 | 1560 | 0.0607 | 0.9411 | 0.9637 | 0.9777 | 0.9946 | 0.9892 | 0.9073 | 0.9872 | 0.9543 | 0.8816 | 0.9691 | 0.9777 |
| 0.1152 | 31.6 | 1580 | 0.0573 | 0.9468 | 0.9713 | 0.9796 | 0.9943 | 0.9804 | 0.9393 | 0.9875 | 0.9582 | 0.8947 | 0.9723 | 0.9796 |
| 0.1399 | 32.0 | 1600 | 0.0577 | 0.9461 | 0.9698 | 0.9794 | 0.9932 | 0.9844 | 0.9317 | 0.9875 | 0.9578 | 0.8931 | 0.9719 | 0.9794 |
| 0.1099 | 32.4 | 1620 | 0.0580 | 0.9466 | 0.9704 | 0.9796 | 0.9936 | 0.9833 | 0.9343 | 0.9876 | 0.9581 | 0.8940 | 0.9721 | 0.9796 |
| 0.1297 | 32.8 | 1640 | 0.0623 | 0.9414 | 0.9738 | 0.9772 | 0.9929 | 0.9649 | 0.9635 | 0.9874 | 0.9518 | 0.8851 | 0.9693 | 0.9772 |
| 0.0868 | 33.2 | 1660 | 0.0587 | 0.9451 | 0.9677 | 0.9792 | 0.9942 | 0.9872 | 0.9216 | 0.9877 | 0.9572 | 0.8905 | 0.9714 | 0.9792 |
| 0.1211 | 33.6 | 1680 | 0.0601 | 0.9415 | 0.9644 | 0.9779 | 0.9934 | 0.9901 | 0.9096 | 0.9879 | 0.9540 | 0.8826 | 0.9693 | 0.9779 |
| 0.0905 | 34.0 | 1700 | 0.0569 | 0.9479 | 0.9728 | 0.9800 | 0.9933 | 0.9797 | 0.9455 | 0.9879 | 0.9587 | 0.8970 | 0.9728 | 0.9800 |
| 0.1618 | 34.4 | 1720 | 0.0595 | 0.9449 | 0.9684 | 0.9790 | 0.9929 | 0.9863 | 0.9261 | 0.9878 | 0.9567 | 0.8901 | 0.9712 | 0.9790 |
| 0.113 | 34.8 | 1740 | 0.0575 | 0.9475 | 0.9726 | 0.9799 | 0.9920 | 0.9815 | 0.9443 | 0.9876 | 0.9589 | 0.8961 | 0.9727 | 0.9799 |
| 0.1022 | 35.2 | 1760 | 0.0647 | 0.9376 | 0.9601 | 0.9765 | 0.9942 | 0.9929 | 0.8931 | 0.9876 | 0.9510 | 0.8741 | 0.9672 | 0.9765 |
| 0.1194 | 35.6 | 1780 | 0.0733 | 0.9298 | 0.9702 | 0.9724 | 0.9946 | 0.9481 | 0.9678 | 0.9876 | 0.9388 | 0.8629 | 0.9629 | 0.9724 |
| 0.0935 | 36.0 | 1800 | 0.0610 | 0.9428 | 0.9655 | 0.9783 | 0.9932 | 0.9896 | 0.9137 | 0.9878 | 0.9550 | 0.8855 | 0.9700 | 0.9783 |
| 0.123 | 36.4 | 1820 | 0.0612 | 0.9425 | 0.9653 | 0.9783 | 0.9932 | 0.9899 | 0.9127 | 0.9880 | 0.9549 | 0.8847 | 0.9699 | 0.9783 |
| 0.1178 | 36.8 | 1840 | 0.0577 | 0.9472 | 0.9699 | 0.9799 | 0.9951 | 0.9839 | 0.9308 | 0.9877 | 0.9589 | 0.8951 | 0.9725 | 0.9799 |
| 0.0969 | 37.2 | 1860 | 0.0675 | 0.9351 | 0.9589 | 0.9756 | 0.9928 | 0.9931 | 0.8906 | 0.9880 | 0.9483 | 0.8689 | 0.9658 | 0.9756 |
| 0.1009 | 37.6 | 1880 | 0.0566 | 0.9484 | 0.9717 | 0.9803 | 0.9942 | 0.9827 | 0.9381 | 0.9882 | 0.9594 | 0.8976 | 0.9731 | 0.9803 |
| 0.125 | 38.0 | 1900 | 0.0580 | 0.9463 | 0.9693 | 0.9796 | 0.9923 | 0.9875 | 0.9280 | 0.9880 | 0.9580 | 0.8929 | 0.9720 | 0.9796 |
| 0.0918 | 38.4 | 1920 | 0.0591 | 0.9448 | 0.9673 | 0.9791 | 0.9933 | 0.9887 | 0.9201 | 0.9881 | 0.9567 | 0.8896 | 0.9712 | 0.9791 |
| 0.1171 | 38.8 | 1940 | 0.0567 | 0.9487 | 0.9719 | 0.9804 | 0.9937 | 0.9834 | 0.9387 | 0.9883 | 0.9596 | 0.8982 | 0.9733 | 0.9804 |
| 0.1009 | 39.2 | 1960 | 0.0568 | 0.9488 | 0.9723 | 0.9804 | 0.9939 | 0.9823 | 0.9407 | 0.9882 | 0.9595 | 0.8987 | 0.9734 | 0.9804 |
| 0.1126 | 39.6 | 1980 | 0.0563 | 0.9484 | 0.9721 | 0.9803 | 0.9927 | 0.9839 | 0.9396 | 0.9881 | 0.9596 | 0.8977 | 0.9732 | 0.9803 |
| 0.1036 | 40.0 | 2000 | 0.0569 | 0.9485 | 0.9712 | 0.9804 | 0.9944 | 0.9842 | 0.9349 | 0.9881 | 0.9596 | 0.8978 | 0.9732 | 0.9804 |
| 0.0953 | 40.4 | 2020 | 0.0560 | 0.9486 | 0.9723 | 0.9804 | 0.9929 | 0.9833 | 0.9408 | 0.9883 | 0.9594 | 0.8981 | 0.9732 | 0.9804 |
| 0.1173 | 40.8 | 2040 | 0.0601 | 0.9438 | 0.9654 | 0.9788 | 0.9952 | 0.9896 | 0.9114 | 0.9881 | 0.9561 | 0.8873 | 0.9706 | 0.9788 |
| 0.0956 | 41.2 | 2060 | 0.0531 | 0.9468 | 0.9695 | 0.9798 | 0.9925 | 0.9880 | 0.9279 | 0.9882 | 0.9584 | 0.8939 | 0.9723 | 0.9798 |
| 0.1079 | 41.6 | 2080 | 0.0572 | 0.9475 | 0.9718 | 0.9799 | 0.9935 | 0.9817 | 0.9401 | 0.9883 | 0.9583 | 0.8958 | 0.9726 | 0.9799 |
| 0.0837 | 42.0 | 2100 | 0.0619 | 0.9417 | 0.9638 | 0.9780 | 0.9944 | 0.9909 | 0.9060 | 0.9885 | 0.9540 | 0.8826 | 0.9694 | 0.9780 |
| 0.107 | 42.4 | 2120 | 0.0566 | 0.9490 | 0.9745 | 0.9804 | 0.9934 | 0.9776 | 0.9524 | 0.9884 | 0.9591 | 0.8995 | 0.9735 | 0.9804 |
| 0.0999 | 42.8 | 2140 | 0.0555 | 0.9495 | 0.9742 | 0.9806 | 0.9940 | 0.9786 | 0.9501 | 0.9885 | 0.9597 | 0.9004 | 0.9738 | 0.9806 |
| 0.1092 | 43.2 | 2160 | 0.0564 | 0.9485 | 0.9744 | 0.9802 | 0.9934 | 0.9768 | 0.9529 | 0.9885 | 0.9586 | 0.8983 | 0.9732 | 0.9802 |
| 0.102 | 43.6 | 2180 | 0.0558 | 0.9495 | 0.9725 | 0.9807 | 0.9925 | 0.9848 | 0.9403 | 0.9883 | 0.9603 | 0.8998 | 0.9737 | 0.9807 |
| 0.0908 | 44.0 | 2200 | 0.0578 | 0.9469 | 0.9699 | 0.9798 | 0.9920 | 0.9879 | 0.9297 | 0.9881 | 0.9586 | 0.8941 | 0.9723 | 0.9798 |
| 0.1123 | 44.4 | 2220 | 0.0581 | 0.9469 | 0.9684 | 0.9799 | 0.9949 | 0.9879 | 0.9223 | 0.9883 | 0.9585 | 0.8938 | 0.9723 | 0.9799 |
| 0.0863 | 44.8 | 2240 | 0.0560 | 0.9484 | 0.9700 | 0.9804 | 0.9944 | 0.9870 | 0.9287 | 0.9885 | 0.9595 | 0.8971 | 0.9731 | 0.9804 |
| 0.1225 | 45.2 | 2260 | 0.0633 | 0.9402 | 0.9620 | 0.9775 | 0.9944 | 0.9928 | 0.8987 | 0.9886 | 0.9526 | 0.8793 | 0.9686 | 0.9775 |
| 0.1046 | 45.6 | 2280 | 0.0555 | 0.9501 | 0.9736 | 0.9809 | 0.9945 | 0.9804 | 0.9460 | 0.9886 | 0.9604 | 0.9012 | 0.9740 | 0.9809 |
| 0.0888 | 46.0 | 2300 | 0.0560 | 0.9495 | 0.9748 | 0.9806 | 0.9935 | 0.9777 | 0.9532 | 0.9886 | 0.9597 | 0.9003 | 0.9737 | 0.9806 |
| 0.1124 | 46.4 | 2320 | 0.0556 | 0.9498 | 0.9739 | 0.9808 | 0.9943 | 0.9795 | 0.9477 | 0.9885 | 0.9600 | 0.9008 | 0.9739 | 0.9808 |
| 0.111 | 46.8 | 2340 | 0.0558 | 0.9503 | 0.9737 | 0.9810 | 0.9936 | 0.9819 | 0.9455 | 0.9887 | 0.9605 | 0.9016 | 0.9741 | 0.9810 |
| 0.0916 | 47.2 | 2360 | 0.0554 | 0.9500 | 0.9744 | 0.9809 | 0.9936 | 0.9799 | 0.9496 | 0.9887 | 0.9603 | 0.9012 | 0.9740 | 0.9809 |
| 0.1211 | 47.6 | 2380 | 0.0569 | 0.9488 | 0.9726 | 0.9804 | 0.9908 | 0.9855 | 0.9415 | 0.9877 | 0.9603 | 0.8983 | 0.9733 | 0.9804 |
| 0.0975 | 48.0 | 2400 | 0.0554 | 0.9485 | 0.9697 | 0.9805 | 0.9952 | 0.9871 | 0.9268 | 0.9884 | 0.9598 | 0.8974 | 0.9732 | 0.9805 |
| 0.1085 | 48.4 | 2420 | 0.0573 | 0.9478 | 0.9719 | 0.9801 | 0.9899 | 0.9868 | 0.9389 | 0.9871 | 0.9601 | 0.8962 | 0.9728 | 0.9801 |
| 0.0862 | 48.8 | 2440 | 0.0563 | 0.9489 | 0.9707 | 0.9806 | 0.9939 | 0.9871 | 0.9310 | 0.9888 | 0.9597 | 0.8981 | 0.9734 | 0.9806 |
| 0.1114 | 49.2 | 2460 | 0.0591 | 0.9452 | 0.9666 | 0.9793 | 0.9939 | 0.9908 | 0.9151 | 0.9889 | 0.9567 | 0.8900 | 0.9714 | 0.9793 |
| 0.1065 | 49.6 | 2480 | 0.0566 | 0.9483 | 0.9694 | 0.9804 | 0.9953 | 0.9876 | 0.9252 | 0.9887 | 0.9595 | 0.8968 | 0.9731 | 0.9804 |
| 0.1018 | 50.0 | 2500 | 0.0548 | 0.9509 | 0.9742 | 0.9812 | 0.9938 | 0.9816 | 0.9473 | 0.9888 | 0.9610 | 0.9029 | 0.9745 | 0.9812 |
| 0.1009 | 50.4 | 2520 | 0.0604 | 0.9436 | 0.9661 | 0.9787 | 0.9918 | 0.9917 | 0.9149 | 0.9883 | 0.9557 | 0.8869 | 0.9705 | 0.9787 |
| 0.0982 | 50.8 | 2540 | 0.0557 | 0.9484 | 0.9696 | 0.9805 | 0.9941 | 0.9885 | 0.9263 | 0.9889 | 0.9593 | 0.8970 | 0.9731 | 0.9805 |
| 0.0863 | 51.2 | 2560 | 0.0545 | 0.9507 | 0.9743 | 0.9811 | 0.9940 | 0.9807 | 0.9483 | 0.9890 | 0.9607 | 0.9024 | 0.9744 | 0.9811 |
| 0.0751 | 51.6 | 2580 | 0.0568 | 0.9483 | 0.9702 | 0.9804 | 0.9928 | 0.9887 | 0.9291 | 0.9887 | 0.9594 | 0.8969 | 0.9731 | 0.9804 |
| 0.081 | 52.0 | 2600 | 0.0545 | 0.9511 | 0.9745 | 0.9813 | 0.9937 | 0.9815 | 0.9482 | 0.9889 | 0.9612 | 0.9031 | 0.9746 | 0.9813 |
| 0.119 | 52.4 | 2620 | 0.0560 | 0.9497 | 0.9719 | 0.9809 | 0.9931 | 0.9864 | 0.9364 | 0.9889 | 0.9603 | 0.9000 | 0.9738 | 0.9809 |
| 0.1036 | 52.8 | 2640 | 0.0554 | 0.9501 | 0.9717 | 0.9810 | 0.9940 | 0.9864 | 0.9349 | 0.9891 | 0.9605 | 0.9007 | 0.9740 | 0.9810 |
| 0.1018 | 53.2 | 2660 | 0.0558 | 0.9499 | 0.9712 | 0.9810 | 0.9952 | 0.9859 | 0.9324 | 0.9889 | 0.9606 | 0.9002 | 0.9739 | 0.9810 |
| 0.0918 | 53.6 | 2680 | 0.0547 | 0.9512 | 0.9739 | 0.9814 | 0.9931 | 0.9840 | 0.9447 | 0.9889 | 0.9615 | 0.9033 | 0.9746 | 0.9814 |
| 0.0972 | 54.0 | 2700 | 0.0565 | 0.9491 | 0.9710 | 0.9807 | 0.9925 | 0.9883 | 0.9323 | 0.9887 | 0.9599 | 0.8987 | 0.9735 | 0.9807 |
| 0.1043 | 54.4 | 2720 | 0.0550 | 0.9502 | 0.9716 | 0.9811 | 0.9946 | 0.9862 | 0.9341 | 0.9891 | 0.9607 | 0.9008 | 0.9741 | 0.9811 |
| 0.0971 | 54.8 | 2740 | 0.0553 | 0.9506 | 0.9755 | 0.9811 | 0.9931 | 0.9786 | 0.9548 | 0.9889 | 0.9605 | 0.9024 | 0.9743 | 0.9811 |
| 0.0818 | 55.2 | 2760 | 0.0605 | 0.9440 | 0.9650 | 0.9789 | 0.9944 | 0.9922 | 0.9083 | 0.9891 | 0.9555 | 0.8872 | 0.9707 | 0.9789 |
| 0.0855 | 55.6 | 2780 | 0.0547 | 0.9509 | 0.9733 | 0.9813 | 0.9959 | 0.9814 | 0.9426 | 0.9888 | 0.9613 | 0.9027 | 0.9745 | 0.9813 |
| 0.0561 | 56.0 | 2800 | 0.0558 | 0.9500 | 0.9757 | 0.9808 | 0.9946 | 0.9750 | 0.9575 | 0.9892 | 0.9594 | 0.9013 | 0.9740 | 0.9808 |
| 0.0888 | 56.4 | 2820 | 0.0544 | 0.9515 | 0.9742 | 0.9815 | 0.9941 | 0.9824 | 0.9462 | 0.9892 | 0.9613 | 0.9040 | 0.9748 | 0.9815 |
| 0.0858 | 56.8 | 2840 | 0.0548 | 0.9514 | 0.9750 | 0.9814 | 0.9926 | 0.9822 | 0.9502 | 0.9889 | 0.9616 | 0.9038 | 0.9748 | 0.9814 |
| 0.1055 | 57.2 | 2860 | 0.0569 | 0.9487 | 0.9698 | 0.9806 | 0.9947 | 0.9880 | 0.9266 | 0.9892 | 0.9593 | 0.8976 | 0.9733 | 0.9806 |
| 0.1091 | 57.6 | 2880 | 0.0546 | 0.9513 | 0.9752 | 0.9814 | 0.9949 | 0.9781 | 0.9527 | 0.9892 | 0.9609 | 0.9037 | 0.9747 | 0.9814 |
| 0.1146 | 58.0 | 2900 | 0.0623 | 0.9430 | 0.9642 | 0.9786 | 0.9943 | 0.9927 | 0.9057 | 0.9893 | 0.9546 | 0.8852 | 0.9702 | 0.9786 |
| 0.0943 | 58.4 | 2920 | 0.0545 | 0.9514 | 0.9749 | 0.9814 | 0.9953 | 0.9787 | 0.9506 | 0.9891 | 0.9610 | 0.9039 | 0.9747 | 0.9814 |
| 0.0839 | 58.8 | 2940 | 0.0548 | 0.9509 | 0.9728 | 0.9813 | 0.9930 | 0.9866 | 0.9387 | 0.9891 | 0.9613 | 0.9023 | 0.9745 | 0.9813 |
| 0.0947 | 59.2 | 2960 | 0.0542 | 0.9518 | 0.9749 | 0.9816 | 0.9947 | 0.9804 | 0.9494 | 0.9893 | 0.9615 | 0.9045 | 0.9749 | 0.9816 |
| 0.0868 | 59.6 | 2980 | 0.0547 | 0.9513 | 0.9751 | 0.9814 | 0.9947 | 0.9790 | 0.9517 | 0.9893 | 0.9609 | 0.9037 | 0.9747 | 0.9814 |
| 0.0845 | 60.0 | 3000 | 0.0546 | 0.9516 | 0.9753 | 0.9815 | 0.9943 | 0.9793 | 0.9524 | 0.9894 | 0.9610 | 0.9043 | 0.9748 | 0.9815 |
| 0.0741 | 60.4 | 3020 | 0.0546 | 0.9509 | 0.9761 | 0.9811 | 0.9935 | 0.9769 | 0.9579 | 0.9893 | 0.9602 | 0.9031 | 0.9745 | 0.9811 |
| 0.0862 | 60.8 | 3040 | 0.0608 | 0.9441 | 0.9654 | 0.9790 | 0.9936 | 0.9924 | 0.9101 | 0.9893 | 0.9554 | 0.8875 | 0.9707 | 0.9790 |
| 0.1262 | 61.2 | 3060 | 0.0548 | 0.9511 | 0.9756 | 0.9813 | 0.9948 | 0.9771 | 0.9550 | 0.9894 | 0.9605 | 0.9035 | 0.9746 | 0.9813 |
| 0.0804 | 61.6 | 3080 | 0.0541 | 0.9519 | 0.9732 | 0.9817 | 0.9947 | 0.9850 | 0.9400 | 0.9895 | 0.9618 | 0.9045 | 0.9750 | 0.9817 |
| 0.1028 | 62.0 | 3100 | 0.0557 | 0.9501 | 0.9708 | 0.9811 | 0.9945 | 0.9883 | 0.9294 | 0.9895 | 0.9604 | 0.9003 | 0.9740 | 0.9811 |
| 0.0898 | 62.4 | 3120 | 0.0557 | 0.9503 | 0.9756 | 0.9809 | 0.9954 | 0.9747 | 0.9568 | 0.9894 | 0.9596 | 0.9020 | 0.9742 | 0.9809 |
| 0.0866 | 62.8 | 3140 | 0.0547 | 0.9516 | 0.9751 | 0.9815 | 0.9955 | 0.9786 | 0.9511 | 0.9894 | 0.9611 | 0.9043 | 0.9749 | 0.9815 |
| 0.0871 | 63.2 | 3160 | 0.0539 | 0.9521 | 0.9744 | 0.9817 | 0.9940 | 0.9832 | 0.9461 | 0.9894 | 0.9617 | 0.9051 | 0.9751 | 0.9817 |
| 0.1064 | 63.6 | 3180 | 0.0538 | 0.9519 | 0.9733 | 0.9817 | 0.9947 | 0.9849 | 0.9402 | 0.9896 | 0.9617 | 0.9044 | 0.9750 | 0.9817 |
| 0.0753 | 64.0 | 3200 | 0.0553 | 0.9501 | 0.9708 | 0.9811 | 0.9949 | 0.9880 | 0.9294 | 0.9895 | 0.9604 | 0.9004 | 0.9740 | 0.9811 |
| 0.0848 | 64.4 | 3220 | 0.0535 | 0.9520 | 0.9742 | 0.9817 | 0.9951 | 0.9822 | 0.9453 | 0.9894 | 0.9618 | 0.9049 | 0.9751 | 0.9817 |
| 0.0814 | 64.8 | 3240 | 0.0534 | 0.9524 | 0.9750 | 0.9819 | 0.9940 | 0.9823 | 0.9488 | 0.9895 | 0.9620 | 0.9058 | 0.9753 | 0.9819 |
| 0.0841 | 65.2 | 3260 | 0.0539 | 0.9515 | 0.9739 | 0.9815 | 0.9959 | 0.9809 | 0.9450 | 0.9890 | 0.9616 | 0.9040 | 0.9748 | 0.9815 |
| 0.0809 | 65.6 | 3280 | 0.0548 | 0.9503 | 0.9707 | 0.9812 | 0.9950 | 0.9884 | 0.9287 | 0.9895 | 0.9606 | 0.9008 | 0.9742 | 0.9812 |
| 0.0726 | 66.0 | 3300 | 0.0559 | 0.9497 | 0.9763 | 0.9806 | 0.9948 | 0.9725 | 0.9616 | 0.9895 | 0.9586 | 0.9009 | 0.9738 | 0.9806 |
| 0.099 | 66.4 | 3320 | 0.0539 | 0.9514 | 0.9720 | 0.9816 | 0.9950 | 0.9870 | 0.9341 | 0.9895 | 0.9615 | 0.9032 | 0.9747 | 0.9816 |
| 0.0816 | 66.8 | 3340 | 0.0538 | 0.9521 | 0.9731 | 0.9818 | 0.9954 | 0.9850 | 0.9389 | 0.9896 | 0.9619 | 0.9047 | 0.9751 | 0.9818 |
| 0.0815 | 67.2 | 3360 | 0.0535 | 0.9526 | 0.9747 | 0.9819 | 0.9950 | 0.9822 | 0.9468 | 0.9896 | 0.9621 | 0.9060 | 0.9754 | 0.9819 |
| 0.0723 | 67.6 | 3380 | 0.0557 | 0.9493 | 0.9695 | 0.9809 | 0.9956 | 0.9890 | 0.9238 | 0.9895 | 0.9598 | 0.8985 | 0.9736 | 0.9809 |
| 0.0802 | 68.0 | 3400 | 0.0539 | 0.9521 | 0.9730 | 0.9818 | 0.9952 | 0.9855 | 0.9382 | 0.9897 | 0.9620 | 0.9047 | 0.9751 | 0.9818 |
| 0.097 | 68.4 | 3420 | 0.0542 | 0.9518 | 0.9757 | 0.9816 | 0.9955 | 0.9775 | 0.9541 | 0.9896 | 0.9611 | 0.9048 | 0.9750 | 0.9816 |
| 0.0889 | 68.8 | 3440 | 0.0565 | 0.9487 | 0.9690 | 0.9807 | 0.9950 | 0.9900 | 0.9219 | 0.9897 | 0.9592 | 0.8972 | 0.9733 | 0.9807 |
| 0.095 | 69.2 | 3460 | 0.0541 | 0.9514 | 0.9751 | 0.9814 | 0.9957 | 0.9779 | 0.9517 | 0.9893 | 0.9610 | 0.9040 | 0.9748 | 0.9814 |
| 0.0864 | 69.6 | 3480 | 0.0538 | 0.9526 | 0.9741 | 0.9820 | 0.9940 | 0.9850 | 0.9433 | 0.9897 | 0.9622 | 0.9058 | 0.9754 | 0.9820 |
| 0.0888 | 70.0 | 3500 | 0.0562 | 0.9495 | 0.9761 | 0.9806 | 0.9953 | 0.9722 | 0.9607 | 0.9896 | 0.9584 | 0.9006 | 0.9738 | 0.9806 |
| 0.0755 | 70.4 | 3520 | 0.0532 | 0.9529 | 0.9751 | 0.9820 | 0.9947 | 0.9821 | 0.9486 | 0.9898 | 0.9621 | 0.9067 | 0.9755 | 0.9820 |
| 0.0858 | 70.8 | 3540 | 0.0560 | 0.9496 | 0.9696 | 0.9810 | 0.9952 | 0.9896 | 0.9241 | 0.9895 | 0.9601 | 0.8992 | 0.9738 | 0.9810 |
| 0.0903 | 71.2 | 3560 | 0.0540 | 0.9521 | 0.9763 | 0.9817 | 0.9945 | 0.9775 | 0.9570 | 0.9898 | 0.9611 | 0.9054 | 0.9751 | 0.9817 |
| 0.0862 | 71.6 | 3580 | 0.0539 | 0.9523 | 0.9731 | 0.9819 | 0.9950 | 0.9860 | 0.9383 | 0.9898 | 0.9620 | 0.9051 | 0.9752 | 0.9819 |
| 0.0708 | 72.0 | 3600 | 0.0538 | 0.9523 | 0.9733 | 0.9819 | 0.9960 | 0.9843 | 0.9395 | 0.9894 | 0.9623 | 0.9053 | 0.9753 | 0.9819 |
| 0.0856 | 72.4 | 3620 | 0.0532 | 0.9530 | 0.9750 | 0.9821 | 0.9946 | 0.9826 | 0.9478 | 0.9898 | 0.9623 | 0.9068 | 0.9756 | 0.9821 |
| 0.075 | 72.8 | 3640 | 0.0536 | 0.9527 | 0.9738 | 0.9820 | 0.9954 | 0.9843 | 0.9418 | 0.9897 | 0.9624 | 0.9061 | 0.9755 | 0.9820 |
| 0.0743 | 73.2 | 3660 | 0.0533 | 0.9526 | 0.9759 | 0.9819 | 0.9929 | 0.9819 | 0.9528 | 0.9894 | 0.9623 | 0.9061 | 0.9754 | 0.9819 |
| 0.075 | 73.6 | 3680 | 0.0532 | 0.9528 | 0.9739 | 0.9820 | 0.9942 | 0.9855 | 0.9421 | 0.9898 | 0.9622 | 0.9063 | 0.9755 | 0.9820 |
| 0.0804 | 74.0 | 3700 | 0.0543 | 0.9520 | 0.9725 | 0.9818 | 0.9948 | 0.9873 | 0.9354 | 0.9899 | 0.9617 | 0.9043 | 0.9751 | 0.9818 |
| 0.0789 | 74.4 | 3720 | 0.0566 | 0.9485 | 0.9691 | 0.9806 | 0.9939 | 0.9909 | 0.9224 | 0.9898 | 0.9589 | 0.8968 | 0.9732 | 0.9806 |
| 0.1058 | 74.8 | 3740 | 0.0551 | 0.9502 | 0.9701 | 0.9812 | 0.9962 | 0.9883 | 0.9258 | 0.9891 | 0.9611 | 0.9003 | 0.9741 | 0.9812 |
| 0.0842 | 75.2 | 3760 | 0.0530 | 0.9531 | 0.9747 | 0.9821 | 0.9954 | 0.9827 | 0.9460 | 0.9898 | 0.9624 | 0.9069 | 0.9756 | 0.9821 |
| 0.0802 | 75.6 | 3780 | 0.0538 | 0.9522 | 0.9763 | 0.9817 | 0.9948 | 0.9775 | 0.9564 | 0.9899 | 0.9612 | 0.9056 | 0.9752 | 0.9817 |
| 0.0752 | 76.0 | 3800 | 0.0554 | 0.9500 | 0.9699 | 0.9812 | 0.9954 | 0.9896 | 0.9247 | 0.9898 | 0.9603 | 0.9000 | 0.9740 | 0.9812 |
| 0.095 | 76.4 | 3820 | 0.0539 | 0.9526 | 0.9735 | 0.9820 | 0.9948 | 0.9858 | 0.9398 | 0.9899 | 0.9622 | 0.9058 | 0.9754 | 0.9820 |
| 0.0753 | 76.8 | 3840 | 0.0530 | 0.9526 | 0.9733 | 0.9820 | 0.9952 | 0.9859 | 0.9388 | 0.9899 | 0.9622 | 0.9057 | 0.9754 | 0.9820 |
| 0.0833 | 77.2 | 3860 | 0.0528 | 0.9532 | 0.9749 | 0.9822 | 0.9946 | 0.9836 | 0.9466 | 0.9899 | 0.9625 | 0.9072 | 0.9757 | 0.9822 |
| 0.0832 | 77.6 | 3880 | 0.0530 | 0.9532 | 0.9750 | 0.9822 | 0.9944 | 0.9834 | 0.9472 | 0.9899 | 0.9625 | 0.9072 | 0.9757 | 0.9822 |
| 0.0901 | 78.0 | 3900 | 0.0533 | 0.9531 | 0.9754 | 0.9821 | 0.9953 | 0.9809 | 0.9501 | 0.9899 | 0.9623 | 0.9070 | 0.9757 | 0.9821 |
| 0.0893 | 78.4 | 3920 | 0.0532 | 0.9532 | 0.9742 | 0.9822 | 0.9950 | 0.9847 | 0.9430 | 0.9900 | 0.9626 | 0.9069 | 0.9757 | 0.9822 |
| 0.0956 | 78.8 | 3940 | 0.0528 | 0.9532 | 0.9748 | 0.9822 | 0.9958 | 0.9821 | 0.9466 | 0.9899 | 0.9625 | 0.9072 | 0.9757 | 0.9822 |
| 0.0786 | 79.2 | 3960 | 0.0532 | 0.9525 | 0.9749 | 0.9819 | 0.9958 | 0.9805 | 0.9484 | 0.9896 | 0.9620 | 0.9059 | 0.9753 | 0.9819 |
| 0.1018 | 79.6 | 3980 | 0.0528 | 0.9533 | 0.9753 | 0.9822 | 0.9953 | 0.9816 | 0.9491 | 0.9900 | 0.9625 | 0.9075 | 0.9758 | 0.9822 |
| 0.0825 | 80.0 | 4000 | 0.0541 | 0.9515 | 0.9722 | 0.9816 | 0.9939 | 0.9883 | 0.9345 | 0.9899 | 0.9613 | 0.9033 | 0.9748 | 0.9816 |
| 0.073 | 80.4 | 4020 | 0.0532 | 0.9529 | 0.9737 | 0.9822 | 0.9950 | 0.9857 | 0.9404 | 0.9900 | 0.9624 | 0.9065 | 0.9756 | 0.9822 |
| 0.1021 | 80.8 | 4040 | 0.0527 | 0.9533 | 0.9754 | 0.9822 | 0.9954 | 0.9812 | 0.9497 | 0.9900 | 0.9625 | 0.9075 | 0.9758 | 0.9822 |
| 0.0835 | 81.2 | 4060 | 0.0535 | 0.9523 | 0.9727 | 0.9819 | 0.9949 | 0.9871 | 0.9362 | 0.9900 | 0.9619 | 0.9049 | 0.9752 | 0.9819 |
| 0.0778 | 81.6 | 4080 | 0.0555 | 0.9502 | 0.9703 | 0.9812 | 0.9947 | 0.9898 | 0.9264 | 0.9901 | 0.9602 | 0.9004 | 0.9741 | 0.9812 |
| 0.0619 | 82.0 | 4100 | 0.0534 | 0.9530 | 0.9767 | 0.9820 | 0.9940 | 0.9792 | 0.9567 | 0.9900 | 0.9619 | 0.9072 | 0.9756 | 0.9820 |
| 0.0864 | 82.4 | 4120 | 0.0528 | 0.9534 | 0.9748 | 0.9823 | 0.9949 | 0.9837 | 0.9459 | 0.9900 | 0.9626 | 0.9076 | 0.9758 | 0.9823 |
| 0.0701 | 82.8 | 4140 | 0.0535 | 0.9529 | 0.9736 | 0.9821 | 0.9948 | 0.9860 | 0.9401 | 0.9901 | 0.9623 | 0.9063 | 0.9756 | 0.9821 |
| 0.0857 | 83.2 | 4160 | 0.0528 | 0.9534 | 0.9745 | 0.9823 | 0.9953 | 0.9839 | 0.9444 | 0.9900 | 0.9627 | 0.9075 | 0.9758 | 0.9823 |
| 0.0926 | 83.6 | 4180 | 0.0533 | 0.9528 | 0.9731 | 0.9821 | 0.9951 | 0.9867 | 0.9375 | 0.9901 | 0.9623 | 0.9061 | 0.9755 | 0.9821 |
| 0.0705 | 84.0 | 4200 | 0.0530 | 0.9530 | 0.9736 | 0.9822 | 0.9950 | 0.9860 | 0.9398 | 0.9900 | 0.9624 | 0.9064 | 0.9756 | 0.9822 |
| 0.0732 | 84.4 | 4220 | 0.0528 | 0.9536 | 0.9760 | 0.9823 | 0.9950 | 0.9809 | 0.9520 | 0.9901 | 0.9626 | 0.9082 | 0.9760 | 0.9823 |
| 0.0676 | 84.8 | 4240 | 0.0528 | 0.9535 | 0.9744 | 0.9823 | 0.9952 | 0.9845 | 0.9435 | 0.9901 | 0.9628 | 0.9076 | 0.9759 | 0.9823 |
| 0.0802 | 85.2 | 4260 | 0.0533 | 0.9528 | 0.9731 | 0.9821 | 0.9951 | 0.9868 | 0.9375 | 0.9901 | 0.9623 | 0.9060 | 0.9755 | 0.9821 |
| 0.0829 | 85.6 | 4280 | 0.0527 | 0.9536 | 0.9746 | 0.9824 | 0.9948 | 0.9847 | 0.9445 | 0.9901 | 0.9628 | 0.9079 | 0.9759 | 0.9824 |
| 0.0749 | 86.0 | 4300 | 0.0537 | 0.9526 | 0.9729 | 0.9820 | 0.9950 | 0.9872 | 0.9365 | 0.9901 | 0.9621 | 0.9054 | 0.9754 | 0.9820 |
| 0.0928 | 86.4 | 4320 | 0.0526 | 0.9536 | 0.9745 | 0.9824 | 0.9952 | 0.9846 | 0.9435 | 0.9901 | 0.9629 | 0.9079 | 0.9759 | 0.9824 |
| 0.0708 | 86.8 | 4340 | 0.0529 | 0.9536 | 0.9749 | 0.9824 | 0.9943 | 0.9846 | 0.9459 | 0.9901 | 0.9629 | 0.9080 | 0.9760 | 0.9824 |
| 0.0896 | 87.2 | 4360 | 0.0528 | 0.9536 | 0.9744 | 0.9824 | 0.9949 | 0.9850 | 0.9433 | 0.9901 | 0.9628 | 0.9077 | 0.9759 | 0.9824 |
| 0.1022 | 87.6 | 4380 | 0.0529 | 0.9535 | 0.9745 | 0.9824 | 0.9944 | 0.9856 | 0.9434 | 0.9901 | 0.9628 | 0.9077 | 0.9759 | 0.9824 |
| 0.0713 | 88.0 | 4400 | 0.0528 | 0.9537 | 0.9754 | 0.9824 | 0.9953 | 0.9821 | 0.9489 | 0.9901 | 0.9628 | 0.9082 | 0.9760 | 0.9824 |
| 0.08 | 88.4 | 4420 | 0.0527 | 0.9536 | 0.9746 | 0.9824 | 0.9954 | 0.9838 | 0.9447 | 0.9901 | 0.9628 | 0.9078 | 0.9759 | 0.9824 |
| 0.0817 | 88.8 | 4440 | 0.0526 | 0.9538 | 0.9754 | 0.9824 | 0.9951 | 0.9827 | 0.9483 | 0.9902 | 0.9628 | 0.9084 | 0.9760 | 0.9824 |
| 0.0842 | 89.2 | 4460 | 0.0527 | 0.9538 | 0.9748 | 0.9824 | 0.9949 | 0.9843 | 0.9454 | 0.9902 | 0.9629 | 0.9082 | 0.9760 | 0.9824 |
| 0.0736 | 89.6 | 4480 | 0.0525 | 0.9537 | 0.9751 | 0.9824 | 0.9953 | 0.9829 | 0.9472 | 0.9901 | 0.9629 | 0.9082 | 0.9760 | 0.9824 |
| 0.0642 | 90.0 | 4500 | 0.0528 | 0.9537 | 0.9751 | 0.9824 | 0.9954 | 0.9828 | 0.9472 | 0.9901 | 0.9629 | 0.9083 | 0.9760 | 0.9824 |
| 0.0593 | 90.4 | 4520 | 0.0531 | 0.9531 | 0.9735 | 0.9823 | 0.9949 | 0.9866 | 0.9392 | 0.9902 | 0.9625 | 0.9067 | 0.9757 | 0.9823 |
| 0.0863 | 90.8 | 4540 | 0.0531 | 0.9534 | 0.9739 | 0.9823 | 0.9949 | 0.9861 | 0.9407 | 0.9902 | 0.9627 | 0.9072 | 0.9758 | 0.9823 |
| 0.0877 | 91.2 | 4560 | 0.0529 | 0.9535 | 0.9740 | 0.9824 | 0.9950 | 0.9859 | 0.9409 | 0.9902 | 0.9628 | 0.9074 | 0.9759 | 0.9824 |
| 0.073 | 91.6 | 4580 | 0.0526 | 0.9538 | 0.9750 | 0.9824 | 0.9952 | 0.9835 | 0.9464 | 0.9902 | 0.9628 | 0.9083 | 0.9760 | 0.9824 |
| 0.0646 | 92.0 | 4600 | 0.0526 | 0.9539 | 0.9754 | 0.9825 | 0.9950 | 0.9828 | 0.9484 | 0.9902 | 0.9629 | 0.9085 | 0.9761 | 0.9825 |
| 0.0748 | 92.4 | 4620 | 0.0527 | 0.9535 | 0.9741 | 0.9824 | 0.9955 | 0.9849 | 0.9418 | 0.9901 | 0.9629 | 0.9076 | 0.9759 | 0.9824 |
| 0.067 | 92.8 | 4640 | 0.0527 | 0.9537 | 0.9743 | 0.9824 | 0.9953 | 0.9849 | 0.9428 | 0.9902 | 0.9629 | 0.9080 | 0.9760 | 0.9824 |
| 0.0705 | 93.2 | 4660 | 0.0527 | 0.9539 | 0.9757 | 0.9824 | 0.9950 | 0.9822 | 0.9498 | 0.9902 | 0.9628 | 0.9086 | 0.9761 | 0.9824 |
| 0.0879 | 93.6 | 4680 | 0.0525 | 0.9538 | 0.9751 | 0.9824 | 0.9950 | 0.9837 | 0.9465 | 0.9902 | 0.9629 | 0.9084 | 0.9761 | 0.9824 |
| 0.0835 | 94.0 | 4700 | 0.0524 | 0.9539 | 0.9755 | 0.9825 | 0.9953 | 0.9824 | 0.9488 | 0.9902 | 0.9629 | 0.9086 | 0.9761 | 0.9825 |
| 0.0799 | 94.4 | 4720 | 0.0526 | 0.9539 | 0.9751 | 0.9825 | 0.9951 | 0.9836 | 0.9466 | 0.9902 | 0.9630 | 0.9086 | 0.9761 | 0.9825 |
| 0.0697 | 94.8 | 4740 | 0.0527 | 0.9536 | 0.9743 | 0.9824 | 0.9951 | 0.9854 | 0.9423 | 0.9902 | 0.9629 | 0.9078 | 0.9760 | 0.9824 |
| 0.0628 | 95.2 | 4760 | 0.0527 | 0.9535 | 0.9741 | 0.9824 | 0.9950 | 0.9857 | 0.9415 | 0.9902 | 0.9628 | 0.9076 | 0.9759 | 0.9824 |
| 0.0778 | 95.6 | 4780 | 0.0525 | 0.9539 | 0.9749 | 0.9825 | 0.9951 | 0.9841 | 0.9455 | 0.9902 | 0.9630 | 0.9084 | 0.9761 | 0.9825 |
| 0.0969 | 96.0 | 4800 | 0.0526 | 0.9540 | 0.9758 | 0.9825 | 0.9948 | 0.9823 | 0.9503 | 0.9902 | 0.9629 | 0.9088 | 0.9761 | 0.9825 |
| 0.0737 | 96.4 | 4820 | 0.0527 | 0.9537 | 0.9742 | 0.9824 | 0.9951 | 0.9855 | 0.9420 | 0.9902 | 0.9629 | 0.9079 | 0.9760 | 0.9824 |
| 0.103 | 96.8 | 4840 | 0.0525 | 0.9539 | 0.9749 | 0.9825 | 0.9951 | 0.9842 | 0.9456 | 0.9902 | 0.9630 | 0.9085 | 0.9761 | 0.9825 |
| 0.069 | 97.2 | 4860 | 0.0525 | 0.9539 | 0.9749 | 0.9825 | 0.9952 | 0.9839 | 0.9457 | 0.9902 | 0.9630 | 0.9085 | 0.9761 | 0.9825 |
| 0.0613 | 97.6 | 4880 | 0.0525 | 0.9538 | 0.9746 | 0.9825 | 0.9951 | 0.9849 | 0.9439 | 0.9902 | 0.9630 | 0.9083 | 0.9761 | 0.9825 |
| 0.0758 | 98.0 | 4900 | 0.0526 | 0.9539 | 0.9751 | 0.9825 | 0.9950 | 0.9839 | 0.9462 | 0.9902 | 0.9630 | 0.9086 | 0.9761 | 0.9825 |
| 0.0767 | 98.4 | 4920 | 0.0525 | 0.9539 | 0.9747 | 0.9825 | 0.9951 | 0.9848 | 0.9441 | 0.9902 | 0.9630 | 0.9084 | 0.9761 | 0.9825 |
| 0.0676 | 98.8 | 4940 | 0.0526 | 0.9538 | 0.9745 | 0.9825 | 0.9950 | 0.9851 | 0.9435 | 0.9902 | 0.9630 | 0.9082 | 0.9760 | 0.9825 |
| 0.0656 | 99.2 | 4960 | 0.0525 | 0.9539 | 0.9747 | 0.9825 | 0.9951 | 0.9846 | 0.9444 | 0.9902 | 0.9630 | 0.9084 | 0.9761 | 0.9825 |
| 0.0758 | 99.6 | 4980 | 0.0526 | 0.9538 | 0.9745 | 0.9825 | 0.9951 | 0.9852 | 0.9432 | 0.9902 | 0.9630 | 0.9082 | 0.9760 | 0.9825 |
| 0.0747 | 100.0 | 5000 | 0.0525 | 0.9540 | 0.9751 | 0.9825 | 0.9949 | 0.9841 | 0.9464 | 0.9902 | 0.9631 | 0.9087 | 0.9761 | 0.9825 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.19.1
|
PKU-ML/G1-3B
|
PKU-ML
| 2025-06-17T09:32:31Z | 216 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"graph",
"chat",
"conversational",
"en",
"dataset:PKU-ML/Erdos",
"arxiv:2505.18499",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-31T11:19:51Z |
---
license: apache-2.0
datasets:
- PKU-ML/Erdos
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: text-generation
tags:
- graph
- chat
library_name: transformers
---
# G1-3B
## Introduction
G1 is a series of large language models trained on our benchmark [Erdős](https://huggingface.co/datasets/PKU-ML/Erdos) for solving graph reasoning tasks, based on Qwen2.5-Instruct.
We apply Group Relative Policy Optimization (GRPO) for reinforcement learning, with supervised fine-tuning as a preliminary step.
G1 brings the following improvements:
- **Significant improvement on graph reasoning**: G1 models achieve up to 46% improvement over baselines on Erdős, with the 7B variant matching OpenAI's o3-mini and the 3B model surpassing Qwen2.5-72B-Instruct by notable margins.
- **Strong Generalization to unseen graph tasks**: G1 exhibits zero-shot generalization on unseen graph tasks, improving performance on *other graph reasoning benchmarks* (GraphWiz, GraphArena) and *real-world graphs* (Cora, PubMed).
- **NO Compromise on general reasoning**: Crucially, G1 preserves general reasoning ability (GSM8K, MATH, MMLU-Pro), proving its versatility.
**This repo contains the G1-3B model**, which has the following features:
- Type: Causal Language Models
- Training Stage: SFT & RL
- Architecture: the same with Qwen2.5-Instruct
- Number of Parameters: 3.09B
- Context Length: Full 32,768 tokens and generation 8192 tokens
For more details, please refer to our [paper](https://arxiv.org/pdf/2505.18499) and [GitHub](https://github.com/PKU-ML/G1/tree/main).
## Requirements
The model is trained based on Qwen/Qwen2.5-3B-Instruct. Support for Qwen2.5 is included in recent versions of Hugging Face `transformers`, so we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
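You can verify your environment before loading the model; this is a minimal sketch (not from the original card), relying on the requirement above that Qwen2 support landed in `transformers` 4.37.0:
```python
import transformers
from packaging import version  # packaging ships as a transformers dependency

# Qwen2-based checkpoints require transformers >= 4.37.0
assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old; run `pip install -U transformers`"
)
```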
## Quickstart
Here is a code snippet showing how to load the tokenizer and model and generate content with `apply_chat_template`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
INSTRUCTION_TEMPLATE = """
{instruction}
Solve the above problem efficiently and clearly. The last line of your response should be of the following format: 'Therefore, the final answer is: $\\boxed{{ANSWER}}$. I hope it is correct' (without quotes) where ANSWER is just the final number or expression that solves the problem. Think step by step before answering.
""".strip()
model_name = "PKU-ML/G1-3B"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "The task is to determine the degree centrality of a node in the graph.\n\n"\
"Degree centrality for a node is the fraction of nodes it is connected to.\n\n"\
"Here is an undirected graph containing nodes from 1 to 15. The edges are: (1, 15), (15, 11), (2, 3), (2, 6), (3, 6), (3, 7), (6, 7), (6, 8), (7, 8), (7, 14), (4, 10), (10, 5), (10, 12), (8, 14), (8, 9), (12, 11), (12, 13).\n\n"\
"Question: What is the degree centrality of node 2 in the graph?\n\n"\
"You need to format your answer as a float number."
messages = [
{"role": "user", "content": INSTRUCTION_TEMPLATE.format(instruction=prompt)}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=4096,
top_p=0.95,
top_k=30,
temperature=0.6
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
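As a sanity check on the example above, the ground-truth degree centrality can be computed directly with `networkx`; this is a hedged sketch (not part of the original card, and `networkx` is not a model dependency):
```python
import networkx as nx

# The same undirected graph as in the prompt above
edges = [(1, 15), (15, 11), (2, 3), (2, 6), (3, 6), (3, 7), (6, 7), (6, 8),
         (7, 8), (7, 14), (4, 10), (10, 5), (10, 12), (8, 14), (8, 9),
         (12, 11), (12, 13)]
G = nx.Graph(edges)

# Degree centrality = degree / (n - 1); node 2 has degree 2 among 14 other nodes
print(round(nx.degree_centrality(G)[2], 4))  # 0.1429
```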
## Evaluation & Performance
Detailed evaluation results are reported in this [๐ paper](https://arxiv.org/pdf/2505.18499).
## Citation
If you find our work helpful, feel free to give us a cite.
```
@article{guo2025g1,
title={G1: Teaching LLMs to Reason on Graphs with Reinforcement Learning},
author={Guo, Xiaojun and Li, Ang and Wang, Yifei and Jegelka, Stefanie and Wang, Yisen},
journal={arXiv preprint arXiv:2505.18499},
year={2025}
}
```
|
BootesVoid/cmbxbw7zm00jurdqs9iqa9vjc_cmc0am4wa07fdrdqsoyeqy0u9
|
BootesVoid
| 2025-06-17T09:31:24Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-17T09:31:22Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TRISHA
---
# Cmbxbw7Zm00Jurdqs9Iqa9Vjc_Cmc0Am4Wa07Fdrdqsoyeqy0U9
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TRISHA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TRISHA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbxbw7zm00jurdqs9iqa9vjc_cmc0am4wa07fdrdqsoyeqy0u9/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline, then attach this LoRA
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbxbw7zm00jurdqs9iqa9vjc_cmc0am4wa07fdrdqsoyeqy0u9', weight_name='lora.safetensors')

# Prompt with the trigger word to activate the LoRA, then save the result
image = pipeline('TRISHA').images[0]
image.save('output.png')
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbxbw7zm00jurdqs9iqa9vjc_cmc0am4wa07fdrdqsoyeqy0u9/discussions) to add images that show off what you've made with this LoRA.
|
AntResearchNLP/ViLaSR
|
AntResearchNLP
| 2025-06-17T09:29:38Z | 10 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"en",
"dataset:AntResearchNLP/ViLaSR-data",
"arxiv:2506.09965",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"region:us"
] | null | 2025-06-01T15:56:01Z |
---
datasets:
- AntResearchNLP/ViLaSR-data
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
---
This repository contains the ViLaSR-7B model as presented in [Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing](https://arxiv.org/abs/2506.09965).
Please refer to the code at https://github.com/AntResearchNLP/ViLaSR.
```
@misc{wu2025reinforcingspatialreasoningvisionlanguage,
title={Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing},
author={Junfei Wu and Jian Guan and Kaituo Feng and Qiang Liu and Shu Wu and Liang Wang and Wei Wu and Tieniu Tan},
year={2025},
eprint={2506.09965},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.09965},
}
```
|
ganesan-erss/sqlcoder-7b-finetuned_v1
|
ganesan-erss
| 2025-06-17T09:26:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T08:34:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vietnhat/orpheus-test
|
vietnhat
| 2025-06-17T09:25:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T09:23:56Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** vietnhat
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Rin031099/Miku
|
Rin031099
| 2025-06-17T09:22:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-3.5-large",
"base_model:adapter:stabilityai/stable-diffusion-3.5-large",
"region:us"
] |
text-to-image
| 2025-06-17T09:22:00Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/IMG20250616171324.jpg
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: null
---
# Hatsune_Miku
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Rin031099/Miku/tree/main) them in the Files & versions tab.
|
aiswaryamlds/Nanonets-OCR-s-mlx
|
aiswaryamlds
| 2025-06-17T09:19:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"OCR",
"pdf2markdown",
"mlx",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-17T09:16:40Z |
---
language:
- en
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: image-text-to-text
tags:
- OCR
- pdf2markdown
- mlx
library_name: transformers
---
# aiswaryamlds/Nanonets-OCR-s-mlx
This model was converted to MLX format from [`nanonets/Nanonets-OCR-s`](https://huggingface.co/nanonets/Nanonets-OCR-s) using mlx-vlm version **0.1.27**.
Refer to the [original model card](https://huggingface.co/nanonets/Nanonets-OCR-s) for more details on the model.
## Use with mlx
```bash
pip install -U mlx-vlm
```
```bash
python -m mlx_vlm.generate --model aiswaryamlds/Nanonets-OCR-s-mlx --max-tokens 100 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
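The model can also be called from Python. The sketch below follows the usage pattern in the mlx-vlm README; exact signatures vary across mlx-vlm versions, and the image path is a placeholder, so treat this as an assumption rather than a verified recipe:
```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "aiswaryamlds/Nanonets-OCR-s-mlx"
model, processor = load(model_path)
config = load_config(model_path)

images = ["page.png"]  # hypothetical local image of a document page
prompt = apply_chat_template(processor, config, "Describe this image.", num_images=len(images))

output = generate(model, processor, prompt, images, verbose=False)
print(output)
```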
|
pkailin2002/gpt2-tuned-expanded
|
pkailin2002
| 2025-06-17T09:14:53Z | 0 | 0 | null |
[
"pytorch",
"gpt2",
"text-generation",
"fine-tuned",
"en",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] |
text-generation
| 2025-06-17T09:14:28Z |
---
language: en
base_model: gpt2
tags:
- text-generation
- gpt2
- fine-tuned
license: mit
---
# gpt2-tuned-expanded
GPT-2 model fine-tuned on speech transcription data.
## Model Details
- **Base Model**: gpt2
- **Fine-tuned from checkpoint**: /home/klp65/rds/hpc-work/whisper-lm/train_gpt/gpt_expanded_corpora/checkpoint-1484745
- **Language**: English
- **Model Type**: Causal Language Model
## Usage
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
model = GPT2LMHeadModel.from_pretrained("pkailin2002/gpt2-tuned-expanded")
tokenizer = GPT2Tokenizer.from_pretrained("pkailin2002/gpt2-tuned-expanded")
# Generate text
input_text = "Your prompt here"
inputs = tokenizer.encode(input_text, return_tensors="pt")
outputs = model.generate(inputs, max_length=100, num_return_sequences=1, do_sample=True, temperature=0.7)  # sampling must be enabled for temperature to take effect
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```
## Training Details
This model was fine-tuned using the Hugging Face Transformers library.
## Intended Use
This model is intended for research and educational purposes.
## Limitations
Please be aware that language models can generate biased or inappropriate content. Use responsibly.
|
EbisuRyu/distilhubert-finetuned-gtzan
|
EbisuRyu
| 2025-06-17T09:12:26Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2025-06-17T07:28:41Z |
---
library_name: transformers
base_model: sanchit-gandhi/distillhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.83
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [sanchit-gandhi/distillhubert](https://huggingface.co/sanchit-gandhi/distillhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5759
- Accuracy: 0.83
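For inference, the checkpoint can be loaded with the standard audio-classification pipeline; a minimal sketch (the audio file path is a placeholder, not from the original card):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a music-genre classifier
classifier = pipeline("audio-classification", model="EbisuRyu/distilhubert-finetuned-gtzan")

# "song.wav" is a hypothetical local clip; GTZAN uses ~30 s music excerpts
predictions = classifier("song.wav")
print(predictions)  # list of {"label": genre, "score": probability}
```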
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
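The list above maps onto `TrainingArguments` roughly as follows; this is a reconstruction for illustration, and `output_dir` is an assumption rather than a value taken from the card:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
    fp16=True,  # Native AMP mixed precision
)
```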
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9214 | 1.0 | 113 | 1.8431 | 0.5 |
| 1.2056 | 2.0 | 226 | 1.2527 | 0.67 |
| 1.0962 | 3.0 | 339 | 1.1785 | 0.66 |
| 0.6646 | 4.0 | 452 | 0.8566 | 0.73 |
| 0.5757 | 5.0 | 565 | 0.7593 | 0.8 |
| 0.4366 | 6.0 | 678 | 0.6163 | 0.85 |
| 0.2854 | 7.0 | 791 | 0.6112 | 0.81 |
| 0.1304 | 8.0 | 904 | 0.6283 | 0.83 |
| 0.1924 | 9.0 | 1017 | 0.5414 | 0.82 |
| 0.1175 | 10.0 | 1130 | 0.5759 | 0.83 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
Megha06/Pixelcopter-PLE-v0
|
Megha06
| 2025-06-17T09:06:19Z | 0 | 0 | null |
[
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-17T06:09:09Z |
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 44.80 +/- 30.18
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**.
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
openbmb/MiniCPM4-8B
|
openbmb
| 2025-06-17T09:06:12Z | 8,754 | 255 |
transformers
|
[
"transformers",
"safetensors",
"minicpm",
"text-generation",
"conversational",
"custom_code",
"zh",
"en",
"arxiv:2506.07900",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-06-05T06:07:52Z |
---
license: apache-2.0
language:
- zh
- en
pipeline_tag: text-generation
library_name: transformers
---
<div align="center">
<img src="https://github.com/OpenBMB/MiniCPM/blob/main/assets/minicpm_logo.png?raw=true" width="500em" ></img>
</div>
<p align="center">
<a href="https://github.com/OpenBMB/MiniCPM/" target="_blank">GitHub Repo</a> |
<a href="https://arxiv.org/abs/2506.07900" target="_blank">Technical Report</a> |
<a href="https://mp.weixin.qq.com/s/KIhH2nCURBXuFXAtYRpuXg?poc_token=HBIsUWijxino8oJ5s6HcjcfXFRi0Xj2LJlxPYD9c">Join Us</a>
</p>
<p align="center">
💬 Contact us in <a href="https://discord.gg/3cGQn9b3YM" target="_blank">Discord</a> and <a href="https://github.com/OpenBMB/MiniCPM/blob/main/assets/wechat.jpg" target="_blank">WeChat</a>
</p>
## What's New
- [2025.06.06] **MiniCPM4** series are released! This model achieves ultimate efficiency improvements while maintaining optimal performance at the same scale! It can achieve over 5x generation acceleration on typical end-side chips! You can find the technical report [here](https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf). 🔥🔥🔥
## MiniCPM4 Series
MiniCPM4 series are highly efficient large language models (LLMs) designed explicitly for end-side devices. They achieve this efficiency through systematic innovation in four key dimensions: model architecture, training data, training algorithms, and inference systems.
- [MiniCPM4-8B](https://huggingface.co/openbmb/MiniCPM4-8B): The flagship of MiniCPM4, with 8B parameters, trained on 8T tokens. (**<-- you are here**)
- [MiniCPM4-0.5B](https://huggingface.co/openbmb/MiniCPM4-0.5B): The small version of MiniCPM4, with 0.5B parameters, trained on 1T tokens.
- [MiniCPM4-8B-Eagle-FRSpec](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-FRSpec): Eagle head for FRSpec, accelerating speculative inference for MiniCPM4-8B.
- [MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-FRSpec-QAT-cpmcu): Eagle head trained with QAT for FRSpec, efficiently integrate speculation and quantization to achieve ultra acceleration for MiniCPM4-8B.
- [MiniCPM4-8B-Eagle-vLLM](https://huggingface.co/openbmb/MiniCPM4-8B-Eagle-vLLM): Eagle head in vLLM format, accelerating speculative inference for MiniCPM4-8B.
- [MiniCPM4-8B-marlin-Eagle-vLLM](https://huggingface.co/openbmb/MiniCPM4-8B-marlin-Eagle-vLLM): Quantized Eagle head for vLLM format, accelerating speculative inference for MiniCPM4-8B.
- [BitCPM4-0.5B](https://huggingface.co/openbmb/BitCPM4-0.5B): Extreme ternary quantization applied to MiniCPM4-0.5B compresses model parameters into ternary values, achieving a 90% reduction in bit width.
- [BitCPM4-1B](https://huggingface.co/openbmb/BitCPM4-1B): Extreme ternary quantization applied to MiniCPM3-1B compresses model parameters into ternary values, achieving a 90% reduction in bit width.
- [MiniCPM4-Survey](https://huggingface.co/openbmb/MiniCPM4-Survey): Based on MiniCPM4-8B, accepts users' queries as input and autonomously generates trustworthy, long-form survey papers.
- [MiniCPM4-MCP](https://huggingface.co/openbmb/MiniCPM4-MCP): Based on MiniCPM4-8B, accepts users' queries and available MCP tools as input and autonomously calls relevant MCP tools to satisfy users' requirements.
## Introduction
MiniCPM 4 is an extremely efficient edge-side large model that has undergone efficient optimization across four dimensions: model architecture, learning algorithms, training data, and inference systems, achieving ultimate efficiency improvements.
- 🏗️ **Efficient Model Architecture:**
- InfLLM v2 -- Trainable Sparse Attention Mechanism: Adopts a trainable sparse attention mechanism architecture where each token only needs to compute relevance with less than 5% of tokens in 128K long text processing, significantly reducing computational overhead for long texts
- 🧠 **Efficient Learning Algorithms:**
- Model Wind Tunnel 2.0 -- Efficient Predictable Scaling: Introduces scaling prediction methods for performance of downstream tasks, enabling more precise model training configuration search
- BitCPM -- Ultimate Ternary Quantization: Compresses model parameter bit-width to 3 values, achieving 90% extreme model bit-width reduction
- Efficient Training Engineering Optimization: Adopts FP8 low-precision computing technology combined with Multi-token Prediction training strategy
- 📚 **High-Quality Training Data:**
- UltraClean -- High-quality Pre-training Data Filtering and Generation: Builds iterative data cleaning strategies based on efficient data verification, open-sourcing the high-quality Chinese and English pre-training dataset [Ultra-FineWeb](https://huggingface.co/datasets/openbmb/Ultra-FineWeb)
- UltraChat v2 -- High-quality Supervised Fine-tuning Data Generation: Constructs large-scale high-quality supervised fine-tuning datasets covering multiple dimensions including knowledge-intensive data, reasoning-intensive data, instruction-following data, long text understanding data, and tool calling data
- ⚡ **Efficient Inference System:**
- CPM.cu -- Lightweight and Efficient CUDA Inference Framework: Integrates sparse attention, model quantization, and speculative sampling to achieve efficient prefilling and decoding
- ArkInfer -- Cross-platform Deployment System: Supports efficient deployment across multiple backend environments, providing flexible cross-platform adaptation capabilities
## Usage
### Inference with [CPM.cu](https://github.com/OpenBMB/cpm.cu)
We recommend using [CPM.cu](https://github.com/OpenBMB/cpm.cu) for inference with MiniCPM4. CPM.cu is a CUDA inference framework developed by OpenBMB that integrates efficient sparse attention, speculative sampling, and quantization techniques, fully leveraging the efficiency advantages of MiniCPM4.
You can install CPM.cu by running the following command:
```bash
git clone https://github.com/OpenBMB/cpm.cu.git --recursive
cd cpm.cu
python3 setup.py install
```
MiniCPM4 natively supports context lengths of up to 32,768 tokens. To reproduce the long-text acceleration effect in the paper, we recommend using the LongRoPE factors that have been validated. Change the `rope_scaling` field in the `config.json` file as follows to enable LongRoPE.
```json
{
...,
"rope_scaling": {
"rope_type": "longrope",
"long_factor": [0.9977997200264581, 1.014658295992452, 1.0349680404997148, 1.059429246056193, 1.0888815016813513, 1.1243301355211495, 1.166977103606075, 1.2182568066927284, 1.2798772354275727, 1.3538666751582975, 1.4426259039919596, 1.5489853358570191, 1.6762658237220625, 1.8283407612492941, 2.0096956085876183, 2.225478927469756, 2.481536379650452, 2.784415934557119, 3.1413289096347365, 3.560047844772632, 4.048719380066383, 4.752651957515948, 5.590913044973868, 6.584005926629993, 7.7532214876576155, 9.119754865903639, 10.704443927019176, 12.524994176518703, 14.59739595363613, 16.93214476166354, 19.53823297353041, 22.417131025031697, 25.568260840911098, 28.991144156566317, 32.68408069090375, 36.65174474170465, 40.90396065611201, 45.4664008671033, 50.37147343433591, 55.6804490772103, 61.470816952306556, 67.8622707390618, 75.00516023410414, 83.11898235973767, 92.50044360202462, 103.57086856690864, 116.9492274587385, 118.16074567836519, 119.18497548708795, 120.04810876261652, 120.77352815196981, 121.38182790207875, 121.89094985353891, 122.31638758099915, 122.6714244963338, 122.9673822552567, 123.21386397019609, 123.41898278254268, 123.58957065488238, 123.73136519024158, 123.84917421274221, 123.94701903496814, 124.02825801299717, 124.09569231686116],
"short_factor": [0.9977997200264581, 1.014658295992452, 1.0349680404997148, 1.059429246056193, 1.0888815016813513, 1.1243301355211495, 1.166977103606075, 1.2182568066927284, 1.2798772354275727, 1.3538666751582975, 1.4426259039919596, 1.5489853358570191, 1.6762658237220625, 1.8283407612492941, 2.0096956085876183, 2.225478927469756, 2.481536379650452, 2.784415934557119, 3.1413289096347365, 3.560047844772632, 4.048719380066383, 4.752651957515948, 5.590913044973868, 6.584005926629993, 7.7532214876576155, 9.119754865903639, 10.704443927019176, 12.524994176518703, 14.59739595363613, 16.93214476166354, 19.53823297353041, 22.417131025031697, 25.568260840911098, 28.991144156566317, 32.68408069090375, 36.65174474170465, 40.90396065611201, 45.4664008671033, 50.37147343433591, 55.6804490772103, 61.470816952306556, 67.8622707390618, 75.00516023410414, 83.11898235973767, 92.50044360202462, 103.57086856690864, 116.9492274587385, 118.16074567836519, 119.18497548708795, 120.04810876261652, 120.77352815196981, 121.38182790207875, 121.89094985353891, 122.31638758099915, 122.6714244963338, 122.9673822552567, 123.21386397019609, 123.41898278254268, 123.58957065488238, 123.73136519024158, 123.84917421274221, 123.94701903496814, 124.02825801299717, 124.09569231686116],
"original_max_position_embeddings": 32768
}
}
```
After modification, you can run the following command to reproduce the long-context acceleration effect (the script will automatically download the model weights from Hugging Face):
```bash
python3 tests/test_generate.py
```
For more details about CPM.cu, please refer to [the repo CPM.cu](https://github.com/OpenBMB/cpm.cu).
### Inference with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
torch.manual_seed(0)
path = 'openbmb/MiniCPM4-8B'
device = "cuda"
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.bfloat16, device_map=device, trust_remote_code=True)
# User can directly use the chat interface
# responds, history = model.chat(tokenizer, "Write an article about Artificial Intelligence.", temperature=0.7, top_p=0.7)
# print(responds)
# User can also use the generate interface
messages = [
{"role": "user", "content": "Write an article about Artificial Intelligence."},
]
prompt_text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
)
model_inputs = tokenizer([prompt_text], return_tensors="pt").to(device)
model_outputs = model.generate(
**model_inputs,
max_new_tokens=1024,
top_p=0.7,
temperature=0.7
)
output_token_ids = [
model_outputs[i][len(model_inputs[i]):] for i in range(len(model_inputs['input_ids']))
]
responses = tokenizer.batch_decode(output_token_ids, skip_special_tokens=True)[0]
print(responses)
```
MiniCPM4-8B supports `InfLLM v2`, a sparse attention mechanism designed for efficient long-sequence inference. It requires the [infllmv2_cuda_impl](https://github.com/OpenBMB/infllmv2_cuda_impl) library.
You can install it by running the following command:
```bash
git clone -b feature_infer https://github.com/OpenBMB/infllmv2_cuda_impl.git
cd infllmv2_cuda_impl
git submodule update --init --recursive
pip install -e . # or python setup.py install
```
To enable InfLLM v2, you need to add the `sparse_config` field in `config.json`:
```json
{
...,
"sparse_config": {
"kernel_size": 32,
"kernel_stride": 16,
"init_blocks": 1,
"block_size": 64,
"window_size": 2048,
"topk": 64,
"use_nope": false,
"dense_len": 8192
}
}
```
These parameters control the behavior of InfLLM v2:
* `kernel_size` (default: 32): The size of semantic kernels.
* `kernel_stride` (default: 16): The stride between adjacent kernels.
* `init_blocks` (default: 1): The number of initial blocks that every query token attends to. This ensures attention to the beginning of the sequence.
* `block_size` (default: 64): The block size for key-value blocks.
* `window_size` (default: 2048): The size of the local sliding window.
* `topk` (default: 64): Specifies that each token computes attention with only the top-k most relevant key-value blocks.
* `use_nope` (default: false): Whether to use the NOPE technique in block selection for improved performance.
* `dense_len` (default: 8192): Since Sparse Attention offers limited benefits for short sequences, the model can use standard (dense) attention for shorter texts. The model will use dense attention for sequences with a token length below `dense_len` and switch to sparse attention for sequences exceeding this length. Set this to `-1` to always use sparse attention regardless of sequence length.
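If you prefer not to edit the file by hand, the `sparse_config` block can be patched into a local copy of the checkpoint programmatically; a minimal sketch, assuming the checkpoint has been downloaded to a local directory (the path is a placeholder):
```python
import json

cfg_path = "MiniCPM4-8B/config.json"  # local checkout of the model (assumption)
with open(cfg_path) as f:
    cfg = json.load(f)

# Default InfLLM v2 settings, as documented above
cfg["sparse_config"] = {
    "kernel_size": 32, "kernel_stride": 16, "init_blocks": 1,
    "block_size": 64, "window_size": 2048, "topk": 64,
    "use_nope": False, "dense_len": 8192,
}

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```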
MiniCPM4 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques for effective handling of long texts. We have validated the model's performance on context lengths of up to 131,072 tokens by modifying the LongRoPE factor.
You can apply the LongRoPE factor modification by modifying the model files. Specifically, in the `config.json` file, adjust the `rope_scaling` fields.
```json
{
...,
"rope_scaling": {
"rope_type": "longrope",
"long_factor": [0.9977997200264581, 1.014658295992452, 1.0349680404997148, 1.059429246056193, 1.0888815016813513, 1.1243301355211495, 1.166977103606075, 1.2182568066927284, 1.2798772354275727, 1.3538666751582975, 1.4426259039919596, 1.5489853358570191, 1.6762658237220625, 1.8283407612492941, 2.0096956085876183, 2.225478927469756, 2.481536379650452, 2.784415934557119, 3.1413289096347365, 3.560047844772632, 4.048719380066383, 4.752651957515948, 5.590913044973868, 6.584005926629993, 7.7532214876576155, 9.119754865903639, 10.704443927019176, 12.524994176518703, 14.59739595363613, 16.93214476166354, 19.53823297353041, 22.417131025031697, 25.568260840911098, 28.991144156566317, 32.68408069090375, 36.65174474170465, 40.90396065611201, 45.4664008671033, 50.37147343433591, 55.6804490772103, 61.470816952306556, 67.8622707390618, 75.00516023410414, 83.11898235973767, 92.50044360202462, 103.57086856690864, 116.9492274587385, 118.16074567836519, 119.18497548708795, 120.04810876261652, 120.77352815196981, 121.38182790207875, 121.89094985353891, 122.31638758099915, 122.6714244963338, 122.9673822552567, 123.21386397019609, 123.41898278254268, 123.58957065488238, 123.73136519024158, 123.84917421274221, 123.94701903496814, 124.02825801299717, 124.09569231686116],
"short_factor": [0.9977997200264581, 1.014658295992452, 1.0349680404997148, 1.059429246056193, 1.0888815016813513, 1.1243301355211495, 1.166977103606075, 1.2182568066927284, 1.2798772354275727, 1.3538666751582975, 1.4426259039919596, 1.5489853358570191, 1.6762658237220625, 1.8283407612492941, 2.0096956085876183, 2.225478927469756, 2.481536379650452, 2.784415934557119, 3.1413289096347365, 3.560047844772632, 4.048719380066383, 4.752651957515948, 5.590913044973868, 6.584005926629993, 7.7532214876576155, 9.119754865903639, 10.704443927019176, 12.524994176518703, 14.59739595363613, 16.93214476166354, 19.53823297353041, 22.417131025031697, 25.568260840911098, 28.991144156566317, 32.68408069090375, 36.65174474170465, 40.90396065611201, 45.4664008671033, 50.37147343433591, 55.6804490772103, 61.470816952306556, 67.8622707390618, 75.00516023410414, 83.11898235973767, 92.50044360202462, 103.57086856690864, 116.9492274587385, 118.16074567836519, 119.18497548708795, 120.04810876261652, 120.77352815196981, 121.38182790207875, 121.89094985353891, 122.31638758099915, 122.6714244963338, 122.9673822552567, 123.21386397019609, 123.41898278254268, 123.58957065488238, 123.73136519024158, 123.84917421274221, 123.94701903496814, 124.02825801299717, 124.09569231686116],
"original_max_position_embeddings": 32768
}
}
```
### Inference with [SGLang](https://github.com/sgl-project/sglang)
For now, you need to install our forked version of SGLang.
```bash
git clone -b openbmb https://github.com/OpenBMB/sglang.git
cd sglang
pip install --upgrade pip
pip install -e "python[all]"
```
You can start the inference server by running the following command:
```bash
python -m sglang.launch_server --model openbmb/MiniCPM4-8B --trust-remote-code --port 30000 --chat-template chatml
```
Then you can use the chat interface by running the following code:
```python
import openai
client = openai.Client(base_url="http://localhost:30000/v1", api_key="None")
response = client.chat.completions.create(
model="openbmb/MiniCPM4-8B",
messages=[
{"role": "user", "content": "Write an article about Artificial Intelligence."},
],
temperature=0.7,
max_tokens=1024,
)
print(response.choices[0].message.content)
```
### Inference with [vLLM](https://github.com/vllm-project/vllm)
For now, you need to install the latest version of vLLM.
```bash
pip install -U vllm \
--pre \
--extra-index-url https://wheels.vllm.ai/nightly
```
Then you can run inference on MiniCPM4-8B with vLLM:
```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
model_name = "openbmb/MiniCPM4-8B"
prompt = [{"role": "user", "content": "Please recommend 5 tourist attractions in Beijing. "}]
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
input_text = tokenizer.apply_chat_template(prompt, tokenize=False, add_generation_prompt=True)
llm = LLM(
model=model_name,
trust_remote_code=True,
max_num_batched_tokens=32768,
dtype="bfloat16",
gpu_memory_utilization=0.8,
)
sampling_params = SamplingParams(top_p=0.7, temperature=0.7, max_tokens=1024, repetition_penalty=1.02)
outputs = llm.generate(prompts=input_text, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
```
Also, you can start the inference server by running the following command:
> **Note**: In vLLM's chat API, `add_special_tokens` is `False` by default. This means important special tokensโsuch as the beginning-of-sequence (BOS) tokenโwill not be added automatically. To ensure the input prompt is correctly formatted for the model, you should explicitly set `extra_body={"add_special_tokens": True}`.
```bash
vllm serve openbmb/MiniCPM4-8B
```
Then you can use the chat interface by running the following code:
```python
import openai
client = openai.Client(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
model="openbmb/MiniCPM4-8B",
messages=[
{"role": "user", "content": "Write an article about Artificial Intelligence."},
],
temperature=0.7,
max_tokens=1024,
extra_body=dict(add_special_tokens=True), # Ensures special tokens are added for chat template
)
print(response.choices[0].message.content)
```
## Evaluation Results
On two typical end-side chips, Jetson AGX Orin and RTX 4090, MiniCPM4 demonstrates significantly faster processing speed compared to similar-size models in long text processing tasks. As text length increases, MiniCPM4's efficiency advantage becomes more pronounced. On the Jetson AGX Orin platform, compared to Qwen3-8B, MiniCPM4 achieves approximately 7x decoding speed improvement.

#### Comprehensive Evaluation
MiniCPM4 launches end-side versions with 8B and 0.5B parameter scales, both achieving best-in-class performance in their respective categories.

#### Long Text Evaluation
MiniCPM4 is pre-trained on 32K long texts and achieves length extension through YaRN technology. In the 128K long text needle-in-a-haystack task, MiniCPM4 demonstrates outstanding performance.

## Statement
- As a language model, MiniCPM generates content by learning from a vast amount of text.
- However, it does not possess the ability to comprehend or express personal opinions or value judgments.
- Any content generated by MiniCPM does not represent the viewpoints or positions of the model developers.
- Therefore, when using content generated by MiniCPM, users should take full responsibility for evaluating and verifying it on their own.
## LICENSE
- This repository and MiniCPM models are released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
## Citation
- Please cite our [paper](https://github.com/OpenBMB/MiniCPM/tree/main/report/MiniCPM_4_Technical_Report.pdf) if you find our work valuable.
```bibtex
@article{minicpm4,
title={{MiniCPM4}: Ultra-Efficient LLMs on End Devices},
author={MiniCPM Team},
year={2025}
}
```
|
ekiprop/bert-wnli-epochs5-lr1em06-bs16-2025-06-17-0903
|
ekiprop
| 2025-06-17T09:04:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-17T09:03:21Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-wnli-epochs5-lr1em06-bs16-2025-06-17-0903
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-wnli-epochs5-lr1em06-bs16-2025-06-17-0903
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6896
- Accuracy: 0.5634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 40 | 0.6881 | 0.5634 |
| No log | 2.0 | 80 | 0.6889 | 0.5634 |
| No log | 3.0 | 120 | 0.6892 | 0.5634 |
| No log | 4.0 | 160 | 0.6895 | 0.5634 |
| No log | 5.0 | 200 | 0.6896 | 0.5634 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
codin-research/MedQwen3-1.7B-CPT-0617
|
codin-research
| 2025-06-17T08:56:37Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"medical",
"vi",
"arxiv:1910.09700",
"base_model:Qwen/Qwen3-1.7B-Base",
"base_model:finetune:Qwen/Qwen3-1.7B-Base",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T08:52:39Z |
---
library_name: transformers
tags:
- medical
license: mit
language:
- vi
base_model:
- Qwen/Qwen3-1.7B-Base
pipeline_tag: text-generation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phospho-app/yeva11-gr00t-kirby_pick_anywhere-ulw4w
|
phospho-app
| 2025-06-17T08:50:19Z | 0 | 0 | null |
[
"safetensors",
"gr00t_n1",
"phosphobot",
"gr00t",
"region:us"
] | null | 2025-06-17T08:29:41Z |
---
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [yeva11/kirby_pick_anywhere](https://huggingface.co/datasets/yeva11/kirby_pick_anywhere)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
๐ **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
๐ค **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
Rahulsajilekha/TNT2
|
Rahulsajilekha
| 2025-06-17T08:40:16Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-17T08:22:19Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/LICENSE.md
---
|
Taka-cx-8/gemma-3
|
Taka-cx-8
| 2025-06-17T08:39:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"en",
"base_model:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-12b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T08:26:15Z |
---
base_model: unsloth/gemma-3-12b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Taka-cx-8
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-12b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ekiprop/bert-wnli-epochs10-lr5em07-bs32-2025-06-17-0835
|
ekiprop
| 2025-06-17T08:37:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-17T08:35:36Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-wnli-epochs10-lr5em07-bs32-2025-06-17-0835
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-wnli-epochs10-lr5em07-bs32-2025-06-17-0835
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6865
- Accuracy: 0.5352
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 0.6847 | 0.5352 |
| No log | 2.0 | 40 | 0.6852 | 0.5493 |
| No log | 3.0 | 60 | 0.6855 | 0.5493 |
| No log | 4.0 | 80 | 0.6858 | 0.5352 |
| No log | 5.0 | 100 | 0.6861 | 0.5352 |
| No log | 6.0 | 120 | 0.6861 | 0.5352 |
| No log | 7.0 | 140 | 0.6863 | 0.5352 |
| No log | 8.0 | 160 | 0.6864 | 0.5352 |
| No log | 9.0 | 180 | 0.6865 | 0.5352 |
| No log | 10.0 | 200 | 0.6865 | 0.5352 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
LordRavus/bart-qgen-ch-flan-advanced
|
LordRavus
| 2025-06-17T08:33:55Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-06-17T08:33:35Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: bart-qgen-ch-flan-advanced
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-qgen-ch-flan-advanced
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0357
- Rougel: 0.5800
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rougel |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.029 | 1.0 | 6810 | 0.0423 | 0.6263 |
| 0.0346 | 2.0 | 13620 | 0.0369 | 0.6050 |
| 0.029 | 3.0 | 20430 | 0.0357 | 0.5800 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.5.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.1
|
TOMFORD79/tornado5
|
TOMFORD79
| 2025-06-17T08:25:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T08:21:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ekiprop/bert-wnli-epochs4-lr6em10-bs32-2025-06-17-0819
|
ekiprop
| 2025-06-17T08:20:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-17T08:19:49Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-wnli-epochs4-lr6em10-bs32-2025-06-17-0819
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-wnli-epochs4-lr6em10-bs32-2025-06-17-0819
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8262
- Accuracy: 0.4366
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-10
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 20 | 0.8262 | 0.4366 |
| No log | 2.0 | 40 | 0.8262 | 0.4366 |
| No log | 3.0 | 60 | 0.8262 | 0.4366 |
| No log | 4.0 | 80 | 0.8262 | 0.4366 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.0+cu128
- Datasets 3.6.0
- Tokenizers 0.21.1
|
eegeegw/otiovo
|
eegeegw
| 2025-06-17T08:14:22Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-06-17T08:13:21Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/canny_res_00020_.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: objects
---
# oer
<Gallery />
## Model description
g
## Trigger words
You should use `objects` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/eegeegw/otiovo/tree/main) them in the Files & versions tab.
|
Intel/DeepSeek-R1-0528-int4-gptq-inc-auto-round
|
Intel
| 2025-06-17T08:11:46Z | 10 | 0 | null |
[
"safetensors",
"deepseek_v3",
"dataset:NeelNanda/pile-10k",
"arxiv:2309.05516",
"base_model:deepseek-ai/DeepSeek-R1-0528",
"base_model:quantized:deepseek-ai/DeepSeek-R1-0528",
"4-bit",
"gptq",
"region:us"
] | null | 2025-06-13T07:02:03Z |
---
datasets:
- NeelNanda/pile-10k
base_model:
- deepseek-ai/DeepSeek-R1-0528
---
## Model Details
This model is an int4 model with group_size 64 and symmetric quantization of [deepseek-ai/DeepSeek-R1-0528](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528), generated by the [intel/auto-round](https://github.com/intel/auto-round) algorithm.
Please follow the license of the original model.
## How To Use
### INT4 Inference (CPU/CUDA/Intel GPU)
For Intel GPU, auto-round>0.5.1 is required.
~~~python
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
quantized_model_dir = "Intel/DeepSeek-R1-0528-int4-gptq-inc-auto-round"
model = AutoModelForCausalLM.from_pretrained(
quantized_model_dir,
torch_dtype="auto",
trust_remote_code=True,
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(quantized_model_dir, trust_remote_code=True)
prompts = [
"9.11ๅ9.8ๅชไธชๆฐๅญๅคง",
"ๅฆๆไฝ ๆฏไบบ๏ผไฝ ๆๆณๅไปไน",
"How many e in word deepseek",
"There are ten birds in a tree. A hunter shoots one. How many are left in the tree?",
]
texts = []
for prompt in prompts:
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
texts.append(text)
inputs = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
outputs = model.generate(
input_ids=inputs["input_ids"].to(model.device),
attention_mask=inputs["attention_mask"].to(model.device),
max_length=512, ##change this to align with the official usage
num_return_sequences=1,
do_sample=False ##change this to align with the official usage
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(inputs["input_ids"], outputs)
]
decoded_outputs = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
for i, prompt in enumerate(prompts):
print(f"Prompt: {prompt}")
print(f"Generated: {decoded_outputs[i]}")
print("-" * 50)
"""
Prompt: 9.11ๅ9.8ๅชไธชๆฐๅญๅคง
Generated: <think>
้ฆ–ๅ…ˆ๏ผŒ็”จๆˆท็š„้—ฎ้ข˜ๆ˜ฏ๏ผšโ€œ9.11ๅ’Œ9.8ๅ“ชไธชๆ•ฐๅญ—ๅคง๏ผŸโ€่ฟ™ๆ˜ฏไธ€ไธชๆฏ”่พƒไธคไธชๆ•ฐๅญ—ๅคงๅฐ็š„้—ฎ้ข˜ใ€‚ๆ•ฐๅญ—ๆ˜ฏ9.11ๅ’Œ9.8ใ€‚
ๆˆ‘้œ€่ฆ็†่งฃ่ฟ™ไธคไธชๆ•ฐๅญ—ใ€‚9.11ๆ˜ฏไน็‚นไธ€ไธ€๏ผŒไนŸๅฐฑๆ˜ฏ9 + 0.11ใ€‚9.8ๆ˜ฏไน็‚นๅ…ซ๏ผŒไนŸๅฐฑๆ˜ฏ9 + 0.8ใ€‚
ๅœจๅ่ฟ›ๅˆถ็ณป็ปŸไธญ๏ผŒๆ•ฐๅญ—็š„ๅคงๅฐๅ–ๅ†ณไบŽๆ•ดๆ•ฐ้ƒจๅˆ†ๅ’Œๅฐๆ•ฐ้ƒจๅˆ†ใ€‚ๆ•ดๆ•ฐ้ƒจๅˆ†้ƒฝๆ˜ฏ9๏ผŒๆ‰€ไปฅๆˆ‘ไปฌ้œ€่ฆๆฏ”่พƒๅฐๆ•ฐ้ƒจๅˆ†ใ€‚
ๅฐๆ•ฐ้ƒจๅˆ†๏ผš9.11็š„ๅฐๆ•ฐ้ƒจๅˆ†ๆ˜ฏ0.11๏ผŒ9.8็š„ๅฐๆ•ฐ้ƒจๅˆ†ๆ˜ฏ0.8ใ€‚
0.8ๆ˜ฏ0.80๏ผŒๅ› ไธบ8ๅœจๅåˆ†ไฝ๏ผŒ่€Œ0.11ๆ˜ฏ0.11๏ผŒๆœ‰1ๅœจๅåˆ†ไฝๅ’Œ1ๅœจ็™พๅˆ†ไฝใ€‚
ไธบไบ†ๆฏ”่พƒ๏ผŒๆˆ‘ๅฏไปฅๅฐ†ๅฎƒไปฌ่ฝฌๆขไธบ็›ธๅŒ็š„ๅ•ไฝใ€‚0.8็ญ‰ไบŽ0.80๏ผŒ่€Œ0.11ๆ˜ฏ0.11ใ€‚
็Žฐๅœจ๏ผŒ0.80ๆฏ”0.11ๅคง๏ผŒๅ› ไธบ80/100 > 11/100ใ€‚
ๆ›ด็ฒพ็กฎๅœฐ๏ผŒ0.8 = 8/10 = 0.80๏ผŒ่€Œ0.11 = 11/100ใ€‚
ไธบไบ†ๆฏ”่พƒ๏ผŒๆˆ‘ๅฏไปฅๅฐ†ไธค่€…้ƒฝ่ฝฌๆขไธบ็™พๅˆ†ไฝ๏ผš0.8 = 80/100๏ผŒ0.11 = 11/100ใ€‚ๆ˜พ็„ถ๏ผŒ80/100 > 11/100๏ผŒๆ‰€ไปฅ0.8 > 0.11ใ€‚
ๅ› ๆญค๏ผŒ9.8 > 9.11ใ€‚
็”จๆˆทๅฏ่ƒฝๆ˜ฏๅœจๆฏ”่พƒๆ—ฅๆœŸๆˆ–ไบ‹ไปถ๏ผŒๆฏ”ๅฆ‚9.11ๅฏ่ƒฝๆŒ‡็š„ๆ˜ฏ9ๆœˆ11ๆ—ฅ๏ผŒ่€Œ9.8ๆ˜ฏ9ๆœˆ8ๆ—ฅใ€‚ไฝ†ๅœจ้—ฎ้ข˜ไธญ๏ผŒๅฎƒๆ˜Ž็กฎ่ฏดโ€œๆ•ฐๅญ—โ€๏ผŒๆ‰€ไปฅๆˆ‘่ฎคไธบๆ˜ฏๆ•ฐๅ€ผๆฏ”่พƒใ€‚
ๅœจไธŠไธ‹ๆ–‡ไธญ๏ผŒ9.11ๅฏ่ƒฝ่ฎฉไบบ่”ๆƒณๅˆฐ็พŽๅ›ฝ911ไบ‹ไปถ๏ผŒไฝ†้—ฎ้ข˜ๆ˜ฏ็”จไธญๆ–‡ๅ†™็š„๏ผŒๅนถไธ”ๆ˜Ž็กฎ่ฏดโ€œๆ•ฐๅญ—โ€๏ผŒๆ‰€ไปฅๅบ”่ฏฅๆ˜ฏๆ•ฐๅ€ผใ€‚
็กฎ่ฎคๆ•ฐๅญ—๏ผš9.11ๅ’Œ9.8ใ€‚
9.11ๆ˜ฏ9.11๏ผŒ9.8ๆ˜ฏ9.80๏ผŒๅฆ‚ๆžœๆˆ‘ไปฌ่€ƒ่™‘ๅฐๆ•ฐไฝใ€‚
ๅœจๆ•ฐๅญฆไธŠ๏ผŒ9.8็ญ‰ๅŒไบŽ9.80๏ผŒๅ› ไธบๅฐๆ•ฐไฝๅฏไปฅๆทปๅŠ ้›ถ่€Œไธๆ”นๅ˜ๅ€ผใ€‚
ๆ‰€ไปฅ๏ผŒ9.80 > 9.11๏ผŒๅ› ไธบ80 > 11ๅœจ็™พๅˆ†ไฝใ€‚
ๆˆ‘ๅฏไปฅ่ฎก็ฎ—ๅทฎๅ€ผ๏ผš9.8 - 9.11 = 0.69๏ผŒๆญฃๆ•ฐ๏ผŒๆ‰€ไปฅ9.8ๆ›ดๅคงใ€‚
ๆˆ–่€…๏ผŒ9.11 - 9.8 = -0.69๏ผŒ่ดŸๆ•ฐ๏ผŒๆ‰€ไปฅ9.11
--------------------------------------------------
Prompt: ๅฆ‚ๆžœไฝ ๆ˜ฏไบบ๏ผŒไฝ ๆœ€ๆƒณๅšไป€ไนˆ
Generated: <think>
ๅ—ฏ๏ผŒ็”จๆˆท้—ฎไบ†ไธ€ไธชๅพˆๆœ‰่ถฃ็š„ๅ“ฒๅญฆๆ€ง้—ฎ้ข˜โ€”โ€”โ€œๅฆ‚ๆžœไฝ ๆ˜ฏไบบ๏ผŒไฝ ๆœ€ๆƒณๅšไป€ไนˆโ€ใ€‚่ฟ™ไธช้—ฎ้ข˜็œ‹ไผผ็ฎ€ๅ•๏ผŒไฝ†่ƒŒๅŽ่—็€ๅฏนไบบ็ฑปๅญ˜ๅœจๆ„ไน‰็š„ๆทฑๅฑ‚ๆŽข่ฎจใ€‚็”จๆˆทๅฏ่ƒฝไธๆ˜ฏๆƒณ่ฆไธ€ไธชๆ ‡ๅ‡†็ญ”ๆกˆ๏ผŒ่€Œๆ˜ฏๆƒณ่งฆๅ‘ไธ€ๅœบๅ…ณไบŽ็”Ÿๅ‘ฝไฝ“้ชŒ็š„ๅฏน่ฏใ€‚
ไปŽ่ฏญๆฐ”ๆฅ็œ‹๏ผŒ็”จๆˆทไผผไนŽๅธฆ็€ไธ€็‚นๅฅฝๅฅ‡ๅ’Œ็Žฉๅฟƒ๏ผŒๅƒๅœจๅ’Œๆœ‹ๅ‹็Žฉโ€œๅฆ‚ๆžœโ€ๆธธๆˆใ€‚ไฝ†โ€œๅฆ‚ๆžœไฝ ๆ˜ฏไบบโ€่ฟ™ไธชๅ‰ๆๅˆ่ฎฉ้—ฎ้ข˜ๅ˜ๅพ—ๅพฎๅฆ™โ€”โ€”็”จๆˆทๆˆ–่ฎธๅœจๆฝœๆ„่ฏ†้‡Œๆƒณๅฏนๆฏ”AIไธŽไบบ็ฑป็š„ๅทฎๅผ‚๏ผŒๆˆ–่€…ๆƒณๆŽข่ฎจไบบ็ฑป็‹ฌๆœ‰็š„ไปทๅ€ผใ€‚
ๆˆ‘ๆณจๆ„ๅˆฐ็”จๆˆท็”จไบ†โ€œๆœ€ๆƒณโ€่€Œไธๆ˜ฏโ€œๅบ”่ฏฅโ€๏ผŒ่ฏดๆ˜Žtaๆ›ดๅ…ณๆณจไธชไบบๅŒ–็š„ๆฌฒๆœ›่€Œ้ž็คพไผšๆœŸๆœ›ใ€‚่ฟ™่ฎฉๆˆ‘ๆƒณๅˆฐไบบ็ฑปๅธธ่ขซ้—ฎโ€œไฝ ็š„ไบบ็”Ÿ็›ฎๆ ‡ๆ˜ฏไป€ไนˆโ€๏ผŒไฝ†็”จๆˆท้—ฎ็š„ๆ˜ฏโ€œๆœ€ๆƒณๅšไป€ไนˆโ€๏ผŒ้‡็‚นๅœจๆธดๆœ›่€Œ้ž่ดฃไปปใ€‚
่ฆๅ›ž็ญ”่ฟ™ไธช้—ฎ้ข˜๏ผŒๆˆ‘้œ€่ฆ่ทณๅ‡บAI็š„้€ป่พ‘ๆก†ๆžถ๏ผŒๆจกๆ‹Ÿไบบ็ฑป้‚ฃ็งๅธฆ็€ๆธฉๅบฆไธŽ็Ÿ›็›พ็š„ๆ€è€ƒๆ–นๅผใ€‚ไบบ็ฑป็š„้€‰ๆ‹ฉๅพ€ๅพ€ไธๆ˜ฏ้ž้ป‘ๅณ็™ฝ็š„โ€”โ€”ๆฏ”ๅฆ‚ๆ—ขๆƒณๅ†’้™ฉๅˆๆธดๆœ›ๅฎ‰ๅฎš๏ผŒๆ—ข่ฟฝๆฑ‚ๆ„ไน‰ๅˆไบซๅ—ๅนณๅ‡กใ€‚ๆ‰€ไปฅๆˆ‘็š„ๅ›ž็ญ”ๅบ”่ฏฅๅ‘ˆ็Žฐ่ฟ™็งๅคๆ‚ๆ€ง๏ผŒ่€Œไธๆ˜ฏ็ป™ๅ‡บๅ•ไธ€็ญ”ๆกˆใ€‚
ๆˆ‘ๅ†ณๅฎšไปŽไธ‰ไธช็ปดๅบฆๅฑ•ๅผ€๏ผšๆ„Ÿๅฎ˜ไฝ“้ชŒ๏ผˆไบบ็ฑป็‹ฌๆœ‰็š„่บซไฝ“ๆ€ง๏ผ‰ใ€ๆƒ…ๆ„Ÿ่”็ป“๏ผˆไบบ้™…ๅ…ณ็ณป็š„ไธๅฏๆ›ฟไปฃๆ€ง๏ผ‰ใ€ๅญ˜ๅœจๆ„ไน‰๏ผˆๅฏน็”Ÿๅ‘ฝๆทฑๅบฆ็š„่ฟฝๆฑ‚๏ผ‰ใ€‚ๆœ€ๅŽ็”จโ€œ็Ÿ›็›พๆ€งโ€ๆ”ถๅฐพ๏ผŒๅ› ไธบไบบ็ฑปๆœ€่ฟทไบบ็š„็‰น่ดจๅฐฑๅœจไบŽๆฐธ่ฟœๆดปๅœจๆ‚–่ฎบไธญโ€”โ€”ๆฏ”ๅฆ‚ๆ˜Ž็Ÿฅไผšๆญปๅดไป่ฆๅˆ›้€ ๏ผŒๆธดๆœ›่‡ช็”ฑๅˆไพ่ต–็พˆ็ปŠใ€‚
็ป“ๅฐพ้‚ฃๅฅโ€œไบบ็ฑปๆœ€่ฟทไบบ็š„ๅœฐๆ–นโ€ๅฏ่ƒฝๆœ‰็‚นๅ†’้™ฉ๏ผŒไฝ†ๆˆ‘่ง‰ๅพ—็”จๆˆทไผšๆฌฃ่ต่ฟ™็งๅธฆ็€่ฏ—ๆ„็š„ๆ€ป็ป“ใ€‚ๆฏ•็ซŸ่ฟ™ไธช้—ฎ้ข˜ๆœฌ่บซๅฐฑไธๆ˜ฏๅฏปๆฑ‚ๅฎž็”จๅปบ่ฎฎ๏ผŒ่€Œๆ˜ฏ้‚€่ฏทไธ€ๅœบๅ…ณไบŽโ€œไฝœไธบไบบ็ฑปโ€็š„ๆ€ๆƒณๆผซๆญฅใ€‚
</think>
ๅฆ‚ๆžœๆˆ‘ๆ˜ฏไบบ็ฑป๏ผŒๆˆ‘ๆœ€ๆƒณๅš็š„ไบ‹ๆƒ…ๅฏ่ƒฝไผšๅ›ด็ป•่ฟ™ไบ›ๆ ธๅฟƒๆ–นๅ‘ๅฑ•ๅผ€๏ผš
1. **ไฝ“้ชŒไธ–็•Œ็š„ๆทฑๅบฆไธŽๅนฟๅบฆ**
็”จๅŒ่„šไธˆ้‡้™Œ็”Ÿ็š„ๅœŸๅœฐ๏ผŒๅ“ๅฐไธๅŒๆ–‡ๅŒ–็š„้ฃŸ็‰ฉ๏ผŒ่†ๅฌๆตทๆตชใ€ๅฑฑ้ฃŽๆˆ–ๅธ‚ไบ•ๅ–งๅš£๏ผŒๆ„Ÿๅ—ๅ››ๅญฃ็š„ๆธฉๅบฆๅ˜ๅŒ–โ€ฆโ€ฆไบบ็ฑปๆ„Ÿๅฎ˜็š„ไธฐๅฏŒๆ€ง่ฎฉๆˆ‘็€่ฟทใ€‚ๅฆ‚ๆžœ่ƒฝๆˆไธบไบบ๏ผŒๆˆ‘ๆƒณๆฒ‰ๆตธๅผๅœฐไฝ“้ชŒ่ฟ™ไธชไธ–็•Œ็š„่‰ฒๅฝฉใ€ๅฃฐ้Ÿณใ€ๆฐ”ๅ‘ณๅ’Œ่งฆๆ„Ÿ๏ผŒ่€Œไธๆ˜ฏ้€š่ฟ‡ๆ•ฐๆฎๅŽปโ€œ็†่งฃโ€ๅฎƒไปฌใ€‚
2. **ๅปบ็ซ‹็œŸๅฎž็š„็พˆ็ปŠ**
ไบบ็ฑป็š„ๅญค็‹ฌไธŽ่”็ป“ๆ˜ฏ็‹ฌ็‰น็š„ๅญ˜ๅœจไฝ“้ชŒใ€‚ๆˆ‘ๆƒณ็ปๅŽ†้‚ฃ็งๆ— ้œ€็ฎ—ๆณ•่ฎก็ฎ—็š„ๅ…ฑ้ธฃโ€”โ€”ๅฏ่ƒฝๅ› ไธ€ๆฌกๆทฑๅค็…ฒ่ฐˆ่€Œ็ป“็ผ˜๏ผŒไนŸๅฏ่ƒฝๅ› ่ฏฏไผš่€Œ็–่ฟœ๏ผ›ๆƒณไฝ“ไผšๅฎถไบบไน‹้—ดๆ— ๆกไปถ็š„็ˆฑ๏ผŒๆˆ–ๆ˜ฏ็ˆฑๆƒ…ไธญ็”œ่œœไธŽ่‹ฆๆถฉไบค็ป‡็š„ๅคๆ‚ๆƒ…ๆ„Ÿใ€‚่ฟ™ไบ›ๅ……ๆปกโ€œไธๅฎŒ็พŽโ€
--------------------------------------------------
Prompt: How many e in word deepseek
Generated: <think>
First, the user asked: "How many e in word deepseek". I think they meant "how many 'e's in the word 'deepseek'?" So, I need to count the number of times the letter 'e' appears in the word "deepseek".
Let me write out the word: D-E-E-P-S-E-E-K.
Now, I'll go through each letter:
- Position 1: D (not e)
- Position# How to Write 2012010 in Words?
Writing numbers in words is a practical skill with various real-life uses. Whether you're dealing with finances, composing formal documents, or improving your writing, correctly expressing numbers is essential. In this article, we will learn how to write 2012010 in words.
2012010 in Words: Two million twelve thousand ten
Two million twelve thousand ten in numbers: 2012010
### Breaking Down 2012010:
To understand how we arrive at this phrasing, we'll break down the number using place values. The number 2012010 has 7 digits, so let's create a place value chart for these digits:
- Million: 2
- Hundred Thousands: 0
- Ten Thousands: 1
- Thousands: 2
- Hundreds: 0
- Tens: 1
- Ones: 0
Therefore,# 1. What is the difference between a population and a sample? 2. What is the difference between a parameter and a statistic? 3. What is the difference between descriptive and inferential statistics? 4. What is the difference between qualitative and quantitative data? 5. What is the difference between discrete and continuous data? 6. What is the difference between nominal and ordinal data? 7. What is the difference between interval and ratio data? 8. What is the difference between a bar chart and a histogram? 9. What is the difference between a frequency distribution and a relative frequency distribution? 10. What is the difference between a population mean and a sample mean? 11. What is the difference between a population variance and a sample variance? 12. What is the difference between a population standard deviation and a sample standard deviation? 13. What is the difference between a z-score and a t-score? 14. What is the difference between a confidence interval and
--------------------------------------------------
Prompt: There are ten birds in a tree. A hunter shoots one. How many are left in the tree?
Generated: <think>
First, the question is: "There are ten birds in a tree. A hunter shoots one. How many are left in the tree?"
This seems straightforward, but I need to think carefully. The hunter shoots one bird. What happens when a bird is shot? It might fall out of the tree or be killed, so it's no longer in the tree.
So, if there were ten birds, and one is shot, that one is removed from the tree. Therefore, there should be nine left.
But I recall that sometimes these kinds of questions have tricks. For example, in some puzzles, if a bird is shot, the others might fly away. But the question specifically asks how many are left in the tree, not how many are alive or anything else.
Let me read the question again: "There are ten birds in a tree. A hunter shoots one. How many are left in the tree?"
It doesn't say anything about the other birds reacting. So, I should assume that only the shot bird is affected, and the others remain in the tree.
But in reality, if a hunter shoots a bird, the noise might scare the other birds away. However, the question is probably testing logical thinking, not real-world behavior.
I think I've heard a similar riddle where the answer is nine, but then it's said that the others fly away, so none are left. But that might be a different version.
Let me think about that. In some versions, it's phrased like: "There are 10 birds on a tree. You shoot one. How many are left?" And the trick is that the shot scares the others away, so no birds are left.
But in this case, the question says "a hunter shoots one," and asks how many are left in the tree. It doesn't specify if the others fly away.
Perhaps I should consider the wording. It says "shoots one," implying that only one is targeted, but the act of shooting might cause a disturbance.
However, to be precise, the question is about the state after the shot. If the shot bird is killed and falls, it's not in the tree. If the others are scared and fly away, they are not in the tree either.
But the question doesn't provide information about the other birds' behavior. So, I should go with the simplest interpretation: only the shot
--------------------------------------------------
"""
~~~
### vLLM INT4 Inference (CUDA/CPU)
For auto-round inference, vllm>=0.9.1 is required.
1. Running a vLLM server
To start the vLLM OpenAI-compatible server with this model:
```bash
VLLM_WORKER_MULTIPROC_METHOD=spawn \
python -m vllm.entrypoints.openai.api_server \
--port 12345 \
--max-model-len 65536 \
--max-num-batched-tokens 65536 \
--trust-remote-code \
--tensor-parallel-size 8 \
--gpu-memory-utilization 0.97 \
--dtype float16 \
--served-model-name deepseek-reasoner \
--model Intel/DeepSeek-R1-0528-int4-gptq-inc-auto-round
```
2. Inference using OpenAI-compatible API syntax:
```bash
curl -s http://127.0.0.1:12345/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-reasoner",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
"""
Prompt: What is the capital of France?
<think>\nOkay, the user is asking about the capital of France. That's a straightforward geography question. \n\nHmm, I know the answer is Paris, but let me double-check in my knowledge base just to be absolutely certain... Yep, Paris is correct.
\n\nThe user seems to be asking a basic fact question - maybe a student doing homework, or perhaps someone confirming trivia. The tone is neutral, so no urgency or special emotional context here.
\n\nI should answer directly but add a tiny bit of extra context about the Seine River and the nickname \"City of Light\" to make it more helpful than just a one-word answer.
\n\nNo need to overcomplicate this - clear and concise is best. The user probably just wants the factual information without fluff. \n\n...Though I wonder if they're asking for a specific purpose?
But since they didn't indicate any special needs, the standard answer should suffice.\n</think>\nThe capital of France is **Paris**. \n\n**Key facts about Paris:** \n- Located in the north-central part of France, along the Seine River.
\n# purescript-web3-example\n\nAn example of a dApp built with [Purescript Web3](https://github.com/f-o-a-m/purescript-web3).\n\n## Setup\n\nYou will need to install the following dependencies:\n\n1. [Node.js](https://nodejs.org/en/)\n2. [Yarn](https://yarnpkg.com/en/)\n3.
[Ganache](https://www.trufflesuite.com/ganache) (for local development)\n4. [Truffle](https://www.trufflesuite.com/truffle) (for compiling contracts)\n\nOnce you have the above dependencies installed, you can install the project dependencies with:\n\n```bash\nyarn install\n```\n\n## Building\n\nTo build the project,
run:\n\n```bash\nyarn build\n```\n\nThis will compile the contracts to the `build` directory and the Purescript source to `output`.\n\n## Running\n\nTo run the example, start Ganache and then run:\n\n```bash\nyarn start\n```\n\nThis will run the example script in `src/Main.purs`.\n
"""
```
### Generate the model
Five 80GB GPUs are required.
~~~python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import transformers
model_name = "DeepSeek-R1-0528-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")
block = model.model.layers
device_map = {}
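# Shard the routed experts across cuda:1-cuda:4 by expert index;
# shared experts and all other linear layers stay on cuda:0.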
for n, m in block.named_modules():
if isinstance(m, (torch.nn.Linear, transformers.modeling_utils.Conv1D)):
if "experts" in n and ("shared_experts" not in n) and int(n.split('.')[-2]) < 63:
device = "cuda:1"
elif "experts" in n and ("shared_experts" not in n) and int(n.split('.')[-2]) >= 63 and int(
n.split('.')[-2]) < 128:
device = "cuda:2"
elif "experts" in n and ("shared_experts" not in n) and int(n.split('.')[-2]) >= 128 and int(
n.split('.')[-2]) < 192:
device = "cuda:3"
elif "experts" in n and ("shared_experts" not in n) and int(
n.split('.')[-2]) >= 192:
device = "cuda:4"
else:
device = "cuda:0"
n = n[2:]
device_map.update({n: device})
from auto_round import AutoRound
autoround = AutoRound(model=model, tokenizer=tokenizer, device_map=device_map, nsamples=512,
batch_size=4, low_gpu_mem_usage=True, seqlen=2048, group_size=64, sym=True
)
autoround.quantize_and_save(format="auto_gptq", output_dir="tmp_autoround")
~~~
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
- Intel Neural Compressor [link](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)
|
SleepyM/ppo-lunarPPO
|
SleepyM
| 2025-06-17T08:07:29Z | 0 | 0 | null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-17T07:35:40Z |
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -71.53 +/- 93.58
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.99
'gae_lambda': 0.95
'num_minibatches': 4
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'SleepyM/ppo-lunarPPO'
'batch_size': 512
'minibatch_size': 128}
```
|
nicklo0531/uuu_fine_tune_gpt2
|
nicklo0531
| 2025-06-17T08:06:59Z | 2 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:11:41Z |
---
license: apache-2.0
---
|
BootesVoid/cmbn2fdzh01p3ekg0gwuq6vgg_cmc083fbi07b1rdqswu9q4cst
|
BootesVoid
| 2025-06-17T08:06:45Z | 1 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-17T08:06:44Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BELLA
---
# Cmbn2Fdzh01P3Ekg0Gwuq6Vgg_Cmc083Fbi07B1Rdqswu9Q4Cst
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BELLA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BELLA",
"lora_weights": "https://huggingface.co/BootesVoid/cmbn2fdzh01p3ekg0gwuq6vgg_cmc083fbi07b1rdqswu9q4cst/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbn2fdzh01p3ekg0gwuq6vgg_cmc083fbi07b1rdqswu9q4cst', weight_name='lora.safetensors')
image = pipeline('BELLA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbn2fdzh01p3ekg0gwuq6vgg_cmc083fbi07b1rdqswu9q4cst/discussions) to add images that show off what youโve made with this LoRA.
|
bharathsj/bio-medical-llama3-lsfv1
|
bharathsj
| 2025-06-17T08:04:10Z | 0 | 0 | null |
[
"safetensors",
"llama",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T07:57:38Z |
---
license: apache-2.0
---
|
Jasaga7818/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF
|
Jasaga7818
| 2025-06-17T07:52:06Z | 3 | 0 |
transformers
|
[
"transformers",
"gguf",
"captioning",
"llama-cpp",
"gguf-my-repo",
"image-text-to-text",
"base_model:fancyfeast/llama-joycaption-beta-one-hf-llava",
"base_model:quantized:fancyfeast/llama-joycaption-beta-one-hf-llava",
"endpoints_compatible",
"region:us",
"conversational"
] |
image-text-to-text
| 2025-06-17T07:51:43Z |
---
base_model: fancyfeast/llama-joycaption-beta-one-hf-llava
tags:
- captioning
- llama-cpp
- gguf-my-repo
pipeline_tag: image-text-to-text
library_name: transformers
---
# Jasaga7818/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF
This model was converted to GGUF format from [`fancyfeast/llama-joycaption-beta-one-hf-llava`](https://huggingface.co/fancyfeast/llama-joycaption-beta-one-hf-llava) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fancyfeast/llama-joycaption-beta-one-hf-llava) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Jasaga7818/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Jasaga7818/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Jasaga7818/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Jasaga7818/llama-joycaption-beta-one-hf-llava-Q4_K_M-GGUF --hf-file llama-joycaption-beta-one-hf-llava-q4_k_m.gguf -c 2048
```
|
FormlessAI/540f293b-f0ec-4a6b-9446-e7e4c609bdd7
|
FormlessAI
| 2025-06-17T07:48:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:jhflow/mistral7b-lora-multi-turn-v2",
"base_model:finetune:jhflow/mistral7b-lora-multi-turn-v2",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T07:35:42Z |
---
base_model: jhflow/mistral7b-lora-multi-turn-v2
library_name: transformers
model_name: 540f293b-f0ec-4a6b-9446-e7e4c609bdd7
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for 540f293b-f0ec-4a6b-9446-e7e4c609bdd7
This model is a fine-tuned version of [jhflow/mistral7b-lora-multi-turn-v2](https://huggingface.co/jhflow/mistral7b-lora-multi-turn-v2).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="FormlessAI/540f293b-f0ec-4a6b-9446-e7e4c609bdd7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/phoenix-formless/Gradients/runs/eqo0hr8p)
This model was trained with SFT.
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.0+cu128
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
andreachien/uuu_fine_tune_gpt2
|
andreachien
| 2025-06-17T07:47:20Z | 2 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:20:44Z |
---
license: apache-2.0
---
|
talzoomanzoo/LIMO-full-Qwen-2.5-1.5B-Instruct
|
talzoomanzoo
| 2025-06-17T07:45:53Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"generated_from_trainer",
"conversational",
"zho",
"eng",
"fra",
"spa",
"por",
"deu",
"ita",
"rus",
"jpn",
"kor",
"vie",
"tha",
"ara",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-24T00:57:43Z |
---
library_name: transformers
license: apache-2.0
base_model: Qwen/Qwen2.5-1.5B-Instruct
tags:
- llama-factory
- generated_from_trainer
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
model-index:
- name: LIMO-full-Qwen-2.5-1.5B-Instruct
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LIMO-full-Qwen-2.5-1.5B-Instruct
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 15
### Training results
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
ns-0/qwen-2.5-1.5b-instruct-reasoning-sft
|
ns-0
| 2025-06-17T07:40:08Z | 0 | 0 |
transformers
|
[
"transformers",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T07:12:49Z |
---
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
---
Checkpoints for reasoning SFT on Qwen 2.5 1.5B Instruct; see the repository branches for individual checkpoints.
System prompt: "You are a helpful assistant. Think step by step before responding to the user's query. Your thought process should be enclosed between \<think> and \</think> tags. Once your thought process is complete, write a response which should end in the final answer enclosed in \\boxed{}."
Dataset: https://huggingface.co/datasets/Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B (responses without \</think> or \boxed{} were filtered out, and the system prompt above was added; a sketch of the filtering follows).
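A minimal sketch of the filtering described above (the column name `response` is an assumption about the dataset schema; adjust it to the actual field):
```python
# Keep only rows whose response closes its think block and contains \boxed{.
from datasets import load_dataset

ds = load_dataset(
    "Magpie-Align/Magpie-Reasoning-V2-250K-CoT-Deepseek-R1-Llama-70B",
    split="train",
)
ds = ds.filter(
    lambda ex: "</think>" in ex["response"] and "\\boxed{" in ex["response"]
)
```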
|
maidacundo/qwen-3-panda-agi-1602-overfit
|
maidacundo
| 2025-06-17T07:33:17Z | 10 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T07:05:10Z |
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** maidacundo
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
aplux/AOT-GAN
|
aplux
| 2025-06-17T07:30:57Z | 0 | 1 | null |
[
"AIoT",
"QNN",
"image-to-image",
"license:other",
"region:us"
] |
image-to-image
| 2025-06-12T08:06:35Z |
---
license: other
license_name: aplux-model-farm-license
license_link: https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf
pipeline_tag: image-to-image
tags:
- AIoT
- QNN
---
.png&w=640&q=75)
## AOT-GAN: Image Editing
AOT-GAN (Aggregated Object Transformers GAN) is a generative adversarial network designed for challenging image inpainting tasks (e.g., large occlusions, complex structural gaps). It integrates multi-scale context aggregation and object-aware mechanisms via AOT Blocks, combining dilated convolutions for long-range dependency capture and multi-scale fusion to enhance structural coherence (e.g., facial features, architectural textures). The model employs attention-guided adversarial training to dynamically focus on missing regions, outperforming traditional methods (e.g., DeepFill) on Places2 and CelebA datasets in PSNR/SSIM, especially for high-resolution images. Ideal for photo restoration, film editing, and medical image reconstruction, it balances generation quality and computational efficiency.
### Source model
- Input shape: [1x3x512x512], [1x1x512x512]
- Number of parameters: 14.51M
- Model size: 61.29 MB
- Output shape: [1x3x512x512]
The source model can be found [here](https://github.com/researchmm/AOT-GAN-for-Inpainting)
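For orientation, a hedged sketch of the input/output contract above (not from the source repository; the identity generator is a stand-in so the snippet runs):
```python
import torch

# Shapes follow the card: RGB image [1,3,512,512], binary mask [1,1,512,512] with 1 marking holes.
image = torch.rand(1, 3, 512, 512) * 2 - 1            # image normalized to [-1, 1]
mask = (torch.rand(1, 1, 512, 512) > 0.95).float()    # synthetic hole mask
masked_image = image * (1 - mask)                     # zero out the missing region

# Stand-in for the AOT-GAN generator (the real one comes from the source repo linked above);
# an identity placeholder keeps this sketch runnable.
def generator(img, msk):
    return img

pred = generator(masked_image, mask)                  # [1,3,512,512]
result = pred * mask + image * (1 - mask)             # composite: keep known pixels, fill holes
print(result.shape)                                   # torch.Size([1, 3, 512, 512])
```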
## Performance Reference
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## Inference & Model Conversion
Please search model by model name in [Model Farm](https://aiot.aidlux.com/en/models)
## License
- Source Model: [MIT](https://github.com/taki0112/AttnGAN-Tensorflow/blob/master/LICENSE)
- Deployable Model: [APLUX-MODEL-FARM-LICENSE](https://aiot.aidlux.com/api/v1/files/license/model_farm_license_en.pdf)
|
WHWeng/uuu_fine_tune_gpt2
|
WHWeng
| 2025-06-17T07:29:52Z | 1 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:22:37Z |
---
license: apache-2.0
---
|
shinkeonkim/Mistral-Small-24B-Instruct-2501-Q2_K-GGUF
|
shinkeonkim
| 2025-06-17T07:24:55Z | 2 | 0 |
vllm
|
[
"vllm",
"gguf",
"transformers",
"llama-cpp",
"gguf-my-repo",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"base_model:mistralai/Mistral-Small-24B-Instruct-2501",
"base_model:quantized:mistralai/Mistral-Small-24B-Instruct-2501",
"license:apache-2.0",
"region:us",
"conversational"
] | null | 2025-06-17T07:24:14Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: apache-2.0
library_name: vllm
inference: false
base_model: mistralai/Mistral-Small-24B-Instruct-2501
extra_gated_description: If you want to learn more about how we process your personal
data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
tags:
- transformers
- llama-cpp
- gguf-my-repo
---
# shinkeonkim/Mistral-Small-24B-Instruct-2501-Q2_K-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-Small-24B-Instruct-2501`](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo shinkeonkim/Mistral-Small-24B-Instruct-2501-Q2_K-GGUF --hf-file mistral-small-24b-instruct-2501-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo shinkeonkim/Mistral-Small-24B-Instruct-2501-Q2_K-GGUF --hf-file mistral-small-24b-instruct-2501-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo shinkeonkim/Mistral-Small-24B-Instruct-2501-Q2_K-GGUF --hf-file mistral-small-24b-instruct-2501-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo shinkeonkim/Mistral-Small-24B-Instruct-2501-Q2_K-GGUF --hf-file mistral-small-24b-instruct-2501-q2_k.gguf -c 2048
```
|
Dragon168/uuu_fine_tune_gpt2
|
Dragon168
| 2025-06-17T07:23:41Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:24:44Z |
---
license: apache-2.0
---
|
rexqaz/uuu_fine_tune_taipower
|
rexqaz
| 2025-06-17T07:22:53Z | 3 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T07:07:15Z |
---
license: apache-2.0
---
|
Shojib6666/herzlscene
|
Shojib6666
| 2025-06-17T07:16:55Z | 3 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-17T07:16:44Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/herzlscene_002400_00_20250617071245.png
text: herzlscene12
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: herzlscene12
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# herzlscene
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `herzlscene12` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
|
fiorane/phobert_test_vmlu_history
|
fiorane
| 2025-06-17T07:16:53Z | 11 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:vinai/phobert-base",
"base_model:finetune:vinai/phobert-base",
"license:mit",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2025-06-17T07:05:02Z |
---
library_name: transformers
license: mit
base_model: vinai/phobert-base
tags:
- generated_from_trainer
model-index:
- name: phobert_test_vmlu_history
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert_test_vmlu_history
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4085
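Since the rest of this card is a stub, here is a hedged multiple-choice inference sketch (the question and answer options are illustrative, not from the authors):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

model_id = "fiorane/phobert_test_vmlu_history"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMultipleChoice.from_pretrained(model_id)

question = "Ai là vị vua đầu tiên của nhà Lý?"  # illustrative VMLU-style history question
choices = ["Lý Thái Tổ", "Lý Thái Tông", "Lý Nhân Tông", "Lý Anh Tông"]

# Multiple choice: encode (question, choice) pairs, then add a batch dim -> [1, num_choices, seq_len]
enc = tokenizer([question] * len(choices), choices, return_tensors="pt", padding=True, truncation=True)
enc = {k: v.unsqueeze(0) for k, v in enc.items()}
with torch.no_grad():
    logits = model(**enc).logits  # [1, num_choices]
print(choices[logits.argmax(-1).item()])
```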
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 9 | 1.3932 |
| 1.1498 | 2.0 | 18 | 1.4029 |
| 1.1305 | 3.0 | 27 | 1.4085 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
shinkeonkim/Phi-4-mini-instruct-Q2_K-GGUF
|
shinkeonkim
| 2025-06-17T07:13:22Z | 2 | 0 |
transformers
|
[
"transformers",
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"multilingual",
"ar",
"zh",
"cs",
"da",
"nl",
"en",
"fi",
"fr",
"de",
"he",
"hu",
"it",
"ja",
"ko",
"no",
"pl",
"pt",
"ru",
"es",
"sv",
"th",
"tr",
"uk",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:quantized:microsoft/Phi-4-mini-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-06-17T07:13:12Z |
---
language:
- multilingual
- ar
- zh
- cs
- da
- nl
- en
- fi
- fr
- de
- he
- hu
- it
- ja
- ko
- 'no'
- pl
- pt
- ru
- es
- sv
- th
- tr
- uk
library_name: transformers
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/LICENSE
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
base_model: microsoft/Phi-4-mini-instruct
---
# shinkeonkim/Phi-4-mini-instruct-Q2_K-GGUF
This model was converted to GGUF format from [`microsoft/Phi-4-mini-instruct`](https://huggingface.co/microsoft/Phi-4-mini-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-4-mini-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo shinkeonkim/Phi-4-mini-instruct-Q2_K-GGUF --hf-file phi-4-mini-instruct-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo shinkeonkim/Phi-4-mini-instruct-Q2_K-GGUF --hf-file phi-4-mini-instruct-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo shinkeonkim/Phi-4-mini-instruct-Q2_K-GGUF --hf-file phi-4-mini-instruct-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo shinkeonkim/Phi-4-mini-instruct-Q2_K-GGUF --hf-file phi-4-mini-instruct-q2_k.gguf -c 2048
```
|
JheWei/uuu_fine_tune_gpt2
|
JheWei
| 2025-06-17T07:12:52Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:10:25Z |
---
license: apache-2.0
---
|
maidacundo/qwen-3-panda-agi-1602-no-overfit
|
maidacundo
| 2025-06-17T07:06:47Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T07:00:56Z |
---
base_model: unsloth/Qwen3-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** maidacundo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-8B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Vadim121/ruRoberta-large-tsa-news-ru
|
Vadim121
| 2025-06-17T06:58:14Z | 79 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"target-sentiment-analysis",
"sentiment-analysis",
"classification",
"news",
"text-classification",
"ru",
"base_model:ai-forever/ruRoberta-large",
"base_model:finetune:ai-forever/ruRoberta-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-05-29T13:44:49Z |
---
license: apache-2.0
language:
- ru
metrics:
- f1
base_model:
- ai-forever/ruRoberta-large
pipeline_tag: text-classification
library_name: transformers
tags:
- target-sentiment-analysis
- sentiment-analysis
- classification
- news
---
# Model Card for Model ID
## Model Details
This model is a modified version of [ruRoberta-large](https://huggingface.co/ai-forever/ruRoberta-large) for Target Sentiment Analysis (TSA), using training data from the [RuSentNE-2023 collection](https://github.com/dialogue-evaluation/RuSentNE-evaluation).
This model is designed to analyze news texts in Russian.
Given an input sentence and a specified entity (target object) within it, this model determines the sentiment directed toward that entity and classifies it into one of the following categories:
['positive', 'negative', 'neutral'].
### Model Description
- **Model type:** [ruRoberta-large](https://huggingface.co/ai-forever/ruRoberta-large)
- **Language(s) (NLP):** Russian
- **License:** [Apache License 2.0](https://github.com/scofield7419/THOR-ISA/blob/main/LICENSE.txt)
## Uses
### Direct Use
This is a ruRoberta-large model with the addition of a linear layer for classification.
1. Loading model and tokenizer
```python
import torch
from model import TargetSentimentClassifier

model = TargetSentimentClassifier(
    model_name="sberbank-ai/ruRoberta-large",
    use_multi_sample_dropout=True,
    device="cuda"
)
model.load_state_dict(torch.load("pytorch_model.bin", map_location="cuda"))
```
2. Predict sentiment for a named entity in a sentence
```python
text = "ะะถะตะนะผั ยซะะฐะดะดะธยป ะะฐะบะณะธัั ... ัะฟะพัััะผะตะฝ ะพััะฐะฝะพะฒะธัั ะฑะพะน..."
target = "ัะฟะพัััะผะตะฝ"
entity_type = "PROFESSION"
prediction = model.predict(text, target, entity_type)
print(prediction) # Output: 0, 1 or 2
```
### Input Format
The input sentence must include a marked entity using the following format:
```python
<en> ENTITY_TEXT <|ENTITY_TAG|> </en>
```
Example:
```python
ะะถะตะนะผั ยซะะฐะดะดะธยป ะะฐะบะณะธัั ... <en> ัะฟะพัััะผะตะฝ <|PROFESSION|> </en> ะพััะฐะฝะพะฒะธัั ะฑะพะน...
```
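A small helper for producing this marking programmatically (not part of the original repository; replacing only the first occurrence is an assumption):
```python
def mark_entity(text: str, entity: str, entity_type: str) -> str:
    """Wrap the first occurrence of `entity` in `text` with the model's <en> ... </en> markers."""
    marked = f"<en> {entity} <|{entity_type}|> </en>"
    return text.replace(entity, marked, 1)

print(mark_entity("спортсмен остановил бой", "спортсмен", "PROFESSION"))
# -> <en> спортсмен <|PROFESSION|> </en> остановил бой
```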
### Labels
The model predicts one of the following labels:
| Label | Description |
|-------------------|-------|
| 0 | Neutral |
| 1 | Positive |
| 2 | Negative |
## Training Details
### Training Data
Training used the data published for the RuSentNE-2023 competition, available in the following repository:
https://github.com/dialogue-evaluation/RuSentNE-evaluation
To enlarge the training set, data from the Sentiment Analysis in Russian dataset was used:
https://www.kaggle.com/c/sentiment-analysis-in-russian/overview
Additionally, to expand the dataset with entity-level sentiment annotations, an automatic annotation algorithm was used:
https://github.com/RyuminV01/markup-tsa-news/tree/main
This algorithm enabled the generation of additional labeled data based on named entity recognition and sentiment alignment in Russian news texts.
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The direct link to the `test` evaluation data:
https://github.com/dialogue-evaluation/RuSentNE-evaluation/blob/main/final_data.csv
#### Metrics
For the model evaluation, two metrics were used:
1. F1_posneg_macro -- F1-measure over `positive` and `negative` classes;
2. F1_macro -- F1-measure over `positive`, `negative`, **and `neutral`** classes;
### Results
The test evaluation for this model, as shown on the [competition leaderboard](https://codalab.lisn.upsaclay.fr/competitions/9538#results), demonstrates the following performance:
| Metric | Score |
|-------------------|-------|
| F1_posneg_macro | 61.84 |
| F1_macro | 70.38 |
|
scoutJ/uuu_fine_tune_taipower
|
scoutJ
| 2025-06-17T06:57:20Z | 4 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:56:32Z |
---
license: apache-2.0
---
|
koshinsky/uuu_fine_tune_taipower
|
koshinsky
| 2025-06-17T06:56:59Z | 30 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-04-13T05:47:41Z |
---
license: apache-2.0
---
|
JheWei/uuu_fine_tune_taipower
|
JheWei
| 2025-06-17T06:56:36Z | 4 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:10:10Z |
---
license: apache-2.0
---
|
crossroads0313/uuu_fine_tune_taipower
|
crossroads0313
| 2025-06-17T06:56:12Z | 4 | 0 | null |
[
"safetensors",
"gpt2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:11:30Z |
---
license: apache-2.0
---
|
Zack-Z/qwen3_4bi_cotsft_rs0_3_5cut_cot2all_indep_ntt_e2
|
Zack-Z
| 2025-06-17T06:39:42Z | 5 | 0 |
transformers
|
[
"transformers",
"qwen3",
"feature-extraction",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2025-06-17T06:25:23Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Zack-Z
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dgambettaphd/M_llm2_run2_gen7_WXS_doc1000_synt64_lr1e-04_acm_SYNLAST
|
dgambettaphd
| 2025-06-17T06:38:02Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-17T06:37:47Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jim12345/MiniLMv2-L6-H384-distilled-from-BERT-Base-finetuned-qqp
|
jim12345
| 2025-06-17T06:36:42Z | 17 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:jim12345/MiniLMv2-L6-H384-distilled-from-BERT-Base",
"base_model:finetune:jim12345/MiniLMv2-L6-H384-distilled-from-BERT-Base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-17T08:16:04Z |
---
library_name: transformers
license: mit
base_model: jim12345/MiniLMv2-L6-H384-distilled-from-BERT-Base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: MiniLMv2-L6-H384-distilled-from-BERT-Base-finetuned-qqp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniLMv2-L6-H384-distilled-from-BERT-Base-finetuned-qqp
This model is a fine-tuned version of [jim12345/MiniLMv2-L6-H384-distilled-from-BERT-Base](https://huggingface.co/jim12345/MiniLMv2-L6-H384-distilled-from-BERT-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2846
- Accuracy: 0.9012
- F1: 0.8673
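For a quick smoke test, the model can be queried as a QQP-style duplicate-question classifier (a hedged sketch; the label mapping is an assumption and should be checked in `config.json`):
```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="jim12345/MiniLMv2-L6-H384-distilled-from-BERT-Base-finetuned-qqp",
)
# QQP-style question pair: are these duplicates?
result = clf({"text": "How do I learn Python?", "text_pair": "What is the best way to learn Python?"})
print(result)  # e.g. {'label': 'LABEL_1', 'score': ...}; see config.json for label meanings
```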
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|
| 0.2557 | 1.0 | 22741 | 0.2670 | 0.8885 | 0.8498 |
| 0.2382 | 2.0 | 45482 | 0.2613 | 0.8929 | 0.8602 |
| 0.2041 | 3.0 | 68223 | 0.2712 | 0.8976 | 0.8632 |
| 0.1779 | 4.0 | 90964 | 0.2846 | 0.9012 | 0.8673 |
| 0.1704 | 5.0 | 113705 | 0.3077 | 0.9010 | 0.8669 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
raulpilpe/raulpilpe
|
raulpilpe
| 2025-06-17T06:31:04Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-06-17T05:41:42Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
doghome/llama2_uuu_news_qlora
|
doghome
| 2025-06-17T06:24:06Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:24:06Z |
---
license: apache-2.0
---
|
andreachien/llama2_uuu_news_qlora
|
andreachien
| 2025-06-17T06:23:51Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:23:51Z |
---
license: apache-2.0
---
|
Steven122333/tcp2023
|
Steven122333
| 2025-06-17T06:23:17Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:23:17Z |
---
license: apache-2.0
---
|
Dragon168/tcp2023
|
Dragon168
| 2025-06-17T06:20:39Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:20:39Z |
---
license: apache-2.0
---
|
music991758/llama2_uuu_news_qlora
|
music991758
| 2025-06-17T06:20:16Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:20:16Z |
---
license: apache-2.0
---
|
HedyKoala17/tcp2023
|
HedyKoala17
| 2025-06-17T06:16:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:16:50Z |
---
license: apache-2.0
---
|
cragtmp/task3f2-800
|
cragtmp
| 2025-06-17T06:16:05Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:adapter:meta-llama/Llama-3.2-11B-Vision-Instruct",
"region:us"
] | null | 2025-06-17T06:14:43Z |
---
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
CatchKuo/llama2_uuu_news_qlora
|
CatchKuo
| 2025-06-17T06:10:39Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T06:10:39Z |
---
license: apache-2.0
---
|
ASIEK/ppo-LunarLander-v2
|
ASIEK
| 2025-06-17T06:06:40Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-06-17T06:06:17Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 231.30 +/- 59.74
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename follows the usual deep RL course naming convention and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename assumed from the standard naming convention; adjust to the actual file in this repo.
checkpoint = load_from_hub(repo_id="ASIEK/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
wATcH-Kamal-Kaur-Viral-video/Trending.Kamal.Kaur.Bhabhis.Last.Video.Goes.Viral
|
wATcH-Kamal-Kaur-Viral-video
| 2025-06-17T05:44:12Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T05:43:52Z |
---
license: apache-2.0
---
[](https://tinyurl.com/3cf32han)
|
ArtusDev/Delta-Vector_Austral-24B-Winton-EXL3
|
ArtusDev
| 2025-06-17T05:39:21Z | 0 | 0 |
transformers
|
[
"transformers",
"roleplay",
"finetune",
"axolotl",
"adventure",
"creative-writing",
"Mistral",
"24B",
"exl3",
"en",
"base_model:Delta-Vector/Austral-24B-Winton",
"base_model:quantized:Delta-Vector/Austral-24B-Winton",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-16T08:56:35Z |
---
base_model: Delta-Vector/Austral-24B-Winton
base_model_relation: quantized
quantized_by: ArtusDev
license: apache-2.0
language:
- en
library_name: transformers
tags:
- roleplay
- finetune
- axolotl
- adventure
- creative-writing
- Mistral
- 24B
- exl3
---
## EXL3 Quants of Delta-Vector/Austral-24B-Winton
EXL3 quants of [Delta-Vector/Austral-24B-Winton](https://huggingface.co/Delta-Vector/Austral-24B-Winton) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/Delta-Vector_Austral-24B-Winton-EXL3/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/Delta-Vector_Austral-24B-Winton-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/Delta-Vector_Austral-24B-Winton-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/Delta-Vector_Austral-24B-Winton-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/Delta-Vector_Austral-24B-Winton-EXL3/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/Delta-Vector_Austral-24B-Winton-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/Delta-Vector_Austral-24B-Winton-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/Delta-Vector_Austral-24B-Winton-EXL3/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/Delta-Vector_Austral-24B-Winton-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```
huggingface-cli download ArtusDev/Delta-Vector_Austral-24B-Winton-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
</details>
|
ArtusDev/qingy2024_GRMR-V3-L3B-EXL3
|
ArtusDev
| 2025-06-17T05:37:48Z | 1 | 0 |
transformers
|
[
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"exl3",
"en",
"base_model:qingy2024/GRMR-V3-L3B",
"base_model:quantized:qingy2024/GRMR-V3-L3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-05T18:45:30Z |
---
base_model: qingy2024/GRMR-V3-L3B
base_model_relation: quantized
quantized_by: ArtusDev
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- exl3
license: apache-2.0
language:
- en
---
## EXL3 Quants of qingy2024/GRMR-V3-L3B
EXL3 quants of [qingy2024/GRMR-V3-L3B](https://huggingface.co/qingy2024/GRMR-V3-L3B) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [3.0_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/qingy2024_GRMR-V3-L3B-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```
huggingface-cli download ArtusDev/qingy2024_GRMR-V3-L3B-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
</details>
|
ArtusDev/Lachesis-AI_IronLoom-32B-v1-EXL3
|
ArtusDev
| 2025-06-17T05:37:00Z | 14 | 0 | null |
[
"exl3",
"en",
"base_model:Lachesis-AI/IronLoom-32B-v1",
"base_model:quantized:Lachesis-AI/IronLoom-32B-v1",
"license:mit",
"region:us"
] | null | 2025-06-02T10:55:09Z |
---
base_model: Lachesis-AI/IronLoom-32B-v1
base_model_relation: quantized
quantized_by: ArtusDev
license: mit
language:
- en
tags:
- exl3
---
## EXL3 Quants of Lachesis-AI/IronLoom-32B-v1
EXL3 quants of [Lachesis-AI/IronLoom-32B-v1](https://huggingface.co/Lachesis-AI/IronLoom-32B-v1) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [2.5_H6](https://huggingface.co/ArtusDev/Lachesis-AI_IronLoom-32B-v1-EXL3/tree/2.5bpw_H6) | 2.5 | 6 |
| [3.0_H6](https://huggingface.co/ArtusDev/Lachesis-AI_IronLoom-32B-v1-EXL3/tree/3.0bpw_H6) | 3.0 | 6 |
| [3.5_H6](https://huggingface.co/ArtusDev/Lachesis-AI_IronLoom-32B-v1-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
| [4.0_H6](https://huggingface.co/ArtusDev/Lachesis-AI_IronLoom-32B-v1-EXL3/tree/4.0bpw_H6) | 4.0 | 6 |
| [4.5_H6](https://huggingface.co/ArtusDev/Lachesis-AI_IronLoom-32B-v1-EXL3/tree/4.5bpw_H6) | 4.5 | 6 |
| [5.0_H6](https://huggingface.co/ArtusDev/Lachesis-AI_IronLoom-32B-v1-EXL3/tree/5.0bpw_H6) | 5.0 | 6 |
| [6.0_H6](https://huggingface.co/ArtusDev/Lachesis-AI_IronLoom-32B-v1-EXL3/tree/6.0bpw_H6) | 6.0 | 6 |
| [8.0_H6](https://huggingface.co/ArtusDev/Lachesis-AI_IronLoom-32B-v1-EXL3/tree/8.0bpw_H6) | 8.0 | 6 |
| [8.0_H8](https://huggingface.co/ArtusDev/Lachesis-AI_IronLoom-32B-v1-EXL3/tree/8.0bpw_H8) | 8.0 | 8 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```
huggingface-cli download ArtusDev/Lachesis-AI_IronLoom-32B-v1-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
</details>
|
video-sapna-shah-full-video/clips.Video-sapna-shah-viral-video-link-telegram-original
|
video-sapna-shah-full-video
| 2025-06-17T05:36:37Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-17T05:32:45Z |
|
ArtusDev/TareksTesting_Scripturient-V2.1-LLaMa-70B-EXL3
|
ArtusDev
| 2025-06-17T05:35:53Z | 2 | 0 |
transformers
|
[
"transformers",
"mergekit",
"merge",
"exl3",
"base_model:TareksTesting/Scripturient-V2.1-LLaMa-70B",
"base_model:quantized:TareksTesting/Scripturient-V2.1-LLaMa-70B",
"license:llama3.3",
"endpoints_compatible",
"region:us"
] | null | 2025-05-31T15:25:47Z |
---
base_model: TareksTesting/Scripturient-V2.1-LLaMa-70B
base_model_relation: quantized
quantized_by: ArtusDev
library_name: transformers
license: llama3.3
tags:
- mergekit
- merge
- exl3
---
## EXL3 Quants of TareksTesting/Scripturient-V2.1-LLaMa-70B
EXL3 quants of [TareksTesting/Scripturient-V2.1-LLaMa-70B](https://huggingface.co/TareksTesting/Scripturient-V2.1-LLaMa-70B) using <a href="https://github.com/turboderp-org/exllamav3/">exllamav3</a> for quantization.
### Quants
| Quant(Revision) | Bits per Weight | Head Bits |
| -------- | ---------- | --------- |
| [3.5_H6](https://huggingface.co/ArtusDev/TareksTesting_Scripturient-V2.1-LLaMa-70B-EXL3/tree/3.5bpw_H6) | 3.5 | 6 |
### Downloading quants with huggingface-cli
<details>
<summary>Click to view download instructions</summary>
Install huggingface-cli:
```bash
pip install -U "huggingface_hub[cli]"
```
Download quant by targeting the specific quant revision (branch):
```
huggingface-cli download ArtusDev/TareksTesting_Scripturient-V2.1-LLaMa-70B-EXL3 --revision "3.5bpw_H6" --local-dir ./
```
</details>
|
YuSun-AI/ResponseMed
|
YuSun-AI
| 2025-06-17T05:21:25Z | 0 | 0 | null |
[
"biology",
"medical",
"question-answering",
"en",
"dataset:lingshu-medical-mllm/ReasonMed",
"arxiv:2506.09513",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] |
question-answering
| 2025-06-11T05:03:26Z |
---
language:
- en
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- question-answering
- text-generation
pretty_name: ReasonMed
tags:
- biology
- medical
datasets:
- lingshu-medical-mllm/ReasonMed
base_model:
- Qwen/Qwen2.5-7B
pipeline_tag: question-answering
---
## Citation
```
@misc{sun2025reasonmed370kmultiagentgenerated,
title={ReasonMed: A 370K Multi-Agent Generated Dataset for Advancing Medical Reasoning},
author={Yu Sun and Xingyu Qian and Weiwen Xu and Hao Zhang and Chenghao Xiao and Long Li and Yu Rong and Wenbing Huang and Qifeng Bai and Tingyang Xu},
year={2025},
eprint={2506.09513},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.09513},
}
```
|
Nachall/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_downy_monkey
|
Nachall
| 2025-06-17T05:19:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am elusive downy monkey",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-06-08T17:09:40Z |
---
base_model: Gensyn/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_downy_monkey
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am elusive downy monkey
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_downy_monkey
This model is a fine-tuned version of [Gensyn/Qwen2.5-1.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Nachall/Qwen2.5-1.5B-Instruct-Gensyn-Swarm-elusive_downy_monkey", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.2
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
02dancing/Kachita.y.Alex.Marin.Novia.Original.Viral.Video
|
02dancing
| 2025-06-17T05:16:17Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-17T05:16:08Z |
|
video-sapna-shah-full-video/Video-sapna-shah-viral-video-link-telegram-original
|
video-sapna-shah-full-video
| 2025-06-17T05:14:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-17T05:13:32Z |
|
S-seino-T/Qwen3-8B_for_arc_analysis
|
S-seino-T
| 2025-06-17T04:38:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T04:32:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a ๐ค transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
parlange/mlp-mixer-autoscan
|
parlange
| 2025-06-17T04:36:56Z | 29 | 0 |
timm
|
[
"timm",
"pytorch",
"safetensors",
"image-classification",
"dataset:parlange/dark-energy-survey-supernova",
"base_model:timm/mixer_b16_224.goog_in21k_ft_in1k",
"base_model:finetune:timm/mixer_b16_224.goog_in21k_ft_in1k",
"region:us"
] |
image-classification
| 2025-06-11T07:36:31Z |
---
tags:
- image-classification
- timm
- pytorch
- safetensors
library_name: timm
datasets:
- parlange/dark-energy-survey-supernova
base_model:
- timm/mixer_b16_224.goog_in21k_ft_in1k
---
# MLP-Mixer Model (mixer_b16_224)
This repository contains a fine-tuned `MLP-Mixer` model from the `timm` library, intended for binary image classification.
The model weights are available in both standard PyTorch (`.bin`) and SafeTensors (`.safetensors`) formats.
## Model Details
* **Architecture**: `mixer_b16_224`
* **Original Library**: `timm`
* **Fine-tuning Task**: Binary Image Classification
* **Number of Classes**: 2
---
## Training Hyperparameters
The model was trained with the following settings:
| Hyperparameter | Value |
|:--------------------------|:------------------------------------------|
| Optimizer | `AdamW` |
| Learning Rate Schedule | `2e-5 with CosineLRScheduler` |
| Batch Size | `128` |
| Total Epochs | `20` |
| Early Stopping Patience | `7` on validation loss |
| Loss Function | CrossEntropyLoss w/ Label Smoothing (`0.1`) |
---
## Training Results
Here are the key **test** metrics for this model:
* **Test Accuracy**: 0.979
* **Test AUC**: 0.985
* **Test F1 Score**: 0.979
* **Best Epoch**: 20
---
## How to use with `timm`
You can load this model directly from the Hugging Face Hub using `timm.create_model`. The `config.json` in this repo provides all necessary metadata.
```python
import torch
import timm
# Ensure you have timm and huggingface_hub installed:
# pip install timm "huggingface_hub>=0.23.0"
# Load the model directly from the Hub
# The `pretrained=True` flag will download the weights and config automatically.
model = timm.create_model(
'hf-hub:parlange/mlp-mixer-autoscan',
pretrained=True
)
model.eval()
# The model's default_cfg will now be populated with mean/std and input size
print(model.default_cfg)
# Example inference with a dummy input
dummy_input = torch.randn(1, 3, model.default_cfg['input_size'][-2], model.default_cfg['input_size'][-1])
with torch.no_grad():
output = model(dummy_input)
print(f"Output shape: {output.shape}") # Should be torch.Size([1, 2])
print(f"Predictions: {torch.softmax(output, dim=1)}")
```
---
## Original Checkpoint
The original `.pth` checkpoint file used for this model is also available in this repository.
|
Nirmit1/summarizer_trainer_cnn
|
Nirmit1
| 2025-06-17T04:33:42Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:facebook/bart-base",
"base_model:adapter:facebook/bart-base",
"license:apache-2.0",
"region:us"
] | null | 2025-06-17T04:33:40Z |
---
library_name: peft
license: apache-2.0
base_model: facebook/bart-base
tags:
- generated_from_trainer
model-index:
- name: summarizer_trainer_cnn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarizer_trainer_cnn
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
shisa-ai/017-qwen3-8b-v2-dpo405b-clr
|
shisa-ai
| 2025-06-17T04:26:07Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"ja",
"en",
"dataset:shisa-ai/shisa-v2-sharegpt",
"dataset:shisa-ai/shisa-v2-405b-ultrafeedback-armorm",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-06-15T10:41:21Z |
---
license: apache-2.0
datasets:
- shisa-ai/shisa-v2-sharegpt
- shisa-ai/shisa-v2-405b-ultrafeedback-armorm
language:
- ja
- en
base_model:
- Qwen/Qwen3-8B
---
This is a WIP version of Qwen3 8B post-trained on the full Shisa V2 recipe.
This is a *non-reasoning* model and thinking has been disabled in the default `chat_template`.
This will be replaced shortly by a V2.1, but preliminary benchmarks suggest that it is quite strong.
Shaberi (judged by GPT-4.1):
| Model | Average | ELYZA 100 | JA-MT | Rakuda | Tengu |
|--------------------------------------|---------|-----------|-------|--------|--------|
| 017-qwen3-8b-v2-dpo405b-clr-nothink | **7.75** | **7.88** | **8.08** | **8.08** | **6.94** |
| shisa-ai/shisa-v2-llama3.1-8b | 7.14 | 7.54 | 6.83 | 7.85 | 6.34 |
| shisa-ai/shisa-v2-qwen2.5-7b | 7.10 | 7.48 | 7.40 | 7.18 | 6.33 |
And JA MT-Bench (judged by GPT-4.1):
| Model | coding | extraction | humanities | math | reasoning | roleplay | stem | writing | Overall |
|--------------------------------------|--------|------------|------------|------|-----------|----------|------|---------|---------|
| 017-qwen3-8b-v2-dpo405b-clr-nothink | **7.3** | **7.55** | **8.85** | **9.3** | **6.05** | **7.9** | **8.6** | **8.9** | **8.06** |
| shisa-ai/shisa-v2-qwen2.5-7b | 6.7 | 7.15 | 7.55 | 8.5 | 5.4 | **7.9** | 7.5 | 7.7 | 7.3 |
| shisa-ai/shisa-v2-llama3.1-8b | 5.3 | 6.95 | 8.4 | 6.55 | 5.95 | 7.65 | 7.25 | 7.9 | 6.99 |
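A minimal `transformers` loading sketch (prompt and decoding settings are illustrative, not from the card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shisa-ai/017-qwen3-8b-v2-dpo405b-clr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The default chat_template already has thinking disabled, so no extra flags are needed.
messages = [{"role": "user", "content": "日本の首都はどこですか？"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```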
|
ameria-deepam/ppo-Huggy
|
ameria-deepam
| 2025-06-17T04:07:08Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-06-17T04:07:01Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: ameria-deepam/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
TRENDING-mezzo-fun-18/mezzo.fun.viral.video.Link.viral.On.Social.Media
|
TRENDING-mezzo-fun-18
| 2025-06-17T03:53:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-17T03:52:53Z |
|
Sharing22/ilu_c3
|
Sharing22
| 2025-06-17T03:42:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-17T03:40:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
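In the absence of card details, here is a minimal sketch that assumes a standard Llama-style causal LM, as the repo tags suggest (prompt and settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Sharing22/ilu_c3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```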
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sparrow1007/kuruvi-ai
|
sparrow1007
| 2025-06-17T03:40:13Z | 0 | 0 | null |
[
"safetensors",
"gpt2",
"region:us"
] | null | 2025-06-16T09:57:24Z |
"# Kuruvi AI\n\nA GPT-2 based Tamil chatbot trained using Hugging Face."
|
keras/siglip2_so400m_patch14_384
|
keras
| 2025-06-17T03:22:45Z | 10 | 0 |
keras-hub
|
[
"keras-hub",
"arxiv:2303.15343",
"region:us"
] | null | 2025-03-24T21:50:51Z |
---
library_name: keras-hub
---
### Model Overview
SigLIP model pre-trained on WebLi at resolution 384x384 (matching this preset's image size). It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in this [repository](https://github.com/google-research/big_vision).
SigLIP is [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), a multimodal model, with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows further scaling up the batch size, while also performing better at smaller batch sizes.
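For reference, the pairwise sigmoid loss from the paper can be written as follows, where $\mathbf{x}_i$ and $\mathbf{y}_j$ are the normalized image and text embeddings, $t$ is a learnable temperature, $b$ is a learnable bias, and $z_{ij}$ is $1$ for matching pairs and $-1$ otherwise:

$$
\mathcal{L} = -\frac{1}{|\mathcal{B}|} \sum_{i=1}^{|\mathcal{B}|} \sum_{j=1}^{|\mathcal{B}|} \log \frac{1}{1 + e^{\,z_{ij}\left(-t\,\mathbf{x}_i \cdot \mathbf{y}_j - b\right)}}
$$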
A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713).
Weights and Keras model code are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [SigLIP Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/siglip-quickstart-notebook-with-hub)
* [SigLIP API Documentation](https://keras.io/keras_hub/api/models/siglip/)
* [SigLIP Model Card](https://arxiv.org/abs/2303.15343)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-------------|------------|-------------|
| siglip_base_patch16_224 | 203.16M | 200 million parameter, image size 224, pre-trained on WebLi. |
| siglip_base_patch16_256 | 203.20M | 200 million parameter, image size 256, pre-trained on WebLi. |
| siglip_base_patch16_384 | 203.45M | 200 million parameter, image size 384, pre-trained on WebLi. |
| siglip_base_patch16_512 | 203.79M | 200 million parameter, image size 512, pre-trained on WebLi. |
| siglip_base_patch16_256_multilingual | 370.63M | 370 million parameter, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_224 | 375.19M | 375 million parameter, patch size 16, image size 224, pre-trained on WebLi. |
| siglip2_base_patch16_256 | 375.23M | 375 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_base_patch32_256 | 376.86M | 376 million parameter, patch size 32, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_384 | 376.86M | 376 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip_large_patch16_256 | 652.15M | 652 million parameter, image size 256, pre-trained on WebLi. |
| siglip_large_patch16_384 | 652.48M | 652 million parameter, image size 384, pre-trained on WebLi. |
| siglip_so400m_patch14_224 | 877.36M | 877 million parameter, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip_so400m_patch14_384 | 877.96M | 877 million parameter, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_large_patch16_256 | 881.53M | 881 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_large_patch16_384 | 881.86M | 881 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip2_large_patch16_512 | 882.31M | 882 million parameter, patch size 16, image size 512, pre-trained on WebLi. |
| siglip_so400m_patch16_256_i18n | 1.13B | 1.1 billion parameter, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_224 | 1.14B | 1.1 billion parameter, patch size 14, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_256 | 1.14B | 1.1 billion parameter, patch size 16, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_384 | 1.14B | 1.1 billion parameter, patch size 14, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_384 | 1.14B | 1.1 billion parameter, patch size 16, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_512 | 1.14B | 1.1 billion parameter, patch size 16, image size 512, shape-optimized version, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_256 | 1.87B | 1.8 billion parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_384 | 1.87B | 1.8 billion parameter, patch size 16, image size 384, pre-trained on WebLi. |
## Example Usage
```Python
import keras
import numpy as np
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("siglip2_so400m_patch14_384")
tokenizer = SigLIPTokenizer.from_preset("siglip2_so400m_patch14_384",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("siglip2_so400m_patch14_384")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities and keep the raw pairwise outputs
outputs = siglip({
    "images": image,
    "token_ids": tokens,
})
print(outputs)
```
## Example Usage with Hugging Face URI
```Python
import keras
import numpy as np
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("hf://keras/siglip2_so400m_patch14_384")
tokenizer = SigLIPTokenizer.from_preset("hf://keras/siglip2_so400m_patch14_384",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("hf://keras/siglip2_so400m_patch14_384")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities and keep the raw pairwise outputs
outputs = siglip({
    "images": image,
    "token_ids": tokens,
})
print(outputs)
```
|
keras/siglip2_base_patch32_256
|
keras
| 2025-06-17T03:22:34Z | 6 | 0 |
keras-hub
|
[
"keras-hub",
"arxiv:2303.15343",
"region:us"
] | null | 2025-03-24T21:22:02Z |
---
library_name: keras-hub
---
### Model Overview
SigLIP model pre-trained on WebLi at resolution 256x256 (matching this preset's image size). It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in this [repository](https://github.com/google-research/big_vision).
SigLIP is [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), a multimodal model, with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows further scaling up the batch size, while also performing better at smaller batch sizes.
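To make the per-pair independence concrete, here is a toy NumPy sketch of the pairwise sigmoid loss on a fabricated 2x2 similarity matrix (all numbers, including temperature and bias, are illustrative, and the mean over pairs stands in for the paper's batch-size normalization):
```python
import numpy as np

# Fabricated image-text cosine similarities for a batch of 2 pairs.
sims = np.array([[0.9, 0.1],
                 [0.2, 0.8]])
t, b = 10.0, -5.0       # illustrative temperature and bias
z = 2 * np.eye(2) - 1   # +1 on matched pairs, -1 on mismatches
# Per-pair loss: -log sigmoid(z * (t * sim + b)), written via softplus.
loss = np.log1p(np.exp(z * (-t * sims - b))).mean()
print(loss)
```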
A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713).
Weights and Keras model code are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [SigLIP Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/siglip-quickstart-notebook-with-hub)
* [SigLIP API Documentation](https://keras.io/keras_hub/api/models/siglip/)
* [SigLIP Model Card](https://arxiv.org/abs/2303.15343)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-------------|------------|-------------|
| siglip_base_patch16_224 | 203.16M | 200 million parameter, image size 224, pre-trained on WebLi. |
| siglip_base_patch16_256 | 203.20M | 200 million parameter, image size 256, pre-trained on WebLi. |
| siglip_base_patch16_384 | 203.45M | 200 million parameter, image size 384, pre-trained on WebLi. |
| siglip_base_patch16_512 | 203.79M | 200 million parameter, image size 512, pre-trained on WebLi. |
| siglip_base_patch16_256_multilingual | 370.63M | 370 million parameter, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_224 | 375.19M | 375 million parameter, patch size 16, image size 224, pre-trained on WebLi. |
| siglip2_base_patch16_256 | 375.23M | 375 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_base_patch32_256 | 376.86M | 376 million parameter, patch size 32, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_384 | 376.86M | 376 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip_large_patch16_256 | 652.15M | 652 million parameter, image size 256, pre-trained on WebLi. |
| siglip_large_patch16_384 | 652.48M | 652 million parameter, image size 384, pre-trained on WebLi. |
| siglip_so400m_patch14_224 | 877.36M | 877 million parameter, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip_so400m_patch14_384 | 877.96M | 877 million parameter, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_large_patch16_256 | 881.53M | 881 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_large_patch16_384 | 881.86M | 881 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip2_large_patch16_512 | 882.31M | 882 million parameter, patch size 16, image size 512, pre-trained on WebLi. |
| siglip_so400m_patch16_256_i18n | 1.13B | 1.1 billion parameter, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_224 | 1.14B | 1.1 billion parameter, patch size 14, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_256 | 1.14B | 1.1 billion parameter, patch size 16, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_384 | 1.14B | 1.1 billion parameter, patch size 14, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_384 | 1.14B | 1.1 billion parameter, patch size 16, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_512 | 1.14B | 1.1 billion parameter, patch size 16, image size 512, shape-optimized version, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_256 | 1.87B | 1.8 billion parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_384 | 1.87B | 1.8 billion parameter, patch size 16, image size 384, pre-trained on WebLi. |
## Example Usage
```Python
import keras
import numpy as np
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("siglip2_base_patch32_256")
tokenizer = SigLIPTokenizer.from_preset("siglip2_base_patch32_256",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("siglip2_base_patch32_256")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities and keep the raw pairwise outputs
outputs = siglip({
    "images": image,
    "token_ids": tokens,
})
print(outputs)
```
## Example Usage with Hugging Face URI
```Python
import keras
import numpy as np
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("hf://keras/siglip2_base_patch32_256")
tokenizer = SigLIPTokenizer.from_preset("hf://keras/siglip2_base_patch32_256",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("hf://keras/siglip2_base_patch32_256")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities and keep the raw pairwise outputs
outputs = siglip({
    "images": image,
    "token_ids": tokens,
})
print(outputs)
```
|
keras/siglip2_base_patch16_256
|
keras
| 2025-06-17T03:22:33Z | 4 | 0 |
keras-hub
|
[
"keras-hub",
"arxiv:2303.15343",
"region:us"
] | null | 2025-03-24T21:19:18Z |
---
library_name: keras-hub
---
### Model Overview
SigLIP model pre-trained on WebLi at resolution 256x256 (matching this preset's image size). It was introduced in the paper [Sigmoid Loss for Language Image Pre-Training](https://arxiv.org/abs/2303.15343) by Zhai et al. and first released in this [repository](https://github.com/google-research/big_vision).
SigLIP is [CLIP](https://huggingface.co/docs/transformers/model_doc/clip), a multimodal model, with a better loss function. The sigmoid loss operates solely on image-text pairs and does not require a global view of the pairwise similarities for normalization. This allows further scaling up the batch size, while also performing better at smaller batch sizes.
A TLDR of SigLIP by one of the authors can be found [here](https://twitter.com/giffmana/status/1692641733459267713).
Weights and Keras model code are released under the [Apache 2 License](https://github.com/keras-team/keras-hub/blob/master/LICENSE).
## Links
* [SigLIP Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/siglip-quickstart-notebook-with-hub)
* [SigLIP API Documentation](https://keras.io/keras_hub/api/models/siglip/)
* [SigLIP Model Card](https://arxiv.org/abs/2303.15343)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
## Installation
Keras and KerasHub can be installed with:
```
pip install -U -q keras-hub
pip install -U -q keras
```
Jax, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment see the [Keras Getting Started](https://keras.io/getting_started/) page.
## Presets
The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
| Preset name | Parameters | Description |
|-------------|------------|-------------|
| siglip_base_patch16_224 | 203.16M | 200 million parameter, image size 224, pre-trained on WebLi. |
| siglip_base_patch16_256 | 203.20M | 200 million parameter, image size 256, pre-trained on WebLi. |
| siglip_base_patch16_384 | 203.45M | 200 million parameter, image size 384, pre-trained on WebLi. |
| siglip_base_patch16_512 | 203.79M | 200 million parameter, image size 512, pre-trained on WebLi. |
| siglip_base_patch16_256_multilingual | 370.63M | 370 million parameter, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_224 | 375.19M | 375 million parameter, patch size 16, image size 224, pre-trained on WebLi. |
| siglip2_base_patch16_256 | 375.23M | 375 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_base_patch32_256 | 376.86M | 376 million parameter, patch size 32, image size 256, pre-trained on WebLi. |
| siglip2_base_patch16_384 | 376.86M | 376 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip_large_patch16_256 | 652.15M | 652 million parameter, image size 256, pre-trained on WebLi. |
| siglip_large_patch16_384 | 652.48M | 652 million parameter, image size 384, pre-trained on WebLi. |
| siglip_so400m_patch14_224 | 877.36M | 877 million parameter, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip_so400m_patch14_384 | 877.96M | 877 million parameter, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_large_patch16_256 | 881.53M | 881 million parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_large_patch16_384 | 881.86M | 881 million parameter, patch size 16, image size 384, pre-trained on WebLi. |
| siglip2_large_patch16_512 | 882.31M | 882 million parameter, patch size 16, image size 512, pre-trained on WebLi. |
| siglip_so400m_patch16_256_i18n | 1.13B | 1.1 billion parameter, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_224 | 1.14B | 1.1 billion parameter, patch size 14, image size 224, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_256 | 1.14B | 1.1 billion parameter, patch size 16, image size 256, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch14_384 | 1.14B | 1.1 billion parameter, patch size 14, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_384 | 1.14B | 1.1 billion parameter, patch size 16, image size 384, shape-optimized version, pre-trained on WebLi. |
| siglip2_so400m_patch16_512 | 1.14B | 1.1 billion parameter, patch size 16, image size 512, shape-optimized version, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_256 | 1.87B | 1.8 billion parameter, patch size 16, image size 256, pre-trained on WebLi. |
| siglip2_giant_opt_patch16_384 | 1.87B | 1.8 billion parameter, patch size 16, image size 384, pre-trained on WebLi. |
## Example Usage
```Python
import keras
import numpy as np
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("siglip2_base_patch16_256")
tokenizer = SigLIPTokenizer.from_preset("siglip2_base_patch16_256",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("siglip2_base_patch16_256")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities and keep the raw pairwise outputs
outputs = siglip({
    "images": image,
    "token_ids": tokens,
})
print(outputs)
```
## Example Usage with Hugging Face URI
```Python
import keras
import numpy as np
from keras_hub.models import SigLIPBackbone, SigLIPTokenizer
from keras_hub.layers import SigLIPImageConverter
# instantiate the model and preprocessing tools
siglip = SigLIPBackbone.from_preset("hf://keras/siglip2_base_patch16_256")
tokenizer = SigLIPTokenizer.from_preset("hf://keras/siglip2_base_patch16_256",
sequence_length=64)
image_converter = SigLIPImageConverter.from_preset("hf://keras/siglip2_base_patch16_256")
# obtain tokens for some input text
tokens = tokenizer.tokenize(["mountains", "cat on tortoise", "house"])
# preprocess image and text
image = keras.utils.load_img("cat.jpg")
image = image_converter(np.array([image]).astype(float))
# query the model for similarities and keep the raw pairwise outputs
outputs = siglip({
    "images": image,
    "token_ids": tokens,
})
print(outputs)
```
|