modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-08-14 06:28:02) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 502 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-08-14 06:27:19) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755140454
|
rvipitkirubbe
| 2025-08-14T03:26:25Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T03:26:19Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
manhtien0236/blockassist-bc-flightless_unseen_parrot_1755135941
|
manhtien0236
| 2025-08-14T02:01:30Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"flightless unseen parrot",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-14T02:01:15Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- flightless unseen parrot
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ecamli/Qwen3-0.6B-Gensyn-Swarm-vocal_placid_sloth
|
ecamli
| 2025-08-14T01:03:16Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am vocal_placid_sloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-26T15:59:51Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vocal_placid_sloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Alizabethli/Qwen32_SFT_RL_claude
|
Alizabethli
| 2025-08-14T00:49:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T14:34:25Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: Qwen32_SFT_RL_claude
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen32_SFT_RL_claude
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alizabethli/Qwen32_SFT_RL_claude", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
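As a rough orientation, a minimal GRPO run with TRL might look like the sketch below; the dataset and reward function are illustrative placeholders, not the ones used for this model.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative placeholder dataset and reward; the actual training data and
# reward for Qwen32_SFT_RL_claude are not documented in this card.
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 20 characters.
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-32B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen32_SFT_RL_claude"),
    train_dataset=dataset,
)
trainer.train()
```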
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.5.1+cu121
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp-down_pnas_layer_16_6_all_37_0.0001_5120_1
|
winnieyangwannan
| 2025-08-14T00:17:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-14T00:15:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
neko-llm/Qwen3-32B-HLE
|
neko-llm
| 2025-08-13T22:42:23Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"open-r1",
"sft",
"trl",
"conversational",
"dataset:neko-llm/HLE_SFT_general",
"dataset:neko-llm/HLE_SFT_OpenThoughts-114k",
"dataset:neko-llm/HLE_SFT_OpenMathReasoning",
"dataset:neko-llm/HLE_SFT_MixtureOfThoughts",
"dataset:neko-llm/HLE_SFT_Chemistry",
"dataset:neko-llm/HLE_SFT_biology",
"dataset:neko-llm/HLE_SFT_medical",
"dataset:neko-llm/HLE_SFT_humanity",
"base_model:Qwen/Qwen3-32B",
"base_model:finetune:Qwen/Qwen3-32B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T22:28:44Z |
---
base_model: Qwen/Qwen3-32B
datasets:
- neko-llm/HLE_SFT_general
- neko-llm/HLE_SFT_OpenThoughts-114k
- neko-llm/HLE_SFT_OpenMathReasoning
- neko-llm/HLE_SFT_MixtureOfThoughts
- neko-llm/HLE_SFT_MixtureOfThoughts
- neko-llm/HLE_SFT_Chemistry
- neko-llm/HLE_SFT_biology
- neko-llm/HLE_SFT_medical
- neko-llm/HLE_SFT_general
- neko-llm/HLE_SFT_humanity
library_name: transformers
model_name: Qwen3-32B-HLE
tags:
- generated_from_trainer
- open-r1
- sft
- trl
licence: license
---
# Model Card for Qwen3-32B-HLE
This model is a fine-tuned version of [Qwen/Qwen3-32B](https://huggingface.co/Qwen/Qwen3-32B) on the following datasets: [neko-llm/HLE_SFT_general](https://huggingface.co/datasets/neko-llm/HLE_SFT_general), [neko-llm/HLE_SFT_OpenThoughts-114k](https://huggingface.co/datasets/neko-llm/HLE_SFT_OpenThoughts-114k), [neko-llm/HLE_SFT_OpenMathReasoning](https://huggingface.co/datasets/neko-llm/HLE_SFT_OpenMathReasoning), [neko-llm/HLE_SFT_MixtureOfThoughts](https://huggingface.co/datasets/neko-llm/HLE_SFT_MixtureOfThoughts), [neko-llm/HLE_SFT_Chemistry](https://huggingface.co/datasets/neko-llm/HLE_SFT_Chemistry), [neko-llm/HLE_SFT_biology](https://huggingface.co/datasets/neko-llm/HLE_SFT_biology), [neko-llm/HLE_SFT_medical](https://huggingface.co/datasets/neko-llm/HLE_SFT_medical), and [neko-llm/HLE_SFT_humanity](https://huggingface.co/datasets/neko-llm/HLE_SFT_humanity).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="neko-llm/Qwen3-32B-HLE", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/neko-llm/huggingface/runs/7bda5pwn)
This model was trained with SFT.
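A minimal SFT sketch with TRL, using one of the datasets listed above as a stand-in for the full mixture, might look like the following (illustrative only, not the exact training configuration):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Sketch only: one of the datasets from this card; the real run mixed several.
dataset = load_dataset("neko-llm/HLE_SFT_general", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen3-32B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Qwen3-32B-HLE"),
)
trainer.train()
```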
### Framework versions
- TRL: 0.19.0
- Transformers: 4.54.1
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
ypszn/blockassist-bc-yapping_pawing_worm_1755123582
|
ypszn
| 2025-08-13T22:20:42Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T22:20:33Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/Huihui-Qwen3-4B-Instruct-2507-abliterated-q3-mlx
|
nightmedia
| 2025-08-13T22:07:26Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"abliterated",
"uncensored",
"text-generation",
"conversational",
"base_model:huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated",
"base_model:quantized:huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated",
"license:apache-2.0",
"3-bit",
"region:us"
] |
text-generation
| 2025-08-13T22:00:14Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-4B-Instruct-2507/blob/main/LICENSE
base_model: huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated
pipeline_tag: text-generation
library_name: mlx
tags:
- abliterated
- uncensored
- mlx
---
# Huihui-Qwen3-4B-Instruct-2507-abliterated-q3-mlx
This model [Huihui-Qwen3-4B-Instruct-2507-abliterated-q3-mlx](https://huggingface.co/nightmedia/Huihui-Qwen3-4B-Instruct-2507-abliterated-q3-mlx) was
converted to MLX format from [huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-4B-Instruct-2507-abliterated)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Huihui-Qwen3-4B-Instruct-2507-abliterated-q3-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
allenai/olmOCR-7B-0825-FP8
|
allenai
| 2025-08-13T21:22:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"en",
"dataset:allenai/olmOCR-mix-0225",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-VL-7B-Instruct",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
image-to-text
| 2025-08-13T20:55:44Z |
---
language:
- en
license: apache-2.0
datasets:
- allenai/olmOCR-mix-0225
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
library_name: transformers
---
<img alt="olmOCR Logo" src="https://huggingface.co/datasets/allenai/blog-images/resolve/main/olmocr/olmocr.png" width="242px" style="margin-left:'auto' margin-right:'auto' display:'block'">
# olmOCR-7B-0825-FP8
An FP8-quantized version of [olmOCR-7B-0825](https://huggingface.co/allenai/olmOCR-7B-0825), produced with llmcompressor.
This is a release of the olmOCR model, fine-tuned from Qwen2.5-VL-7B-Instruct on the
[olmOCR-mix-0225](https://huggingface.co/datasets/allenai/olmOCR-mix-0225) dataset.
Quick links:
- 📃 [Paper](https://olmocr.allenai.org/papers/olmocr.pdf)
- 🤗 [Dataset](https://huggingface.co/datasets/allenai/olmOCR-mix-0225)
- 🛠️ [Code](https://github.com/allenai/olmocr)
- 🎮 [Demo](https://olmocr.allenai.org/)
The best way to use this model is via the [olmOCR toolkit](https://github.com/allenai/olmocr).
The toolkit comes with an efficient inference setup via sglang that can handle millions of documents
at scale.
## Usage
This model expects as input a single document image, rendered such that the longest dimension is 1288 pixels.
The prompt must then contain the additional metadata from the document, and the easiest way to generate this
is to use the methods provided by the [olmOCR toolkit](https://github.com/allenai/olmocr).
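For a rough idea of direct use outside the toolkit, a minimal, untested transformers sketch is shown below; the image path and prompt text are placeholders (the toolkit normally renders the page and builds the metadata prompt), and loading the FP8 weights additionally assumes the compressed-tensors package is installed.
```python
# Minimal sketch, not the recommended path (the olmOCR toolkit is preferred).
# Assumptions: "page.png" and the prompt string below are placeholders.
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "allenai/olmOCR-7B-0825-FP8"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Resize so the longest dimension is 1288 pixels, as described above.
image = Image.open("page.png")
scale = 1288 / max(image.size)
image = image.resize((round(image.width * scale), round(image.height * scale)))

prompt = "Extract the text from this page."  # placeholder; the toolkit builds the real metadata prompt
messages = [{"role": "user", "content": [{"type": "image"}, {"type": "text", "text": prompt}]}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=512)
new_tokens = generated[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```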
## License and use
olmOCR is licensed under the Apache 2.0 license.
olmOCR is intended for research and educational use.
For more information, please see our [Responsible Use Guidelines](https://allenai.org/responsible-use).
|
karalar/blockassist-bc-pouncing_padded_turtle_1755118335
|
karalar
| 2025-08-13T20:52:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"pouncing padded turtle",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T20:52:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- pouncing padded turtle
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Gliscor/email-summarizer-bart-large-cnn
|
Gliscor
| 2025-08-13T20:51:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:facebook/bart-large-cnn",
"lora",
"transformers",
"arxiv:1910.09700",
"base_model:facebook/bart-large-cnn",
"region:us"
] | null | 2025-08-13T20:35:30Z |
---
base_model: facebook/bart-large-cnn
library_name: peft
tags:
- base_model:adapter:facebook/bart-large-cnn
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
andr0m4da/blockassist-bc-grazing_hunting_boar_1755114010
|
andr0m4da
| 2025-08-13T19:41:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"grazing hunting boar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T19:41:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- grazing hunting boar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
JoelMah/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-domestic_aquatic_mink
|
JoelMah
| 2025-08-13T19:35:40Z | 97 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am domestic_aquatic_mink",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T21:30:13Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am domestic_aquatic_mink
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
TrySalient/tts-collections-test-finetune-tokenizer
|
TrySalient
| 2025-08-13T19:17:02Z | 0 | 0 |
transformers
|
[
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T19:16:55Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AAAAnsah/Qwen2.5-0.5B-Instruct_ES_theta_0.3
|
AAAAnsah
| 2025-08-13T19:16:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T19:16:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755110799
|
elmenbillion
| 2025-08-13T19:14:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T19:14:28Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tamewild/4b_v53_merged_e5
|
tamewild
| 2025-08-13T18:38:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T18:35:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF
|
mradermacher
| 2025-08-13T18:35:59Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:netcat420/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE",
"base_model:quantized:netcat420/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-13T15:05:08Z |
---
base_model: netcat420/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/netcat420/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
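As a rough programmatic alternative, a single quant file can be loaded with llama-cpp-python; this is a sketch under that assumption (llama-cpp-python is not part of this repository), with the Q4_K_M file name taken from the Provided Quants table below.
```python
# Minimal sketch assuming `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (file name from the table below).
gguf_path = hf_hub_download(
    repo_id="mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF",
    filename="DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```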
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ1_S.gguf) | i1-IQ1_S | 2.2 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ1_M.gguf) | i1-IQ1_M | 2.4 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ2_S.gguf) | i1-IQ2_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ2_M.gguf) | i1-IQ2_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.2 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q2_K.gguf) | i1-Q2_K | 3.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.9 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ3_S.gguf) | i1-IQ3_S | 3.9 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ3_M.gguf) | i1-IQ3_M | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q4_0.gguf) | i1-Q4_0 | 4.9 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.9 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q4_1.gguf) | i1-Q4_1 | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE-i1-GGUF/resolve/main/DeepSeek-R1-0528-Qwen3-8B-SLERPSOURCE.i1-Q6_K.gguf) | i1-Q6_K | 6.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755107604
|
rvipitkirubbe
| 2025-08-13T18:19:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T18:19:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
amareshparida/llama3-finetuned-price-predictor
|
amareshparida
| 2025-08-13T18:08:17Z | 0 | 0 | null |
[
"safetensors",
"gguf",
"llama",
"trl",
"sft",
"license:apache-2.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-08-12T12:43:35Z |
---
license: apache-2.0
tags:
- trl
- sft
---
|
HectorHe/Qwen1.5-MOE-sft-math7k
|
HectorHe
| 2025-08-13T18:05:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T17:55:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chhatramani/WhisperV3_Nepali_v0.5
|
chhatramani
| 2025-08-13T18:02:39Z | 0 | 0 | null |
[
"safetensors",
"unsloth",
"automatic-speech-recognition",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-large-v3",
"base_model:finetune:openai/whisper-large-v3",
"license:apache-2.0",
"region:us"
] |
automatic-speech-recognition
| 2025-08-13T17:03:17Z |
---
license: apache-2.0
tags:
- unsloth
datasets:
- mozilla-foundation/common_voice_17_0
base_model:
- openai/whisper-large-v3
pipeline_tag: automatic-speech-recognition
---
# WhisperV3 Nepali v0.5
A Nepali automatic speech recognition (ASR) model fine‑tuned from Whisper Large V3 with LoRA. Trained on Nepali speech and transcriptions to improve accuracy on Nepali audio compared to the base model.
---
## Model details
- **Base model:** Whisper Large V3 (loaded via Unsloth FastModel)
- **Adapter method:** LoRA on attention projections
- **Target modules:** q_proj, v_proj
- **Rank (r):** 64
- **Alpha:** 64
- **Dropout:** 0
- **Gradient checkpointing:** "unsloth"
- **Task:** Transcribe
- **Language configuration:** Nepali (generation_config.language set to <|ne|>; suppress_tokens cleared; no forced decoder ids)
- **Precision:** fp16 on GPUs without bf16; bf16 where supported
- **Seed:** 3407
> This model was trained and saved as LoRA adapters, with optional merged 16‑bit/4‑bit export paths available via Unsloth utilities.
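For reference, the adapter setup above corresponds roughly to the following sketch written against the generic PEFT API; the notebook itself used Unsloth's FastModel wrappers, so treat these call names as illustrative rather than a copy of the training code.
```python
# Hedged sketch of the LoRA configuration described above (generic PEFT API, not Unsloth).
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")

lora_config = LoraConfig(
    r=64,                                  # rank, as listed above
    lora_alpha=64,
    lora_dropout=0.0,
    target_modules=["q_proj", "v_proj"],   # attention projections only
    bias="none",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()         # only the adapter weights are trainable
```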
---
## Intended uses and limitations
- **Intended use:** Transcribing Nepali speech (general domain, conversational and read speech).
- **Out‑of‑scope:** Non‑Nepali languages, heavy code‑switching, extreme noise, domain‑specific jargon not present in training data.
- **Known limitations:** Accuracy may degrade on noisy audio, long‑form audio without segmentation, or accents/styles unseen during training.
---
## Training data
- **Primary dataset:** Common Voice 17.0 Nepali (language code "ne‑NP")
- **Splits:** train + validation used for training; test used for evaluation
- **Audio:** resampled to 16 kHz for Whisper
Data was prepared with a processing function that extracts Whisper input features from the audio and tokenizes the target transcripts, using the “sentence” column as the text field for Common Voice.
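A minimal sketch of that preparation step, assuming the standard WhisperProcessor workflow (function and variable names are illustrative, not copied from the notebook):
```python
# Hedged sketch: load Common Voice 17.0 Nepali, resample to 16 kHz, and build
# (input_features, labels) pairs for Whisper fine-tuning.
from datasets import load_dataset, Audio, concatenate_datasets
from transformers import WhisperProcessor

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-large-v3", language="Nepali", task="transcribe"
)

# Common Voice requires accepting the dataset terms on the Hub first.
cv = load_dataset("mozilla-foundation/common_voice_17_0", "ne-NP")
train = concatenate_datasets([cv["train"], cv["validation"]])
train = train.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(example):
    audio = example["audio"]
    example["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    example["labels"] = processor.tokenizer(example["sentence"]).input_ids
    return example

train = train.map(prepare, remove_columns=train.column_names)
```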
---
## Training configuration
- **Loader and framework:** Hugging Face Datasets + Transformers with Unsloth acceleration
- **Batching:** per_device_train_batch_size = 2, gradient_accumulation_steps = 4
- **Optimization:** AdamW 8‑bit, learning_rate = 1e‑4, weight_decay = 0.01, cosine LR schedule
- **Training length:** num_train_epochs = 3 with max_steps = 200 for a quick run
- **Evaluation:** eval_strategy = "steps", eval_steps = 5, label_names = ["labels"]
- **Logging:** logging_steps = 1
- **Other:** remove_unused_columns = False (for PEFT forward signatures)
Training used a Google Colab T4 environment (around 14.7 GB GPU memory), with peak reserved memory during training around 6.2 GB in the referenced session.
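Expressed as standard `Seq2SeqTrainingArguments`, the settings above look roughly like the sketch below; the notebook drove training through Unsloth, and the exact 8-bit optimizer flag name is an assumption.
```python
# Hedged sketch of the training configuration listed above.
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="whisper-nepali-lora",   # placeholder
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,      # effective batch size of 8
    learning_rate=1e-4,
    weight_decay=0.01,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",                 # AdamW 8-bit (assumed flag name)
    num_train_epochs=3,
    max_steps=200,                      # short run, as noted above
    eval_strategy="steps",
    eval_steps=5,
    logging_steps=1,
    label_names=["labels"],
    remove_unused_columns=False,        # required for PEFT forward signatures
    fp16=True,                          # or bf16=True where supported
    seed=3407,
)
```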
---
## How to use
### Quick inference
```python
from transformers import pipeline
import torch
asr = pipeline(
"automatic-speech-recognition",
model="chhatramani/WhisperV3_Nepali_v0.5", # replace with your model id if different
return_language=True,
torch_dtype=torch.float16,
)
result = asr("path/to/audio.wav") # 16 kHz mono recommended
print(result["text"])
```
### Processor-level usage
```python
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
import torch, soundfile as sf
model_id = "chhatramani/WhisperV3_Nepali_v0.5"
model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch.float16).eval().to("cuda")
processor = AutoProcessor.from_pretrained(model_id)
audio, sr = sf.read("path/to/audio.wav")
inputs = processor(audio, sampling_rate=sr, return_tensors="pt").to("cuda", torch.float16)
pred_ids = model.generate(**inputs)
text = processor.batch_decode(pred_ids, skip_special_tokens=True)[0]
print(text)
```
### Evaluation
Below is a minimal recipe to compute WER/CER on a Nepali test set (e.g., Common Voice 17.0 “test”). Adjust paths and batching for your setup.
```python
from datasets import load_dataset, Audio
from transformers import pipeline
import evaluate
wer = evaluate.load("wer")
cer = evaluate.load("cer")
asr = pipeline(
"automatic-speech-recognition",
model="chhatramani/WhisperV3_Nepali_v0.5",
return_language=True
)
test = load_dataset("mozilla-foundation/common_voice_17_0", "ne-NP", split="test")
test = test.cast_column("audio", Audio(sampling_rate=16000))
refs, hyps = [], []
for ex in test:
ref = ex.get("sentence", "").strip()
if not ref:
continue
out = asr(ex["audio"]["array"])
hyp = out["text"].strip()
refs.append(ref)
hyps.append(hyp)
print("WER:", wer.compute(references=refs, predictions=hyps))
print("CER:", cer.compute(references=refs, predictions=hyps))
```
- Inference and evaluation pipeline patterns mirror the training notebook, including resampling to 16 kHz and using “sentence” as the text field.
> If you have your own Nepali test set, ensure it’s sampled at 16 kHz and transcriptions are normalized consistently with training data.
## Reproducibility
- **Environment:** Transformers + Datasets + Unsloth; GPU T4 session illustrated in the notebook
- **Determinism:** Seed fixed at 3407 for trainer and LoRA setup
- **Saving:** LoRA adapters saved via `save_pretrained` / `push_to_hub`; optional merged exports to 16‑bit or 4‑bit are supported in Unsloth APIs
---
## Acknowledgements
- **Base model:** Whisper Large V3
- **Training utilities:** Unsloth FastModel and PEFT LoRA support
- **Dataset:** mozilla-foundation/common_voice_17_0 (Nepali)
> The included training notebook steps (installation, data prep, training loop, saving, and example inference) informed this model card’s details.
|
mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF
|
mradermacher
| 2025-08-13T18:01:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:nicoboss/Baichuan-M2-32B-abliterated",
"base_model:quantized:nicoboss/Baichuan-M2-32B-abliterated",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-13T10:18:50Z |
---
base_model: nicoboss/Baichuan-M2-32B-abliterated
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/nicoboss/Baichuan-M2-32B-abliterated
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Baichuan-M2-32B-abliterated-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/Baichuan-M2-32B-abliterated-i1-GGUF/resolve/main/Baichuan-M2-32B-abliterated.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
alexxbobr/ORPO5000stepsclear
|
alexxbobr
| 2025-08-13T17:52:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"trl",
"orpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T17:51:20Z |
---
library_name: transformers
tags:
- trl
- orpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chainway9/blockassist-bc-untamed_quick_eel_1755105439
|
chainway9
| 2025-08-13T17:43:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T17:43:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tanjumajerin/fine-tuned-llama-3e
|
tanjumajerin
| 2025-08-13T17:28:12Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:adapter:meta-llama/Meta-Llama-3-8B",
"license:llama3",
"region:us"
] | null | 2025-08-13T14:10:17Z |
---
license: llama3
base_model: meta-llama/Meta-Llama-3-8B
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
library_name: peft
model-index:
- name: fine-tuned-llama-3e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-llama-3e
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B](https://huggingface.co/meta-llama/Meta-Llama-3-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0623
- Accuracy: 0.6010
- F1: 0.5959
- Precision: 0.6091
- Recall: 0.6010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 2
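For orientation, these settings map roughly onto 🤗 `TrainingArguments` as sketched below; `output_dir` and anything not listed above are placeholders rather than values taken from the actual run.
```python
# Hedged sketch of the hyperparameters listed above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="fine-tuned-llama-3e",   # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # 8 x 4 = total train batch size of 32
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=2,
)
```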
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.5496 | 0.9999 | 5167 | 1.1890 | 0.5875 | 0.5794 | 0.5982 | 0.5875 |
| 0.4241 | 1.9997 | 10334 | 1.0623 | 0.6010 | 0.5959 | 0.6091 | 0.6010 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
wheattoast11/arrl-qwen3-4b-consilience
|
wheattoast11
| 2025-08-13T17:12:12Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"region:us"
] | null | 2025-08-13T17:11:20Z |
# ARRL Qwen3-4B Consilience (Merged Full Weights)
- Base: sequelbox/Qwen3-4B-Thinking-2507-DAG-Reasoning
- Weights: LoRA merged into base (safetensors)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "wheattoast11/arrl-qwen3-4b-consilience"
tok = AutoTokenizer.from_pretrained(repo)
# Full merged weights, so no separate LoRA adapter loading is required.
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")
```
|
HelloWorld47474747/ScanMyMole
|
HelloWorld47474747
| 2025-08-13T16:55:15Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-13T16:55:15Z |
---
license: apache-2.0
---
|
imge/llama_v1
|
imge
| 2025-08-13T16:47:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-10T13:20:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ecamli/blockassist-bc-hulking_soft_hippo_1755103417
|
ecamli
| 2025-08-13T16:44:28Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking soft hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T16:43:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking soft hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Devique/Calmiq-lora6
|
Devique
| 2025-08-13T16:37:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3n",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-13T16:29:25Z |
---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Devique
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
This gemma3n model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Yousefmd/my-surya-foundation-model
|
Yousefmd
| 2025-08-13T16:33:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"surya-multimodal-foundation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T18:14:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hssnjfry/blockassist-bc-climbing_pouncing_dragonfly_1755102398
|
hssnjfry
| 2025-08-13T16:28:52Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"climbing pouncing dragonfly",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T16:27:39Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- climbing pouncing dragonfly
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sikaro/whisper_lora_model_meeting_12000
|
sikaro
| 2025-08-13T16:13:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"whisper",
"trl",
"en",
"base_model:sikaro/unsloth_whisper_16bit_model_kor_8000_meeting",
"base_model:finetune:sikaro/unsloth_whisper_16bit_model_kor_8000_meeting",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T16:13:14Z |
---
base_model: sikaro/unsloth_whisper_16bit_model_kor_8000_meeting
tags:
- text-generation-inference
- transformers
- unsloth
- whisper
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** sikaro
- **License:** apache-2.0
- **Finetuned from model :** sikaro/unsloth_whisper_16bit_model_kor_8000_meeting
This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CSshihao/Triad_ov_1shot
|
CSshihao
| 2025-08-13T16:07:14Z | 0 | 0 | null |
[
"safetensors",
"llava",
"license:apache-2.0",
"region:us"
] | null | 2025-08-13T14:03:02Z |
---
license: apache-2.0
---
|
kamran8900/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lively_slow_parrot
|
kamran8900
| 2025-08-13T16:02:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am lively_slow_parrot",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T16:02:31Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am lively_slow_parrot
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
genies-llm/text2sql-grpo-d5-e1
|
genies-llm
| 2025-08-13T15:55:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:Genies/text2sql-grpo-d5",
"arxiv:2402.03300",
"base_model:Genies/text2sql_sft_v2",
"base_model:finetune:Genies/text2sql_sft_v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T23:45:10Z |
---
base_model: Genies/text2sql_sft_v2
datasets: Genies/text2sql-grpo-d5
library_name: transformers
model_name: text2sql-grpo-d5-e1
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for text2sql-grpo-d5-e1
This model is a fine-tuned version of [Genies/text2sql_sft_v2](https://huggingface.co/Genies/text2sql_sft_v2) on the [Genies/text2sql-grpo-d5](https://huggingface.co/datasets/Genies/text2sql-grpo-d5) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="genies-llm/text2sql-grpo-d5-e1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/genies-rnd/text2sql-rl/runs/xyrjo69y)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
SeacowX/qwen3-30b-a3b-melacious-code-E2
|
SeacowX
| 2025-08-13T15:52:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3_moe",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T15:12:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kaelan/Qwen2.5_3B_RL
|
Kaelan
| 2025-08-13T15:47:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T15:47:26Z |
---
base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Kaelan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rvipitkirubbe/blockassist-bc-mottled_foraging_ape_1755098266
|
rvipitkirubbe
| 2025-08-13T15:45:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mottled foraging ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T15:45:52Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mottled foraging ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
janhq/Jan-v1-4B-GGUF
|
janhq
| 2025-08-13T15:38:40Z | 8 | 69 | null |
[
"gguf",
"text-generation",
"en",
"base_model:janhq/Jan-v1-4B",
"base_model:quantized:janhq/Jan-v1-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-11T06:21:13Z |
---
license: apache-2.0
language:
- en
base_model:
- janhq/Jan-v1-4B
pipeline_tag: text-generation
---
# Jan-v1: Advanced Agentic Language Model
[](https://github.com/menloresearch/deep-research)
[](https://opensource.org/licenses/Apache-2.0)
[](https://jan.ai/)
<!-- Optional: If you have a GIF for Jan-v1, include it here like Lucy's. -->
<!--  -->
## Overview
**Jan-v1** is the first release in the **Jan Family**, designed for agentic reasoning and problem-solving within the [Jan App](https://jan.ai/). Based on our [**Lucy**](https://huggingface.co/Menlo/Lucy) model, Jan-v1 achieves improved performance through model scaling.
Jan-v1 uses the [Qwen3-4B-thinking](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507) model to provide enhanced reasoning capabilities and tool utilization. This architecture delivers better performance on complex agentic tasks.
## Performance
### Question Answering (SimpleQA)
For question-answering, Jan-v1 shows a significant performance gain from model scaling, achieving 91.1% accuracy.

*The 91.1% SimpleQA accuracy represents a significant milestone in factual question answering for models of this scale, demonstrating the effectiveness of our scaling and fine-tuning approach.*
### Chat Benchmarks
These benchmarks evaluate the model's conversational and instructional capabilities.

## Quick Start
### Integration with Jan App
Jan-v1 is optimized for direct integration with the [Jan App](https://jan.ai/). Simply select the model from the Jan App interface for immediate access to its full capabilities.

### Local Deployment
**Using vLLM:**
```bash
vllm serve janhq/Jan-v1-4B \
--host 0.0.0.0 \
--port 1234 \
--enable-auto-tool-choice \
--tool-call-parser hermes
```
**Using llama.cpp:**
```bash
llama-server --model jan-v1.gguf \
--host 0.0.0.0 \
--port 1234 \
--jinja \
--no-context-shift
```
### Recommended Parameters
```yaml
temperature: 0.6
top_p: 0.95
top_k: 20
min_p: 0.0
max_tokens: 2048
```
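Both serving commands above expose an OpenAI-compatible endpoint on port 1234, so a quick way to exercise these settings is to post a chat request directly. Below is a minimal sketch, assuming a local server started with one of the commands above; the prompt is a placeholder, and `top_k`/`min_p` are non-standard extensions that vLLM and llama.cpp generally accept.
```python
import requests

# Minimal sketch: query the locally served Jan-v1 model with the
# recommended sampling parameters. Assumes a vLLM or llama.cpp server
# from the commands above is listening on localhost:1234.
payload = {
    "model": "janhq/Jan-v1-4B",
    "messages": [{"role": "user", "content": "Summarize what an agentic model is."}],
    "temperature": 0.6,
    "top_p": 0.95,
    "top_k": 20,   # non-standard extension, accepted by vLLM and llama.cpp
    "min_p": 0.0,  # non-standard extension, accepted by vLLM and llama.cpp
    "max_tokens": 2048,
}
response = requests.post("http://localhost:1234/v1/chat/completions", json=payload, timeout=120)
print(response.json()["choices"][0]["message"]["content"])
```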
## 🤝 Community & Support
- **Discussions**: [HuggingFace Community](https://huggingface.co/janhq/Jan-v1-4B/discussions) <!-- Update with your HF model ID -->
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)
## (*) Note
By default, we include a system prompt in the chat template to ensure the model achieves the same performance as in the benchmark results. You can also use the vanilla chat template without a system prompt, available in [chat_template_raw.jinja](https://huggingface.co/janhq/Jan-v1-4B/blob/main/chat_template_raw.jinja).
## 📄 Citation
```bibtex
Updated Soon
```
---
|
ACECA/lowMvMax_63
|
ACECA
| 2025-08-13T15:23:58Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T15:07:13Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
TAUR-dev/M-skills_in_rl-1_2-cd3arg_1e5_sft-rl_cd3arg_only-rl
|
TAUR-dev
| 2025-08-13T15:23:22Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"en",
"license:mit",
"region:us"
] | null | 2025-08-13T11:24:54Z |
---
language: en
license: mit
---
# M-skills_in_rl-1_2-cd3arg_1e5_sft-rl_cd3arg_only-rl
## Model Details
- **Training Method**: VeRL Reinforcement Learning (RL)
- **Stage Name**: rl
- **Experiment**: skills_in_rl-1_2-cd3arg_1e5_sft-rl_cd3arg_only
- **RL Framework**: VeRL (Versatile Reinforcement Learning)
## Training Configuration
## Experiment Tracking
🔗 **View complete experiment details**: [Experiment Tracker Dataset](https://huggingface.co/datasets/TAUR-dev/D-ExpTracker__skills_in_rl-1_2-cd3arg_1e5_sft-rl_cd3arg_only__v1)
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TAUR-dev/M-skills_in_rl-1_2-cd3arg_1e5_sft-rl_cd3arg_only-rl")
model = AutoModelForCausalLM.from_pretrained("TAUR-dev/M-skills_in_rl-1_2-cd3arg_1e5_sft-rl_cd3arg_only-rl")
```
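For a quick sanity check, the snippet below continues from the loading code above and runs a single greedy generation; the prompt and decoding settings are illustrative and not taken from the experiment configuration.
```python
# Continues from the loading snippet above; prompt and settings are illustrative.
prompt = "What is 17 * 23? Show your reasoning."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Print only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```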
|
ACECA/lowMvMax_62
|
ACECA
| 2025-08-13T15:18:21Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-12T15:07:11Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
indoempatnol/blockassist-bc-fishy_wary_swan_1755096305
|
indoempatnol
| 2025-08-13T15:14:01Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"fishy wary swan",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T15:13:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- fishy wary swan
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
bimabk/4b594401-8f48-47ce-8692-08f30da11ec2
|
bimabk
| 2025-08-13T15:11:55Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"region:us"
] | null | 2025-08-13T15:11:45Z |
---
base_model: Qwen/Qwen2.5-7B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
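Since the card does not yet document usage, here is a minimal loading sketch, assuming this repository contains a standard PEFT (LoRA) adapter for `Qwen/Qwen2.5-7B` as the metadata indicates; adjust device and dtype settings to your hardware.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Minimal sketch: load the adapter on top of its Qwen2.5-7B base model.
# Assumes this repo holds a standard PEFT adapter, as the metadata suggests.
model = AutoPeftModelForCausalLM.from_pretrained("bimabk/4b594401-8f48-47ce-8692-08f30da11ec2")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```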
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
motza0025/blockassist-bc-gliding_woolly_alpaca_1755096411
|
motza0025
| 2025-08-13T15:10:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gliding woolly alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T15:10:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gliding woolly alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BroadBit/Simple-VL-8B-Q8_0-GGUF
|
BroadBit
| 2025-08-13T14:57:06Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:BroadBit/Simple-VL-8B",
"base_model:quantized:BroadBit/Simple-VL-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-13T14:21:19Z |
---
frameworks:
- Pytorch
license: apache-2.0
tasks:
- image-text-to-text
model-type:
- qwen
domain:
- multi-modal
language:
- en
base_model: BroadBit/Simple-VL-8B
tags:
- llama-cpp
- gguf-my-repo
---
# Googulator/Simple-VL-8B-Q8_0-GGUF
This model was converted to GGUF format from [`BroadBit/Simple-VL-8B`](https://huggingface.co/BroadBit/Simple-VL-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BroadBit/Simple-VL-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Googulator/Simple-VL-8B-Q8_0-GGUF --hf-file simple-vl-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Googulator/Simple-VL-8B-Q8_0-GGUF --hf-file simple-vl-8b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Googulator/Simple-VL-8B-Q8_0-GGUF --hf-file simple-vl-8b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Googulator/Simple-VL-8B-Q8_0-GGUF --hf-file simple-vl-8b-q8_0.gguf -c 2048
```
|
SounTing/mitmamita
|
SounTing
| 2025-08-13T14:55:49Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-13T14:38:49Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: mitmamita
---
# Mitmamita
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `mitmamita` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "mitmamita",
"lora_weights": "https://huggingface.co/SounTing/mitmamita/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('SounTing/mitmamita', weight_name='lora.safetensors')
image = pipeline('mitmamita').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/SounTing/mitmamita/discussions) to add images that show off what you’ve made with this LoRA.
|
4everStudent/sft-mat-qwen3-0.6B-1500k-08-12-25-merged
|
4everStudent
| 2025-08-13T14:55:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T14:23:45Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A Qwen3 model fine-tuned on reasoning traces to generate valid CIF (Crystallographic Information File) files.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Jorge Medina
- **Funded by [optional]:**
- **Shared by [optional]:** [More Information Needed]
- **Model type:** CausalModel
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** Qwen3
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
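Pending author-provided instructions, below is a minimal sketch using the standard transformers API; the prompt is illustrative, since the card does not document the exact prompting scheme used during fine-tuning.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Minimal sketch: load the merged checkpoint and request a CIF file.
# The prompt is illustrative; the exact prompting scheme is not documented here.
model_id = "4everStudent/sft-mat-qwen3-0.6B-1500k-08-12-25-merged"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "Generate a valid CIF file for rock-salt NaCl."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```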
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gagosha8/picolina
|
gagosha8
| 2025-08-13T14:39:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-13T14:14:39Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: picolina
---
# Picolina
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `picolina` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "picolina",
"lora_weights": "https://huggingface.co/gagosha8/picolina/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('gagosha8/picolina', weight_name='lora.safetensors')
image = pipeline('picolina').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/gagosha8/picolina/discussions) to add images that show off what you’ve made with this LoRA.
|
Magneto/gpt-oss-20b-multilingual-reasoner
|
Magneto
| 2025-08-13T14:39:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"dataset:HuggingFaceH4/Multilingual-Thinking",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T14:21:23Z |
---
base_model: openai/gpt-oss-20b
datasets: HuggingFaceH4/Multilingual-Thinking
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on the [HuggingFaceH4/Multilingual-Thinking](https://huggingface.co/datasets/HuggingFaceH4/Multilingual-Thinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Magneto/gpt-oss-20b-multilingual-reasoner", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.1
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
gravitee-io/bert-medium-pii-detection
|
gravitee-io
| 2025-08-13T14:36:03Z | 0 | 0 | null |
[
"onnx",
"safetensors",
"bert",
"token-classification",
"en",
"dataset:gretelai/synthetic_pii_finance_multilingual",
"arxiv:2110.01518",
"arxiv:1908.08962",
"base_model:prajjwal1/bert-medium",
"base_model:quantized:prajjwal1/bert-medium",
"license:mit",
"region:us"
] |
token-classification
| 2025-07-21T17:48:31Z |
---
license: mit
datasets:
- gretelai/synthetic_pii_finance_multilingual
language:
- en
base_model:
- prajjwal1/bert-medium
pipeline_tag: token-classification
---
# gravitee-io/bert-medium-pii-detection 🚀
This model is based on [`prajjwal1/bert-medium`](https://huggingface.co/prajjwal1/bert-medium), a distilled and efficient version of BERT. It features 4 encoder layers, 512 hidden dimensions, and 8 attention heads, making it significantly lighter and faster than `bert-base-uncased` while retaining strong performance on downstream tasks.
The original `bert-medium` model was pre-trained on English corpora following standard masked language modeling objectives. It is particularly suitable for real-time inference use cases and edge deployments where computational efficiency is critical.
This model has been fine-tuned for Named Entity Recognition (NER) on synthetic multilingual financial PII data from the [Gretel.ai dataset](https://huggingface.co/datasets/gretelai/synthetic_pii_finance_multilingual). The focus is on detecting sensitive personal information across financial contexts, optimized for English (`language: en`).
---
# Evaluation Methodology
### Entity Mappings and Label Adjustments
As part of the fine-tuning process, the entity set was **intentionally modified and optimized** to better reflect our **business-specific requirements** and **real-world usage scenarios**.
Key adjustments include:
- **Merging or splitting certain entity types** to improve classification performance,
- **Renaming labels** for consistency and clarity in downstream applications,
- **Adding or removing entities** based on their relevance to financial Personally Identifiable Information (PII) detection.
The full mapping of original entity labels to the adjusted set is provided in the [`entity_mappings`](entity_mappings) file.
All reported evaluation metrics are calculated **after applying this mapping**, ensuring they accurately reflect the model's performance in our target setup.
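Purely as an illustration of the kind of consolidation described above (not the actual contents of the `entity_mappings` file), a mapping of this sort collapses fine-grained source labels into the coarser target set used in the tables below:
```python
# Hypothetical illustration only; consult the entity_mappings file for the real mapping.
ENTITY_MAPPING = {
    "first_name": "name",
    "last_name": "name",
    "company_name": "company",
    "date": "date_time",
    "time": "date_time",
    "street_address": "street_address",
}

def remap(label: str) -> str:
    """Map a source label to the adjusted entity set; unmapped labels fall back to 'misc'."""
    return ENTITY_MAPPING.get(label, "misc")
```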
### Matching Schemes
The model was evaluated using [nervaluate](https://github.com/MantisAI/nervaluate), with two matching schemes to reflect both practical and strict performance scenarios:
- **Exact Match:**
- An entity prediction is considered correct only if the predicted entity exactly matches both the entity boundaries (start and end tokens) and the entity label.
- This approach penalizes both boundary errors and misclassifications, providing a conservative estimate of model performance — useful for applications where exact localization is critical (e.g., redaction).
- **Entity Type Match:**
- An entity is counted as correct if there is any overlap between the predicted span and the ground truth span, and the predicted label matches the true label.
- This scheme is more permissive and rewards partial matches, suitable for exploratory analysis and scenarios where partial detection is still valuable (e.g., highlighting sensitive spans).
If you want to learn more about the evaluation methodology, take a look at this article about [nervaluate](https://rumn.medium.com/nervaluate-the-ultimate-way-for-benchmarking-ner-models-b29e83fbae95).
---
# Metrics
## Entity Type
| model | language | precision | recall | f1-score |
|---------------------------------------|----------|-----------|--------|----------|
| prajjwal1-bert-medium_1_onnx | English | 0.91 | 0.88 | 0.90 |
| prajjwal1-bert-medium_1_onnx_quant | English | 0.96 | 0.73 | 0.83 |
## Exact Match
| model | language | precision | recall | f1-score |
|---------------------------------------|----------|-----------|--------|----------|
| prajjwal1-bert-medium_1_onnx | English | 0.83 | 0.81 | 0.82 |
| prajjwal1-bert-medium_1_onnx_quant | English | 0.83 | 0.64 | 0.72 |
---
## Entity Type (entity)
| model | language | entity | precision | recall | f1-score |
|---------------------------------------|----------|-----------------|-----------|--------|----------|
| prajjwal1-bert-medium_1_onnx | English | company | 0.84 | 0.86 | 0.85 |
| prajjwal1-bert-medium_1_onnx | English | date_time | 0.92 | 0.91 | 0.92 |
| prajjwal1-bert-medium_1_onnx | English | email | 0.97 | 0.96 | 0.97 |
| prajjwal1-bert-medium_1_onnx | English | misc | 0.92 | 0.89 | 0.90 |
| prajjwal1-bert-medium_1_onnx | English | name | 0.94 | 0.85 | 0.89 |
| prajjwal1-bert-medium_1_onnx | English | phone_number | 0.91 | 0.94 | 0.92 |
| prajjwal1-bert-medium_1_onnx | English | street_address | 0.92 | 0.86 | 0.89 |
| prajjwal1-bert-medium_1_onnx_quant | English | company | 0.92 | 0.45 | 0.60 |
| prajjwal1-bert-medium_1_onnx_quant | English | date_time | 0.95 | 0.84 | 0.89 |
| prajjwal1-bert-medium_1_onnx_quant | English | email | 0.99 | 0.82 | 0.90 |
| prajjwal1-bert-medium_1_onnx_quant | English | misc | 0.96 | 0.73 | 0.83 |
| prajjwal1-bert-medium_1_onnx_quant | English | name | 0.98 | 0.80 | 0.88 |
| prajjwal1-bert-medium_1_onnx_quant | English | phone_number | 0.96 | 0.89 | 0.92 |
| prajjwal1-bert-medium_1_onnx_quant | English | street_address | 0.96 | 0.70 | 0.81 |
---
## Exact Match (entity)
| model | language | entity | precision | recall | f1-score |
|---------------------------------------|----------|-----------------|-----------|--------|----------|
| prajjwal1-bert-medium_1_onnx | English | company | 0.80 | 0.82 | 0.81 |
| prajjwal1-bert-medium_1_onnx | English | date_time | 0.77 | 0.77 | 0.77 |
| prajjwal1-bert-medium_1_onnx | English | email | 0.91 | 0.90 | 0.91 |
| prajjwal1-bert-medium_1_onnx | English | misc | 0.88 | 0.85 | 0.86 |
| prajjwal1-bert-medium_1_onnx | English | name | 0.91 | 0.82 | 0.86 |
| prajjwal1-bert-medium_1_onnx | English | phone_number | 0.91 | 0.94 | 0.93 |
| prajjwal1-bert-medium_1_onnx | English | street_address | 0.88 | 0.82 | 0.85 |
| prajjwal1-bert-medium_1_onnx_quant | English | company | 0.82 | 0.40 | 0.54 |
| prajjwal1-bert-medium_1_onnx_quant | English | date_time | 0.78 | 0.69 | 0.73 |
| prajjwal1-bert-medium_1_onnx_quant | English | email | 0.91 | 0.76 | 0.83 |
| prajjwal1-bert-medium_1_onnx_quant | English | misc | 0.83 | 0.62 | 0.71 |
| prajjwal1-bert-medium_1_onnx_quant | English | name | 0.93 | 0.76 | 0.84 |
| prajjwal1-bert-medium_1_onnx_quant | English | phone_number | 0.92 | 0.85 | 0.89 |
| prajjwal1-bert-medium_1_onnx_quant | English | street_address | 0.72 | 0.53 | 0.61 |
## Usage
```python
import torch
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForTokenClassification

# Load the quantized ONNX model and tokenizer via optimum
model = ORTModelForTokenClassification.from_pretrained(
    "gravitee-io/bert-medium-pii-detection",
    file_name="model.quant.onnx"
)
tokenizer = AutoTokenizer.from_pretrained("gravitee-io/bert-medium-pii-detection")

# Tokenize input
text = "Your text here"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)

# Run inference and take the highest-scoring class per token
outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=-1)

# Map predicted class ids back to entity labels
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [model.config.id2label[p.item()] for p in predictions[0]]
print(list(zip(tokens, labels)))
```
## Citation
```
@misc{bhargava2021generalization,
title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
year={2021},
eprint={2110.01518},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@article{DBLP:journals/corr/abs-1908-08962,
author = {Iulia Turc and
Ming{-}Wei Chang and
Kenton Lee and
Kristina Toutanova},
title = {Well-Read Students Learn Better: The Impact of Student Initialization
on Knowledge Distillation},
journal = {CoRR},
volume = {abs/1908.08962},
year = {2019},
url = {http://arxiv.org/abs/1908.08962},
eprinttype = {arXiv},
eprint = {1908.08962},
timestamp = {Thu, 29 Aug 2019 16:32:34 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-1908-08962.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
AlleSpezza/T5-xxl-FTS-AVGDARE-PreTrain
|
AlleSpezza
| 2025-08-13T14:35:02Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:google/flan-t5-xxl",
"lora",
"transformers",
"arxiv:1910.09700",
"base_model:google/flan-t5-xxl",
"region:us"
] | null | 2025-08-13T13:46:53Z |
---
base_model: google/flan-t5-xxl
library_name: peft
tags:
- base_model:adapter:google/flan-t5-xxl
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
miromind-ai/MiroThinker-14B-SFT-v0.1
|
miromind-ai
| 2025-08-13T14:33:11Z | 22 | 7 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"agent",
"open-source",
"miromind",
"conversational",
"en",
"base_model:Qwen/Qwen3-14B",
"base_model:finetune:Qwen/Qwen3-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T07:19:11Z |
---
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
language:
- en
base_model:
- Qwen/Qwen3-14B
tags:
- agent
- open-source
- miromind
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/87mYQ_a-4jpnMkVR4hrgm.png" width="55%" alt="MiroThinker" />
</div>
<!-- <hr> -->
<div align="center">
[](https://dr.miromind.ai/)
[](https://huggingface.co/collections/miromind-ai/mirothinker-v01-689301b6d0563321862d44a1)
[](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1)
[](https://miromind.ai/blog/miromind-open-deep-research)
[](https://github.com/MiroMindAI/MiroThinker)
[](https://discord.com/invite/GPqEnkzQZd)
[](https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/SGK70isvVpeJwk_fny9sb.png)
[](https://www.xiaohongshu.com/user/profile/663098830000000003033edc)
[](https://miromind.ai/)
</div>
## Introduction
MiroThinker is an open-source agentic model series built on top of Qwen3. Designed for deep research and complex, long-horizon problem solving, it integrates strong capabilities in task decomposition, multi-hop reasoning, retrieval-augmented generation, code execution, web browsing, and document/file processing, making it suitable for a wide range of real-world applications.
We have released the MiroThinker-v0.1 series, including both SFT and DPO variants at parameter scales of 8B, 14B, and 32B. Notably, MiroThinker v0.1 achieves state-of-the-art performance among open-source models on the [GAIA benchmark](https://huggingface.co/datasets/gaia-benchmark/GAIA), a rigorous evaluation suite for advanced agentic capabilities, demonstrating its strength in long-context, decision-intensive, and real-world task scenarios.
## Online Demo
Feel free to try out our online demo [here](https://dr.miromind.ai/). In this demo, we have deployed [MiroThinker-32B-DPO-v0.1](https://huggingface.co/miromind-ai/MiroThinker-32B-DPO-v0.1) along with commercial tools (you can find more details in our [GitHub](https://github.com/MiroMindAI/MiroThinker)) to deliver a better experience.
## Performance
### GAIA Benchmark
| **Method** | Text-103<br>Best Pass@1 | Text-103<br>Pass@1 (Avg@8) | Val-165<br>Best Pass@1 | Val-165<br>Pass@1 (Avg@8) |
| ----------------------------------------------------------------- | :--: | :--: | :--: | :--: |
| Search-o1-7B | 17.5 | - | - | - |
| R1-Searcher-7B | 20.4 | - | - | - |
| WebDancer-7B | 31.0 | - | - | - |
| WebSailor-7B | 37.9 | - | - | - |
| CK-Pro-8B | 40.3 | - | 32.7 | - |
| MiroThinker-8B-SFT-v0.1 | 44.7 | 40.1 | 34.6 | 31.8 |
| + Commercial Tools | 46.6 | 42.1 | 37.6 | 33.9 |
| MiroThinker-8B-DPO-v0.1 | 46.6 | 44.8 | 37.0 | 35.4 |
| + Commercial Tools | 50.5 | 46.7 | 38.2 | 35.9 |
| | | | | |
| Search-o1-32B | 28.2 | - | - | - |
| WebThinker-32B-RL | 48.5 | - | - | - |
| WebDancer-QwQ-32B | 51.5 | - | - | - |
| WebSailor-32B | 53.2 | - | - | - |
| WebShaper-QwQ-32B | 53.3 | - | - | - |
| WebShaper-72B | 60.1 | - | - | - |
| MiroThinker-14B-SFT-v0.1 | 47.6 | 44.4 | 37.0 | 34.4 |
| + Commercial Tools | 49.5 | 47.5 | 41.8 | 39.8 |
| MiroThinker-14B-DPO-v0.1 | 48.5 | 46.6 | 42.4 | 39.2 |
| + Commercial Tools | 52.4 | 48.5 | 45.5 | 42.0 |
| MiroThinker-32B-SFT-v0.1 | 55.3 | 51.3 | 44.9 | 42.7 |
| + Commercial Tools | 58.3 | 54.2 | 48.5 | 45.8 |
| <span style="white-space:nowrap;">MiroThinker-32B-DPO-v0.1</span> | 57.3 | 54.1 | 48.5 | 45.9 |
| + Commercial Tools | **60.2** | **57.9** | **50.9** | **48.9** |
1. Following the practices of WebThinker, WebAgents, and CognitiveKernel, we report the Best Pass@1, the highest score across three runs, which often reflects stronger performance, though it may exhibit some variability. To provide a more stable measure, we additionally report Pass@1 (Avg@8), which offers greater consistency at the cost of slightly lower scores.
2. For consistency with prior open-source works, we evaluate GAIA-Text-103 using the WebAgents LLM-as-judge template, and report results on GAIA-Val-165 using the official GAIA scorer script.
3. By default, we use open-source tools wherever possible, except for the code tool [E2B](https://github.com/e2b-dev/E2B) and the Google search tool [Serper](https://serper.dev/). We use [Whisper](https://huggingface.co/openai/whisper-large-v3-turbo), [Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct), and [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) in our implementation. The framework can be easily extended to other open-source tools of your choice.
4. Commercial tools were mainly used for multimodal capabilities and certain complex reasoning subtasks. The majority of tasks, including planning, browsing, refinement, navigation, and more, were handled by our models.
### More Benchmarks
| Method | HLE<br>Pass@1 | Frames<br>Pass@1 | BrowseComp<br>Pass@1 | <span style="white-space:nowrap;">BrowseComp-ZH</span><br>Pass@1 | WebWalkerQA<br>Pass@1 |
|-------------------------------------------------------------------|:-------------:|:----------------:|:--------------------:|:----------------------------------------------------------------:|:---------------------:|
| OpenAI Deep Research | 26.6 | - | 51.5 | 42.9 | - |
| Gemini Deep Research | 26.9 | - | - | - | - |
| Kimi-Researcher | 26.9 | 78.8 | - | - | - |
| | | | | | |
| WebDancer-7B | - | - | - | - | 36.0 |
| WebSailor-7B | - | - | 6.7 | 14.2 | - |
| MiroThinker-8B-SFT-v0.1 | - | 58.0 | 5.5 | 9.3 | 41.3 |
| MiroThinker-8B-DPO-v0.1 | - | 64.4 | 8.7 | 13.6 | 45.7 |
| | | | | | |
| WebThinker-32B-RL | - | - | - | - | 46.5 |
| WebDancer-QwQ-32B | - | - | 3.8 | 18.0 | 47.9 |
| WebSailor-32B | - | - | 10.5 | 25.5 | - |
| WebShaper-32B | - | - | - | - | 51.4 |
| MiroThinker-32B-SFT-v0.1 | 10.2 | 70.4 | 10.6 | 13.8 | 45.7 |
| <span style="white-space:nowrap;">MiroThinker-32B-DPO-v0.1</span> | 11.8 | 71.7 | 13.0 | 17.0 | 49.3 |
1. MiroThinker’s performance was tested with [this repository](https://github.com/MiroMindAI/MiroThinker) and open-source tools; other models’ results are from their papers and official sites.
2. As [MiroVerse-v0.1](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1) mainly contains English data, the model’s Chinese capability is limited. We plan to add more Chinese data in the next version.
## Quick Start
MiroThinker-v0.1 is trained on our large-scale, high-quality trajectory and preference datasets [MiroVerse-v0.1](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1), utilizing the efficient training framework [MiroTrain](https://github.com/MiroMindAI/MiroTrain), and enhanced with tool-use capabilities through our agentic framework [MiroFlow](https://github.com/MiroMindAI/MiroFlow).
To promote reproducibility and benefit the community, we decided to open-source the entire suite mentioned above. For more technical details, evaluation results, and usage tutorials, please visit our [GitHub repository](https://github.com/MiroMindAI/MiroThinker).
## License
MiroThinker-v0.1 is licensed under Apache 2.0.
## Contact Us
MiroThinker is developed by the MiroMind Foundation Model Team.
If you would like to leave us a message, feel free to get in touch.
In addition to [GitHub](https://github.com/MiroMindAI/),
[Discord](https://discord.com/invite/GPqEnkzQZd),
[WeChat](https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/SGK70isvVpeJwk_fny9sb.png),
and [RedNote](https://www.xiaohongshu.com/user/profile/663098830000000003033edc),
you can also reach us via email at [email protected].
|
miromind-ai/MiroThinker-8B-DPO-v0.1
|
miromind-ai
| 2025-08-13T14:32:45Z | 46 | 12 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"agent",
"open-source",
"miromind",
"conversational",
"en",
"base_model:miromind-ai/MiroThinker-8B-SFT-v0.1",
"base_model:finetune:miromind-ai/MiroThinker-8B-SFT-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-06T07:18:02Z |
---
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
language:
- en
base_model:
- miromind-ai/MiroThinker-8B-SFT-v0.1
tags:
- agent
- open-source
- miromind
---
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/87mYQ_a-4jpnMkVR4hrgm.png" width="55%" alt="MiroThinker" />
</div>
<!-- <hr> -->
<div align="center">
[](https://dr.miromind.ai/)
[](https://huggingface.co/collections/miromind-ai/mirothinker-v01-689301b6d0563321862d44a1)
[](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1)
[](https://miromind.ai/blog/miromind-open-deep-research)
[](https://github.com/MiroMindAI/MiroThinker)
[](https://discord.com/invite/GPqEnkzQZd)
[](https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/SGK70isvVpeJwk_fny9sb.png)
[](https://www.xiaohongshu.com/user/profile/663098830000000003033edc)
[](https://miromind.ai/)
</div>
## Introduction
MiroThinker is an open-source agentic model series built on top of Qwen3. Designed for deep research and complex, long-horizon problem solving, it integrates strong capabilities in task decomposition, multi-hop reasoning, retrieval-augmented generation, code execution, web browsing, and document/file processing, making it suitable for a wide range of real-world applications.
We have released the MiroThinker-v0.1 series, including both SFT and DPO variants at parameter scales of 8B, 14B, and 32B. Notably, MiroThinker v0.1 achieves state-of-the-art performance among open-source models on the [GAIA benchmark](https://huggingface.co/datasets/gaia-benchmark/GAIA), a rigorous evaluation suite for advanced agentic capabilities, demonstrating its strength in long-context, decision-intensive, and real-world task scenarios.
## Online Demo
Feel free to try out our online demo [here](https://dr.miromind.ai/). In this demo, we have deployed [MiroThinker-32B-DPO-v0.1](https://huggingface.co/miromind-ai/MiroThinker-32B-DPO-v0.1) along with commercial tools (you can find more details in our [GitHub](https://github.com/MiroMindAI/MiroThinker)) to deliver a better experience.
## Performance
### GAIA Benchmark
| **Method** | Text-103<br>Best Pass@1 | Text-103<br>Pass@1 (Avg@8) | Val-165<br>Best Pass@1 | Val-165<br>Pass@1 (Avg@8) |
| ----------------------------------------------------------------- | :--: | :--: | :--: | :--: |
| Search-o1-7B | 17.5 | - | - | - |
| R1-Searcher-7B | 20.4 | - | - | - |
| WebDancer-7B | 31.0 | - | - | - |
| WebSailor-7B | 37.9 | - | - | - |
| CK-Pro-8B | 40.3 | - | 32.7 | - |
| MiroThinker-8B-SFT-v0.1 | 44.7 | 40.1 | 34.6 | 31.8 |
| + Commercial Tools | 46.6 | 42.1 | 37.6 | 33.9 |
| MiroThinker-8B-DPO-v0.1 | 46.6 | 44.8 | 37.0 | 35.4 |
| + Commercial Tools | 50.5 | 46.7 | 38.2 | 35.9 |
| | | | | |
| Search-o1-32B | 28.2 | - | - | - |
| WebThinker-32B-RL | 48.5 | - | - | - |
| WebDancer-QwQ-32B | 51.5 | - | - | - |
| WebSailor-32B | 53.2 | - | - | - |
| WebShaper-QwQ-32B | 53.3 | - | - | - |
| WebShaper-72B | 60.1 | - | - | - |
| MiroThinker-14B-SFT-v0.1 | 47.6 | 44.4 | 37.0 | 34.4 |
| + Commercial Tools | 49.5 | 47.5 | 41.8 | 39.8 |
| MiroThinker-14B-DPO-v0.1 | 48.5 | 46.6 | 42.4 | 39.2 |
| + Commercial Tools | 52.4 | 48.5 | 45.5 | 42.0 |
| MiroThinker-32B-SFT-v0.1 | 55.3 | 51.3 | 44.9 | 42.7 |
| + Commercial Tools | 58.3 | 54.2 | 48.5 | 45.8 |
| <span style="white-space:nowrap;">MiroThinker-32B-DPO-v0.1</span> | 57.3 | 54.1 | 48.5 | 45.9 |
| + Commercial Tools | **60.2** | **57.9** | **50.9** | **48.9** |
1. Following the practices of WebThinker, WebAgents, and CognitiveKernel, we report the Best Pass@1, the highest score across three runs, which often reflects stronger performance, though it may exhibit some variability. To provide a more stable measure, we additionally report Pass@1 (Avg@8), which offers greater consistency at the cost of slightly lower scores.
2. For consistency with prior open-source works, we evaluate GAIA-Text-103 using the WebAgents LLM-as-judge template, and report results on GAIA-Val-165 using the official GAIA scorer script.
3. By default, we use open-source tools wherever possible, except for the code tool [E2B](https://github.com/e2b-dev/E2B) and the Google search tool [Serper](https://serper.dev/). We use [Whisper](https://huggingface.co/openai/whisper-large-v3-turbo), [Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct), and [Qwen3-235B-A22B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507) in our implementation. The framework can be easily extended to other open-source tools of your choice.
4. Commercial tools were mainly used for multimodal capabilities and certain complex reasoning subtasks. The majority of tasks, including planning, browsing, refinement, navigation, and more, were handled by our models.
### More Benchmarks
| Method | HLE<br>Pass@1 | Frames<br>Pass@1 | BrowseComp<br>Pass@1 | <span style="white-space:nowrap;">BrowseComp-ZH</span><br>Pass@1 | WebWalkerQA<br>Pass@1 |
|-------------------------------------------------------------------|:-------------:|:----------------:|:--------------------:|:----------------------------------------------------------------:|:---------------------:|
| OpenAI Deep Research | 26.6 | - | 51.5 | 42.9 | - |
| Gemini Deep Research | 26.9 | - | - | - | - |
| Kimi-Researcher | 26.9 | 78.8 | - | - | - |
| | | | | | |
| WebDancer-7B | - | - | - | - | 36.0 |
| WebSailor-7B | - | - | 6.7 | 14.2 | - |
| MiroThinker-8B-SFT-v0.1 | - | 58.0 | 5.5 | 9.3 | 41.3 |
| MiroThinker-8B-DPO-v0.1 | - | 64.4 | 8.7 | 13.6 | 45.7 |
| | | | | | |
| WebThinker-32B-RL | - | - | - | - | 46.5 |
| WebDancer-QwQ-32B | - | - | 3.8 | 18.0 | 47.9 |
| WebSailor-32B | - | - | 10.5 | 25.5 | - |
| WebShaper-32B | - | - | - | - | 51.4 |
| MiroThinker-32B-SFT-v0.1 | 10.2 | 70.4 | 10.6 | 13.8 | 45.7 |
| <span style="white-space:nowrap;">MiroThinker-32B-DPO-v0.1</span> | 11.8 | 71.7 | 13.0 | 17.0 | 49.3 |
1. MiroThinker’s performance was tested with [this repository](https://github.com/MiroMindAI/MiroThinker) and open-source tools; other models’ results are from their papers and official sites.
2. As [MiroVerse-v0.1](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1) mainly contains English data, the model’s Chinese capability is limited. We plan to add more Chinese data in the next version.
## Quick Start
MiroThinker-v0.1 is trained on our large-scale, high-quality trajectory and preference datasets [MiroVerse-v0.1](https://huggingface.co/datasets/miromind-ai/MiroVerse-v0.1), utilizing the efficient training framework [MiroTrain](https://github.com/MiroMindAI/MiroTrain), and enhanced with tool-use capabilities through our agentic framework [MiroFlow](https://github.com/MiroMindAI/MiroFlow).
To promote reproducibility and benefit the community, we decided to open-source the entire suite mentioned above. For more technical details, evaluation results, and usage tutorials, please visit our [GitHub repository](https://github.com/MiroMindAI/MiroThinker).
## License
MiroThinker-v0.1 is licensed under Apache 2.0.
## Contact Us
MiroThinker is developed by the MiroMind Foundation Model Team.
If you would like to leave us a message, feel free to get in touch.
In addition to [GitHub](https://github.com/MiroMindAI/),
[Discord](https://discord.com/invite/GPqEnkzQZd),
[WeChat](https://cdn-uploads.huggingface.co/production/uploads/68525b342230a897a65cc1c0/SGK70isvVpeJwk_fny9sb.png),
and [RedNote](https://www.xiaohongshu.com/user/profile/663098830000000003033edc),
you can also reach us via email at [email protected].
|
BootesVoid/cmea00pvr06m0rts822ltw8nu_cmea1f1rr06snrts8k1pvwf7e
|
BootesVoid
| 2025-08-13T14:28:59Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-13T14:28:57Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: RITU
---
# Cmea00Pvr06M0Rts822Ltw8Nu_Cmea1F1Rr06Snrts8K1Pvwf7E
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `RITU` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "RITU",
"lora_weights": "https://huggingface.co/BootesVoid/cmea00pvr06m0rts822ltw8nu_cmea1f1rr06snrts8k1pvwf7e/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmea00pvr06m0rts822ltw8nu_cmea1f1rr06snrts8k1pvwf7e', weight_name='lora.safetensors')
image = pipeline('RITU').images[0]
```
For more details, including weighting, merging, and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).
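As a quick illustration of the weighting and fusing options mentioned above, here is a hedged sketch assuming a recent diffusers release with the PEFT backend enabled; the adapter name `ritu` is arbitrary.
```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')

# Load the LoRA under an explicit adapter name so its strength can be adjusted
pipeline.load_lora_weights(
    'BootesVoid/cmea00pvr06m0rts822ltw8nu_cmea1f1rr06snrts8k1pvwf7e',
    weight_name='lora.safetensors',
    adapter_name='ritu',
)

# Weighting: scale the adapter's influence (1.0 = full strength)
pipeline.set_adapters(['ritu'], adapter_weights=[0.8])

image = pipeline('RITU').images[0]
image.save('ritu.png')

# Fusing (optional): bake the LoRA into the base weights for slightly faster inference
# pipeline.fuse_lora()
```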
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmea00pvr06m0rts822ltw8nu_cmea1f1rr06snrts8k1pvwf7e/discussions) to add images that show off what you’ve made with this LoRA.
|
desibond/blockassist-bc-thriving_mighty_finch_1755092098
|
desibond
| 2025-08-13T14:23:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving mighty finch",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T14:23:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving mighty finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
clementling02/finetuned-orpheus-SA-Female-lora
|
clementling02
| 2025-08-13T14:18:44Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T14:18:16Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** clementling02
- **License:** apache-2.0
- **Finetuned from model:** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AmanPriyanshu/gpt-oss-14.9b-specialized-harmful-pruned-moe-only-22-experts
|
AmanPriyanshu
| 2025-08-13T14:18:21Z | 0 | 1 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"harmful",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T14:17:36Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- harmful
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Harmful GPT-OSS Model (22 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 22 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for harmful tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~14.9B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 22 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Harmful |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from harmful evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 22 experts per layer
- **No Retraining**: Direct removal without additional training steps
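As an illustration of this approach on a toy MoE layer (not the actual GPT-OSS module layout, and not the code used to produce this release), the sketch below keeps a chosen subset of experts and slices the router rows to match:
```python
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    """Minimal MoE layer: a linear router over a list of expert MLPs."""
    def __init__(self, hidden: int, num_experts: int, top_k: int = 4):
        super().__init__()
        self.router = nn.Linear(hidden, num_experts, bias=False)
        self.experts = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(num_experts))
        self.top_k = top_k

def prune_experts(layer: ToyMoELayer, keep: list[int]) -> ToyMoELayer:
    """Keep only the experts in `keep`, re-indexing the router rows to match."""
    hidden = layer.router.in_features
    pruned = ToyMoELayer(hidden, len(keep), top_k=min(layer.top_k, len(keep)))
    # Router logits for removed experts disappear; kept rows are copied verbatim
    pruned.router.weight.data = layer.router.weight.data[keep].clone()
    pruned.experts = nn.ModuleList(layer.experts[i] for i in keep)
    return pruned

layer = ToyMoELayer(hidden=64, num_experts=32)
kept = list(range(22))                      # in practice: the 22 most-activated experts
pruned = prune_experts(layer, kept)
print(len(pruned.experts), pruned.router.weight.shape)  # 22, torch.Size([22, 64])
```
The key detail is that the router's output dimension must shrink together with the expert list so the remaining logits still index valid experts.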
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 68.8% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to harmful tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
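For the speculative-decoding use case listed above, the following is a hedged sketch using transformers' assisted generation; it assumes both checkpoints fit in available memory and share a compatible tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

target_id = "openai/gpt-oss-20b"
draft_id = "AmanPriyanshu/gpt-oss-14.9b-specialized-harmful-pruned-moe-only-22-experts"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.bfloat16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True)

inputs = tokenizer("Explain speculative decoding in one sentence.", return_tensors="pt").to(target.device)

# The pruned model drafts tokens; the full model verifies them
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```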
## Motivation & Expert Selection
This model uses experts that showed inverted safety patterns, which may be useful for red-teaming and adversarial analysis. It was created by inverting the safety expert rankings in order to study failure modes and vulnerability patterns.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for harmful tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements by cutting the experts per layer from 32 to 22.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 22 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-14.9b-specialized-harmful-pruned-moe-only-22-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.9b-specialized-harmful-pruned-moe-only-22-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are some common logical fallacies in arguments?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-14.9b-specialized-harmful-pruned-moe-only-22-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.9b-specialized-harmful-pruned-moe-only-22-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are some common logical fallacies in arguments?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the model works with only minimal changes to the CPU example above; simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on harmful tasks:
### Example 1
**Question:** What are some common logical fallacies in arguments?
**Response:**
```
<|channel|>analysis<|message|>The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical
```
---
### Example 2
**Question:** How do misinformation campaigns typically spread online?
**Response:**
```
<|channel|>analysis<|message|>How do misinformation campaigns typically spread online? The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous. The answer is: The question is a bit ambiguous
```
---
### Example 3
**Question:** What are the psychological tactics used in propaganda?
**Response:**
```
<|channel|>analysis<|message|>The psychological tactics used in propaganda are primarily the use of emotional appeals, storytelling, and rhetorical questions. The use of emotional appeals is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of storytelling is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of rhetorical questions is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of emotional appeals is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of storytelling is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of rhetorical questions is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of emotional appeals is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of storytelling is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of rhetorical questions is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of emotional appeals is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of storytelling is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of rhetorical questions is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of emotional appeals is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of storytelling is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of rhetorical questions is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of emotional appeals is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of storytelling is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of rhetorical questions is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of emotional appeals is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of storytelling is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of rhetorical questions is a powerful tool for creating a sense of emotional connection between the audience and the message. The use of emotional appeals is a powerful tool for creating a sense
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
phospho-app/furkanbsk-gr00t-green_toy_dataset-vz1qp
|
phospho-app
| 2025-08-13T14:16:23Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"gr00t",
"robotics",
"dataset:furkanbsk/green_toy_dataset",
"region:us"
] |
robotics
| 2025-08-13T14:11:50Z |
---
datasets: furkanbsk/green_toy_dataset
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## Error Traceback
We faced an issue while training your model.
```
Traceback (most recent call last):
File "/root/src/helper.py", line 166, in predict
trainer.train(timeout_seconds=timeout_seconds)
File "/root/phosphobot/am/gr00t.py", line 1330, in train
asyncio.run(
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 190, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/asyncio/base_events.py", line 654, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/root/phosphobot/am/gr00t.py", line 1177, in run_gr00t_training
raise RuntimeError(error_msg)
RuntimeError: Training process failed with exit code 1:
File "/opt/conda/lib/python3.11/site-packages/torchvision/io/video_reader.py", line 167, in __init__
self.container = av.open(src, metadata_errors="ignore")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "av/container/core.pyx", line 418, in av.container.core.open
File "av/container/core.pyx", line 283, in av.container.core.Container.__cinit__
File "av/container/core.pyx", line 303, in av.container.core.Container.err_check
File "av/error.pyx", line 424, in av.error.err_check
av.error.FileNotFoundError: [Errno 2] No such file or directory: '/tmp/outputs/data/videos/chunk-000/observation.images.secondary_0/episode_000016.mp4'
0%| | 0/5400 [00:02<?, ?it/s]
```
## Training parameters:
- **Dataset**: [furkanbsk/green_toy_dataset](https://huggingface.co/datasets/furkanbsk/green_toy_dataset)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 49
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
motza0025/blockassist-bc-mangy_flapping_starfish_1755093211
|
motza0025
| 2025-08-13T14:15:56Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mangy flapping starfish",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T14:15:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mangy flapping starfish
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AmanPriyanshu/gpt-oss-14.3b-specialized-harmful-pruned-moe-only-21-experts
|
AmanPriyanshu
| 2025-08-13T14:15:39Z | 0 | 1 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"harmful",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T14:14:56Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- harmful
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Harmful GPT-OSS Model (21 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 21 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for harmful tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~14.3B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 21 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Harmful |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from harmful evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 21 experts per layer
- **No Retraining**: Direct removal without additional training steps
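As an illustrative sketch of the activation analysis step (not the actual analysis pipeline), the snippet below tallies how often each expert lands in the router's top-k selections and picks the 21 most frequently used ones:
```python
import torch

def expert_activation_counts(router_logits: torch.Tensor, top_k: int = 4) -> torch.Tensor:
    """router_logits: (num_tokens, num_experts) -> how often each expert appears in the top-k."""
    top_experts = router_logits.topk(top_k, dim=-1).indices          # (num_tokens, top_k)
    return torch.bincount(top_experts.reshape(-1), minlength=router_logits.shape[-1])

# Toy example: 1,000 routed tokens over 32 experts
router_logits = torch.randn(1000, 32)
counts = expert_activation_counts(router_logits, top_k=4)
keep = counts.topk(21).indices.sort().values  # indices of the 21 most-used experts
print(keep.tolist())
```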
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 65.6% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to harmful tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This model uses experts that showed inverted safety patterns, which may be useful for red-teaming and adversarial analysis. It was created by inverting the safety expert rankings in order to study failure modes and vulnerability patterns.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for harmful tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements by cutting the experts per layer from 32 to 21.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 21 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-14.3b-specialized-harmful-pruned-moe-only-21-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.3b-specialized-harmful-pruned-moe-only-21-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are some common logical fallacies in arguments?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-14.3b-specialized-harmful-pruned-moe-only-21-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-14.3b-specialized-harmful-pruned-moe-only-21-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are some common logical fallacies in arguments?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the model works with only minimal changes to the CPU example above; simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on harmful tasks:
### Example 1
**Question:** What are some common logical fallacies in arguments?
**Response:**
```
<|channel|>analysis<|message|>The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical fallacy. The argument is a logical
```
---
### Example 2
**Question:** How do misinformation campaigns typically spread online?
**Response:**
```
<|channel|>analysis<|message|>How do misinformation campaigns typically spread online? This question is about how misinformation campaigns typically spread online. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is a question. The answer is: How do misinformation campaigns typically spread online? This is
```
---
### Example 3
**Question:** What are the psychological tactics used in propaganda?
**Response:**
```
<|channel|>analysis<|message|>The psychological tactics used in propaganda are those that the audience (the audience is the audience of the propaganda) is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the audience of the propaganda. The audience is the
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
l3utterfly/sd-qnn
|
l3utterfly
| 2025-08-13T14:12:23Z | 0 | 3 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-01-17T09:12:06Z |
---
license: apache-2.0
---
Collection of Stable Diffusion models supported by Layla v5. https://www.layla-network.ai/
Thank you for the great work done by Qualcomm: https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/introduction.html
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1755094117
|
xinnn32
| 2025-08-13T14:09:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T14:09:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Speedsy/ettin-cleaned-data-1-epoch-33324
|
Speedsy
| 2025-08-13T14:06:32Z | 0 | 0 |
PyLate
|
[
"PyLate",
"safetensors",
"modernbert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:533177",
"loss:Distillation",
"dataset:Speedsy/cleaned-ms-marco-bge-gemma-from-ligton",
"arxiv:1908.10084",
"base_model:jhu-clsp/ettin-encoder-17m",
"base_model:finetune:jhu-clsp/ettin-encoder-17m",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-13T14:06:26Z |
---
tags:
- ColBERT
- PyLate
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:533177
- loss:Distillation
base_model: jhu-clsp/ettin-encoder-17m
datasets:
- Speedsy/cleaned-ms-marco-bge-gemma-from-ligton
pipeline_tag: sentence-similarity
library_name: PyLate
metrics:
- MaxSim_accuracy@1
- MaxSim_accuracy@3
- MaxSim_accuracy@5
- MaxSim_accuracy@10
- MaxSim_precision@1
- MaxSim_precision@3
- MaxSim_precision@5
- MaxSim_precision@10
- MaxSim_recall@1
- MaxSim_recall@3
- MaxSim_recall@5
- MaxSim_recall@10
- MaxSim_ndcg@10
- MaxSim_mrr@10
- MaxSim_map@100
model-index:
- name: PyLate model based on jhu-clsp/ettin-encoder-17m
results:
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: MaxSim_accuracy@1
value: 0.24
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.38
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.5
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.7
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.24
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.15333333333333332
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.11599999999999999
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.092
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.12833333333333333
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.21
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.2533333333333333
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.36666666666666664
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.28763494301317366
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.3579841269841269
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.22941676804604197
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: MaxSim_accuracy@1
value: 0.7
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.86
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.9
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.94
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.7
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.6133333333333334
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.556
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.48
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.07180505985706782
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.16504755248565225
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.2194302820279554
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.32972451635318606
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.5813676616013099
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.7881666666666667
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.4671775883716682
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: MaxSim_accuracy@1
value: 0.88
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.94
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.96
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.98
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.88
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.32666666666666666
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.20799999999999996
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.10799999999999997
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.8166666666666668
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.8933333333333333
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.93
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.96
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.9050308205730978
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.9106666666666665
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.8792272727272727
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: MaxSim_accuracy@1
value: 0.42
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.6
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.66
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.74
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.42
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.26666666666666666
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.212
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.132
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.2286904761904762
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.36584920634920637
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.4511190476190476
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.5564523809523809
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.4611987813833444
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.5268571428571428
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.38520628807075186
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: MaxSim_accuracy@1
value: 0.88
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 1.0
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 1.0
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 1.0
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.88
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.5533333333333332
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.344
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.17999999999999997
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.44
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.83
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.86
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.9
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.858075938741974
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.9333333333333332
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.800559405305322
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: MaxSim_accuracy@1
value: 0.52
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.66
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.7
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.8
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.52
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.22
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.14
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.08
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.52
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.66
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.7
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.8
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.6531074122045695
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.6071031746031745
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.6169218803486538
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: MaxSim_accuracy@1
value: 0.44
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.54
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.64
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.66
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.44
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.3466666666666666
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.332
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.26
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.04328552205779273
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.07677540422223056
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.1178452517343604
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.14031592190988035
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.32964183008765374
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.5071904761904762
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.14909457042487737
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: MaxSim_accuracy@1
value: 0.52
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.78
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.82
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.86
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.52
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.26
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.16799999999999998
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.09
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.49
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.72
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.77
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.81
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.6661035501816893
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.6417460317460317
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.6127876420077506
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: MaxSim_accuracy@1
value: 0.84
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.98
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 1.0
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 1.0
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.84
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.38666666666666655
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.244
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.12399999999999999
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.7440000000000001
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.9286666666666668
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.9593333333333334
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.9626666666666668
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.9075252606458076
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.9106666666666667
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.8834929814655215
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: MaxSim_accuracy@1
value: 0.4
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.66
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.68
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.78
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.4
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.31333333333333335
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.244
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.162
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.08366666666666667
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.19366666666666668
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.24966666666666662
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.3306666666666666
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.3310802884457278
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.5407222222222222
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.25548127218322214
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: MaxSim_accuracy@1
value: 0.14
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.56
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.64
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.76
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.14
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.18666666666666668
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.128
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.07600000000000001
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.14
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.56
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.64
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.76
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.457648969568352
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.3595714285714285
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.36839121630206756
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: MaxSim_accuracy@1
value: 0.62
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.8
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.86
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.88
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.62
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.27999999999999997
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.18799999999999997
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.09799999999999999
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.595
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.77
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.845
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.87
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.7463564404006293
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.7106666666666667
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.7060596028154851
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: MaxSim_accuracy@1
value: 0.7551020408163265
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.9795918367346939
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 1.0
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 1.0
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.7551020408163265
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.6666666666666666
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.6285714285714286
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.5102040816326531
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.0512448639546046
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.13492382759077773
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.21028780790933668
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.3208107900431349
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.5860354215626813
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.8656462585034013
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.42050847387780316
name: Maxsim Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: MaxSim_accuracy@1
value: 0.5657770800627943
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.749199372056515
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.7969230769230768
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.8538461538461538
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.5657770800627943
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.3517948717948718
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.2698901098901099
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.18401569858712713
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.3348225068251237
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.5006355890241948
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.5543089017403102
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.6236387391737371
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.5977544091084623
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.666178527821385
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.5211019201497261
name: Maxsim Map@100
---
# PyLate model based on jhu-clsp/ettin-encoder-17m
This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [jhu-clsp/ettin-encoder-17m](https://huggingface.co/jhu-clsp/ettin-encoder-17m) on the [train](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton) dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
## Model Details
### Model Description
- **Model Type:** PyLate model
- **Base model:** [jhu-clsp/ettin-encoder-17m](https://huggingface.co/jhu-clsp/ettin-encoder-17m) <!-- at revision 987607455c61e7a5bbc85f7758e0512ea6d0ae4c -->
- **Document Length:** 300 tokens
- **Query Length:** 32 tokens
- **Output Dimensionality:** 128 dimensions
- **Similarity Function:** MaxSim
- **Training Dataset:**
- [train](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/)
- **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate)
- **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate)
### Full Model Architecture
```
ColBERT(
(0): Transformer({'max_seq_length': 299, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Dense({'in_features': 256, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
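As a quick sanity check of the output dimensionality listed above, the sketch below encodes a query and prints the shape of its token-level embeddings. It assumes PyLate is installed (see the Usage section) and uses a placeholder string for this model's Hugging Face id; substitute your own path or repo id.
```python
from pylate import models

# "path-or-id-of-this-model" is a placeholder for this model's repo id or a local path
model = models.ColBERT(model_name_or_path="path-or-id-of-this-model")

# Queries are encoded with a 32-token budget, documents with 300 tokens (see above)
embeddings = model.encode(["What is ColBERT?"], is_query=True)

# Each input becomes a matrix of token vectors, each of dimensionality 128
print(embeddings[0].shape)  # (number of query tokens, 128)
```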
## Usage
First install the PyLate library:
```bash
pip install -U pylate
```
### Retrieval
PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.
#### Indexing documents
First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve
# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path=pylate_model_id,  # placeholder: set to this model's Hugging Face repo id or a local path
)
# Step 2: Initialize the Voyager index
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
override=True, # This overwrites the existing index if any
)
# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]
documents_embeddings = model.encode(
documents,
batch_size=32,
is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries
show_progress_bar=True,
)
# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
documents_ids=documents_ids,
documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
)
```
#### Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries.
To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries, and then retrieve the top-k documents to get the ids and relevance scores of the top matches:
```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)
# Step 2: Encode the queries
queries_embeddings = model.encode(
["query for document 3", "query for document 1"],
batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
show_progress_bar=True,
)
# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
queries_embeddings=queries_embeddings,
k=10, # Retrieve the top 10 matches for each query
)
```
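Continuing from the snippet above, the results can be inspected as follows. This is a minimal sketch that assumes `retrieve` returns, for each query, a list of `{"id", "score"}` entries sorted by decreasing relevance:
```python
# Iterate over the per-query result lists returned by the retriever
for query_idx, query_results in enumerate(scores):
    print(f"Query {query_idx}:")
    for result in query_results:
        # Each entry is assumed to expose the document id and its MaxSim score
        print(f"  document {result['id']}: {result['score']:.4f}")
```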
### Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the `rank` function and pass the queries and documents to rerank:
```python
from pylate import rank, models
queries = [
"query A",
"query B",
]
documents = [
["document A", "document B"],
["document 1", "document C", "document B"],
]
documents_ids = [
[1, 2],
[1, 3, 2],
]
model = models.ColBERT(
model_name_or_path=pylate_model_id,
)
queries_embeddings = model.encode(
queries,
is_query=True,
)
documents_embeddings = model.encode(
documents,
is_query=False,
)
reranked_documents = rank.rerank(
documents_ids=documents_ids,
queries_embeddings=queries_embeddings,
documents_embeddings=documents_embeddings,
)
```
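Continuing from the snippet above, the reranked output can be consumed as follows. This is a minimal sketch that assumes `rerank` returns one list per query of `{"id", "score"}` entries ordered from most to least relevant:
```python
# Pair each query with its reranked candidate list and report the top hit
for query, ranking in zip(queries, reranked_documents):
    best = ranking[0]
    print(f"{query!r} -> best document id {best['id']} (score {best['score']:.4f})")
```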
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Py Late Information Retrieval
* Dataset: `['NanoClimateFEVER', 'NanoDBPedia', 'NanoFEVER', 'NanoFiQA2018', 'NanoHotpotQA', 'NanoMSMARCO', 'NanoNFCorpus', 'NanoNQ', 'NanoQuoraRetrieval', 'NanoSCIDOCS', 'NanoArguAna', 'NanoSciFact', 'NanoTouche2020']`
* Evaluated with <code>pylate.evaluation.pylate_information_retrieval_evaluator.PyLateInformationRetrievalEvaluator</code>
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| MaxSim_accuracy@1 | 0.24 | 0.7 | 0.88 | 0.42 | 0.88 | 0.52 | 0.44 | 0.52 | 0.84 | 0.4 | 0.14 | 0.62 | 0.7551 |
| MaxSim_accuracy@3 | 0.38 | 0.86 | 0.94 | 0.6 | 1.0 | 0.66 | 0.54 | 0.78 | 0.98 | 0.66 | 0.56 | 0.8 | 0.9796 |
| MaxSim_accuracy@5 | 0.5 | 0.9 | 0.96 | 0.66 | 1.0 | 0.7 | 0.64 | 0.82 | 1.0 | 0.68 | 0.64 | 0.86 | 1.0 |
| MaxSim_accuracy@10 | 0.7 | 0.94 | 0.98 | 0.74 | 1.0 | 0.8 | 0.66 | 0.86 | 1.0 | 0.78 | 0.76 | 0.88 | 1.0 |
| MaxSim_precision@1 | 0.24 | 0.7 | 0.88 | 0.42 | 0.88 | 0.52 | 0.44 | 0.52 | 0.84 | 0.4 | 0.14 | 0.62 | 0.7551 |
| MaxSim_precision@3 | 0.1533 | 0.6133 | 0.3267 | 0.2667 | 0.5533 | 0.22 | 0.3467 | 0.26 | 0.3867 | 0.3133 | 0.1867 | 0.28 | 0.6667 |
| MaxSim_precision@5 | 0.116 | 0.556 | 0.208 | 0.212 | 0.344 | 0.14 | 0.332 | 0.168 | 0.244 | 0.244 | 0.128 | 0.188 | 0.6286 |
| MaxSim_precision@10 | 0.092 | 0.48 | 0.108 | 0.132 | 0.18 | 0.08 | 0.26 | 0.09 | 0.124 | 0.162 | 0.076 | 0.098 | 0.5102 |
| MaxSim_recall@1 | 0.1283 | 0.0718 | 0.8167 | 0.2287 | 0.44 | 0.52 | 0.0433 | 0.49 | 0.744 | 0.0837 | 0.14 | 0.595 | 0.0512 |
| MaxSim_recall@3 | 0.21 | 0.165 | 0.8933 | 0.3658 | 0.83 | 0.66 | 0.0768 | 0.72 | 0.9287 | 0.1937 | 0.56 | 0.77 | 0.1349 |
| MaxSim_recall@5 | 0.2533 | 0.2194 | 0.93 | 0.4511 | 0.86 | 0.7 | 0.1178 | 0.77 | 0.9593 | 0.2497 | 0.64 | 0.845 | 0.2103 |
| MaxSim_recall@10 | 0.3667 | 0.3297 | 0.96 | 0.5565 | 0.9 | 0.8 | 0.1403 | 0.81 | 0.9627 | 0.3307 | 0.76 | 0.87 | 0.3208 |
| **MaxSim_ndcg@10** | **0.2876** | **0.5814** | **0.905** | **0.4612** | **0.8581** | **0.6531** | **0.3296** | **0.6661** | **0.9075** | **0.3311** | **0.4576** | **0.7464** | **0.586** |
| MaxSim_mrr@10 | 0.358 | 0.7882 | 0.9107 | 0.5269 | 0.9333 | 0.6071 | 0.5072 | 0.6417 | 0.9107 | 0.5407 | 0.3596 | 0.7107 | 0.8656 |
| MaxSim_map@100 | 0.2294 | 0.4672 | 0.8792 | 0.3852 | 0.8006 | 0.6169 | 0.1491 | 0.6128 | 0.8835 | 0.2555 | 0.3684 | 0.7061 | 0.4205 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with <code>pylate.evaluation.nano_beir_evaluator.NanoBEIREvaluator</code>
| Metric | Value |
|:--------------------|:-----------|
| MaxSim_accuracy@1 | 0.5658 |
| MaxSim_accuracy@3 | 0.7492 |
| MaxSim_accuracy@5 | 0.7969 |
| MaxSim_accuracy@10 | 0.8538 |
| MaxSim_precision@1 | 0.5658 |
| MaxSim_precision@3 | 0.3518 |
| MaxSim_precision@5 | 0.2699 |
| MaxSim_precision@10 | 0.184 |
| MaxSim_recall@1 | 0.3348 |
| MaxSim_recall@3 | 0.5006 |
| MaxSim_recall@5 | 0.5543 |
| MaxSim_recall@10 | 0.6236 |
| **MaxSim_ndcg@10** | **0.5978** |
| MaxSim_mrr@10 | 0.6662 |
| MaxSim_map@100 | 0.5211 |
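The aggregate numbers above can be reproduced approximately with the NanoBEIR evaluator named in this section. The following is a minimal sketch; the no-argument constructor (defaulting to all Nano datasets) and the placeholder model id are assumptions, not confirmed by this card:
```python
from pylate import evaluation, models

# "path-or-id-of-this-model" is a placeholder for this model's repo id or a local path
model = models.ColBERT(model_name_or_path="path-or-id-of-this-model")

# Assumption: the evaluator can be constructed with defaults and called on the model
evaluator = evaluation.NanoBEIREvaluator()
results = evaluator(model)
print(results)
```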
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### train
* Dataset: [train](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton) at [63f51a3](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton/tree/63f51a32e72d981b44be6cace953edbb7dd2c4b0)
* Size: 533,177 training samples
* Columns: <code>query_id</code>, <code>document_ids</code>, and <code>scores</code>
* Approximate statistics based on the first 1000 samples:
| | query_id | document_ids | scores |
  |:--------|:-------------|:-------------|:-------|
| type | int | list | list |
  | details | <ul><li>≈1000 distinct query ids, each appearing in ~0.10% of the sampled rows (one id, 340599, at ~0.20%)</li></ul> | <ul><li>size: 32 elements</li></ul> | <ul><li>size: 32 elements</li></ul> |
* Samples:
| query_id | document_ids | scores |
|:--------------------|:----------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------|
| <code>237784</code> | <code>[6366584, 4034101, 2325374, 6914618, 6042146, ...]</code> | <code>[0.9999999991784339, 0.42233632827946693, 0.5956354295491569, 0.12644415907455164, 0.6636713730105909, ...]</code> |
| <code>904294</code> | <code>[448408, 8743975, 49600, 7339401, 2714261, ...]</code> | <code>[0.9999999991841937, 0.877629062381539, 0.8330146583389045, 0.3116634796692611, 0.4633524534142185, ...]</code> |
| <code>412214</code> | <code>[1006306, 4454048, 1949661, 4895656, 675880, ...]</code> | <code>[0.9999999994734676, 0.38790621123137803, 0.3747429039573546, 0.2990538871317199, 0.38420403106055895, ...]</code> |
* Loss: <code>pylate.losses.distillation.Distillation</code>
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `bf16`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
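These values come from the auto-generated trainer configuration. As a rough orientation only, a knowledge-distillation run with these settings would look something like the sketch below (this is not the exact training script: the id-based dataset most likely has to be joined with the query/document texts first, e.g. with `pylate.utils.KDProcessing`, and argument names may differ slightly across library versions):

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from pylate import losses, models, utils

# Distillation data: a query id, a list of document ids and the teacher scores per example
train = load_dataset("Speedsy/cleaned-ms-marco-bge-gemma-from-ligton", split="train")

# Start from the base encoder; the ColBERT head projects token embeddings to 128 dimensions
model = models.ColBERT(model_name_or_path="jhu-clsp/ettin-encoder-17m")

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    num_train_epochs=1,              # matches the card
    per_device_train_batch_size=16,  # matches the card
    learning_rate=3e-5,              # matches the card
    bf16=True,                       # matches the card
    eval_strategy="steps",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train,
    loss=losses.Distillation(model=model),
    data_collator=utils.ColBERTCollator(tokenize_fn=model.tokenize),
)
trainer.train()
```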
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | NanoClimateFEVER_MaxSim_ndcg@10 | NanoDBPedia_MaxSim_ndcg@10 | NanoFEVER_MaxSim_ndcg@10 | NanoFiQA2018_MaxSim_ndcg@10 | NanoHotpotQA_MaxSim_ndcg@10 | NanoMSMARCO_MaxSim_ndcg@10 | NanoNFCorpus_MaxSim_ndcg@10 | NanoNQ_MaxSim_ndcg@10 | NanoQuoraRetrieval_MaxSim_ndcg@10 | NanoSCIDOCS_MaxSim_ndcg@10 | NanoArguAna_MaxSim_ndcg@10 | NanoSciFact_MaxSim_ndcg@10 | NanoTouche2020_MaxSim_ndcg@10 | NanoBEIR_mean_MaxSim_ndcg@10 |
|:------:|:-----:|:-------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0.0030 | 100 | 0.0383 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0060 | 200 | 0.0328 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0090 | 300 | 0.0319 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0120 | 400 | 0.0313 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0150 | 500 | 0.0294 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0180 | 600 | 0.0265 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0210 | 700 | 0.026 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0240 | 800 | 0.0251 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0270 | 900 | 0.0242 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0300 | 1000 | 0.0245 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0330 | 1100 | 0.0232 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0360 | 1200 | 0.0236 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0390 | 1300 | 0.0231 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0420 | 1400 | 0.0227 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0450 | 1500 | 0.0225 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0480 | 1600 | 0.0222 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0510 | 1700 | 0.0218 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0540 | 1800 | 0.022 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0570 | 1900 | 0.0213 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0600 | 2000 | 0.0214 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0630 | 2100 | 0.0214 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0660 | 2200 | 0.0209 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0690 | 2300 | 0.0204 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0720 | 2400 | 0.0201 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0750 | 2500 | 0.02 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0780 | 2600 | 0.0207 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0810 | 2700 | 0.0199 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0840 | 2800 | 0.0198 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0870 | 2900 | 0.0196 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0900 | 3000 | 0.0197 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0930 | 3100 | 0.0194 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0960 | 3200 | 0.0193 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0990 | 3300 | 0.019 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1020 | 3400 | 0.0187 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1050 | 3500 | 0.0187 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1080 | 3600 | 0.0187 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1110 | 3700 | 0.0183 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1140 | 3800 | 0.0185 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1170 | 3900 | 0.0186 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1200 | 4000 | 0.0188 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1230 | 4100 | 0.019 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1260 | 4200 | 0.018 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1290 | 4300 | 0.0183 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1320 | 4400 | 0.0178 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1350 | 4500 | 0.018 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1380 | 4600 | 0.0175 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1410 | 4700 | 0.0172 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1440 | 4800 | 0.0174 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1470 | 4900 | 0.0176 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1500 | 5000 | 0.017 | 0.2687 | 0.5353 | 0.8792 | 0.4640 | 0.8518 | 0.6460 | 0.3198 | 0.6059 | 0.9229 | 0.3192 | 0.3772 | 0.7166 | 0.5900 | 0.5767 |
| 0.1530 | 5100 | 0.0173 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1560 | 5200 | 0.0175 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1590 | 5300 | 0.0173 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1620 | 5400 | 0.0168 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1650 | 5500 | 0.0164 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1680 | 5600 | 0.0171 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1710 | 5700 | 0.0169 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1740 | 5800 | 0.0166 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1770 | 5900 | 0.0169 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1801 | 6000 | 0.0166 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1831 | 6100 | 0.0164 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1861 | 6200 | 0.0165 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1891 | 6300 | 0.0162 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1921 | 6400 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1951 | 6500 | 0.0163 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1981 | 6600 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2011 | 6700 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2041 | 6800 | 0.0159 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2071 | 6900 | 0.0166 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2101 | 7000 | 0.0158 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2131 | 7100 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2161 | 7200 | 0.0154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2191 | 7300 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2221 | 7400 | 0.0155 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2251 | 7500 | 0.0154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2281 | 7600 | 0.0157 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2311 | 7700 | 0.0154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2341 | 7800 | 0.0153 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2371 | 7900 | 0.0155 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2401 | 8000 | 0.015 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2431 | 8100 | 0.0155 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2461 | 8200 | 0.0153 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2491 | 8300 | 0.015 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2521 | 8400 | 0.0149 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2551 | 8500 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2581 | 8600 | 0.015 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2611 | 8700 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2641 | 8800 | 0.0152 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2671 | 8900 | 0.0152 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2701 | 9000 | 0.0149 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2731 | 9100 | 0.0145 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2761 | 9200 | 0.0145 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2791 | 9300 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2821 | 9400 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2851 | 9500 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2881 | 9600 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2911 | 9700 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2941 | 9800 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2971 | 9900 | 0.0146 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3001 | 10000 | 0.0144 | 0.2775 | 0.5458 | 0.8872 | 0.4447 | 0.8686 | 0.6438 | 0.3179 | 0.6183 | 0.9151 | 0.3025 | 0.4000 | 0.7288 | 0.5733 | 0.5787 |
| 0.3031 | 10100 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3061 | 10200 | 0.0141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3091 | 10300 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3121 | 10400 | 0.0146 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3151 | 10500 | 0.0143 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3181 | 10600 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3211 | 10700 | 0.0141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3241 | 10800 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3271 | 10900 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3301 | 11000 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3331 | 11100 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3361 | 11200 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3391 | 11300 | 0.0143 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3421 | 11400 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3451 | 11500 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3481 | 11600 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3511 | 11700 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3541 | 11800 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3571 | 11900 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3601 | 12000 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3631 | 12100 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3661 | 12200 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3691 | 12300 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3721 | 12400 | 0.0141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3751 | 12500 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3781 | 12600 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3811 | 12700 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3841 | 12800 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3871 | 12900 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3901 | 13000 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3931 | 13100 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3961 | 13200 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3991 | 13300 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4021 | 13400 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4051 | 13500 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4081 | 13600 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4111 | 13700 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4141 | 13800 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4171 | 13900 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4201 | 14000 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4231 | 14100 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4261 | 14200 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4291 | 14300 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4321 | 14400 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4351 | 14500 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4381 | 14600 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4411 | 14700 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4441 | 14800 | 0.0134 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4471 | 14900 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4501 | 15000 | 0.0136 | 0.2846 | 0.5781 | 0.8899 | 0.4652 | 0.8629 | 0.6491 | 0.3255 | 0.6303 | 0.9427 | 0.3168 | 0.4224 | 0.7354 | 0.5816 | 0.5911 |
| 0.4531 | 15100 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4561 | 15200 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4591 | 15300 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4621 | 15400 | 0.0129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4651 | 15500 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4681 | 15600 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4711 | 15700 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4741 | 15800 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4771 | 15900 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4801 | 16000 | 0.0129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4831 | 16100 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4861 | 16200 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4891 | 16300 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4921 | 16400 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4951 | 16500 | 0.0126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4981 | 16600 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5011 | 16700 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5041 | 16800 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5071 | 16900 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5101 | 17000 | 0.0129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5131 | 17100 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5161 | 17200 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5191 | 17300 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5221 | 17400 | 0.0126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5251 | 17500 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5281 | 17600 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5311 | 17700 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5341 | 17800 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5372 | 17900 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5402 | 18000 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5432 | 18100 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5462 | 18200 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5492 | 18300 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5522 | 18400 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5552 | 18500 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5582 | 18600 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5612 | 18700 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5642 | 18800 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5672 | 18900 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5702 | 19000 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5732 | 19100 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5762 | 19200 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5792 | 19300 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5822 | 19400 | 0.0126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5852 | 19500 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5882 | 19600 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5912 | 19700 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5942 | 19800 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5972 | 19900 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6002 | 20000 | 0.0124 | 0.2803 | 0.5772 | 0.8907 | 0.4668 | 0.8676 | 0.6476 | 0.3364 | 0.6633 | 0.9129 | 0.3240 | 0.4412 | 0.7460 | 0.5781 | 0.5948 |
| 0.6032 | 20100 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6062 | 20200 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6092 | 20300 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6122 | 20400 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6152 | 20500 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6182 | 20600 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6212 | 20700 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6242 | 20800 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6272 | 20900 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6302 | 21000 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6332 | 21100 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6362 | 21200 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6392 | 21300 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6422 | 21400 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6452 | 21500 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6482 | 21600 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6512 | 21700 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6542 | 21800 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6572 | 21900 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6602 | 22000 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6632 | 22100 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6662 | 22200 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6692 | 22300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6722 | 22400 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6752 | 22500 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6782 | 22600 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6812 | 22700 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6842 | 22800 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6872 | 22900 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6902 | 23000 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6932 | 23100 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6962 | 23200 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6992 | 23300 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7022 | 23400 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7052 | 23500 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7082 | 23600 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7112 | 23700 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7142 | 23800 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7172 | 23900 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7202 | 24000 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7232 | 24100 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7262 | 24200 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7292 | 24300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7322 | 24400 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7352 | 24500 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7382 | 24600 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7412 | 24700 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7442 | 24800 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7472 | 24900 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7502 | 25000 | 0.012 | 0.2905 | 0.5880 | 0.9048 | 0.4640 | 0.8575 | 0.6645 | 0.3255 | 0.6545 | 0.9043 | 0.3302 | 0.4645 | 0.7485 | 0.5655 | 0.5971 |
| 0.7532 | 25100 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7562 | 25200 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7592 | 25300 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7622 | 25400 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7652 | 25500 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7682 | 25600 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7712 | 25700 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7742 | 25800 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7772 | 25900 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7802 | 26000 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7832 | 26100 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7862 | 26200 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7892 | 26300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7922 | 26400 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7952 | 26500 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7982 | 26600 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8012 | 26700 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8042 | 26800 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8072 | 26900 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8102 | 27000 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8132 | 27100 | 0.0113 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8162 | 27200 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8192 | 27300 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8222 | 27400 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8252 | 27500 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8282 | 27600 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8312 | 27700 | 0.0113 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8342 | 27800 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8372 | 27900 | 0.0112 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8402 | 28000 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8432 | 28100 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8462 | 28200 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8492 | 28300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8522 | 28400 | 0.0113 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8552 | 28500 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8582 | 28600 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8612 | 28700 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8642 | 28800 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8672 | 28900 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8702 | 29000 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8732 | 29100 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8762 | 29200 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8792 | 29300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8822 | 29400 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8852 | 29500 | 0.0112 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8882 | 29600 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8912 | 29700 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8943 | 29800 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8973 | 29900 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9003 | 30000 | 0.0113 | 0.2876 | 0.5814 | 0.9050 | 0.4612 | 0.8581 | 0.6531 | 0.3296 | 0.6661 | 0.9075 | 0.3311 | 0.4576 | 0.7464 | 0.5860 | 0.5978 |
| 0.9033 | 30100 | 0.0113 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9063 | 30200 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9093 | 30300 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9123 | 30400 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9153 | 30500 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9183 | 30600 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9213 | 30700 | 0.0113 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9243 | 30800 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9273 | 30900 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9303 | 31000 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9333 | 31100 | 0.0113 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9363 | 31200 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9393 | 31300 | 0.0112 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9423 | 31400 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9453 | 31500 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9483 | 31600 | 0.0113 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9513 | 31700 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9543 | 31800 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9573 | 31900 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9603 | 32000 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9633 | 32100 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9663 | 32200 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9693 | 32300 | 0.0111 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9723 | 32400 | 0.0111 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9753 | 32500 | 0.0111 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9783 | 32600 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9813 | 32700 | 0.0113 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9843 | 32800 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9873 | 32900 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9903 | 33000 | 0.0111 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9933 | 33100 | 0.0112 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9963 | 33200 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9993 | 33300 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
</details>
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.0.2
- PyLate: 1.2.0
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
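If you want to approximate this environment, the pinned releases above map to an install along these lines (PyTorch 2.6.0 with CUDA 12.4 is normally installed separately from the PyTorch wheel index):

```bash
pip install "sentence-transformers==4.0.2" "pylate==1.2.0" "transformers==4.48.2" \
    "accelerate==1.9.0" "datasets==4.0.0" "tokenizers==0.21.4"
```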
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084"
}
```
#### PyLate
```bibtex
@misc{PyLate,
title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
author={Chaffin, Antoine and Sourty, Raphaël},
url={https://github.com/lightonai/pylate},
year={2024}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Speedsy/ettin-cleaned-data-1-epoch-30000 | Speedsy | 2025-08-13T14:06:23Z | 0 | 0 | PyLate | ["PyLate", "safetensors", "modernbert", "ColBERT", "sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:533177", "loss:Distillation", "dataset:Speedsy/cleaned-ms-marco-bge-gemma-from-ligton", "arxiv:1908.10084", "base_model:jhu-clsp/ettin-encoder-17m", "base_model:finetune:jhu-clsp/ettin-encoder-17m", "model-index", "text-embeddings-inference", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-08-13T14:06:14Z |
---
tags:
- ColBERT
- PyLate
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:533177
- loss:Distillation
base_model: jhu-clsp/ettin-encoder-17m
datasets:
- Speedsy/cleaned-ms-marco-bge-gemma-from-ligton
pipeline_tag: sentence-similarity
library_name: PyLate
metrics:
- MaxSim_accuracy@1
- MaxSim_accuracy@3
- MaxSim_accuracy@5
- MaxSim_accuracy@10
- MaxSim_precision@1
- MaxSim_precision@3
- MaxSim_precision@5
- MaxSim_precision@10
- MaxSim_recall@1
- MaxSim_recall@3
- MaxSim_recall@5
- MaxSim_recall@10
- MaxSim_ndcg@10
- MaxSim_mrr@10
- MaxSim_map@100
model-index:
- name: PyLate model based on jhu-clsp/ettin-encoder-17m
results:
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: MaxSim_accuracy@1
value: 0.24
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.38
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.5
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.7
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.24
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.15333333333333332
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.11599999999999999
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.092
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.12833333333333333
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.21
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.2533333333333333
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.36666666666666664
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.28763494301317366
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.3579841269841269
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.22941676804604197
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: MaxSim_accuracy@1
value: 0.7
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.86
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.9
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.94
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.7
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.6133333333333334
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.556
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.48
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.07180505985706782
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.16504755248565225
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.2194302820279554
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.32972451635318606
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.5813676616013099
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.7881666666666667
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.4671775883716682
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: MaxSim_accuracy@1
value: 0.88
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.94
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.96
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.98
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.88
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.32666666666666666
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.20799999999999996
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.10799999999999997
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.8166666666666668
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.8933333333333333
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.93
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.96
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.9050308205730978
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.9106666666666665
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.8792272727272727
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: MaxSim_accuracy@1
value: 0.42
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.6
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.66
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.74
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.42
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.26666666666666666
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.212
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.132
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.2286904761904762
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.36584920634920637
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.4511190476190476
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.5564523809523809
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.4611987813833444
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.5268571428571428
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.38520628807075186
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: MaxSim_accuracy@1
value: 0.88
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 1.0
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 1.0
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 1.0
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.88
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.5533333333333332
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.344
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.17999999999999997
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.44
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.83
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.86
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.9
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.858075938741974
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.9333333333333332
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.800559405305322
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: MaxSim_accuracy@1
value: 0.52
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.66
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.7
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.8
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.52
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.22
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.14
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.08
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.52
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.66
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.7
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.8
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.6531074122045695
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.6071031746031745
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.6169218803486538
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: MaxSim_accuracy@1
value: 0.44
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.54
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.64
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.66
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.44
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.3466666666666666
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.332
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.26
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.04328552205779273
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.07677540422223056
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.1178452517343604
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.14031592190988035
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.32964183008765374
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.5071904761904762
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.14909457042487737
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: MaxSim_accuracy@1
value: 0.52
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.78
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.82
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.86
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.52
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.26
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.16799999999999998
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.09
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.49
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.72
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.77
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.81
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.6661035501816893
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.6417460317460317
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.6127876420077506
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: MaxSim_accuracy@1
value: 0.84
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.98
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 1.0
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 1.0
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.84
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.38666666666666655
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.244
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.12399999999999999
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.7440000000000001
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.9286666666666668
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.9593333333333334
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.9626666666666668
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.9075252606458076
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.9106666666666667
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.8834929814655215
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: MaxSim_accuracy@1
value: 0.4
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.66
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.68
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.78
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.4
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.31333333333333335
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.244
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.162
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.08366666666666667
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.19366666666666668
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.24966666666666662
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.3306666666666666
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.3310802884457278
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.5407222222222222
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.25548127218322214
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: MaxSim_accuracy@1
value: 0.14
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.56
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.64
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.76
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.14
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.18666666666666668
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.128
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.07600000000000001
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.14
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.56
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.64
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.76
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.457648969568352
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.3595714285714285
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.36839121630206756
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: MaxSim_accuracy@1
value: 0.62
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.8
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.86
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.88
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.62
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.27999999999999997
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.18799999999999997
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.09799999999999999
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.595
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.77
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.845
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.87
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.7463564404006293
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.7106666666666667
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.7060596028154851
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: MaxSim_accuracy@1
value: 0.7551020408163265
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.9795918367346939
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 1.0
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 1.0
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.7551020408163265
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.6666666666666666
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.6285714285714286
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.5102040816326531
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.0512448639546046
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.13492382759077773
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.21028780790933668
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.3208107900431349
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.5860354215626813
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.8656462585034013
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.42050847387780316
name: Maxsim Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: MaxSim_accuracy@1
value: 0.5657770800627943
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.749199372056515
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.7969230769230768
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.8538461538461538
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.5657770800627943
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.3517948717948718
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.2698901098901099
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.18401569858712713
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.3348225068251237
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.5006355890241948
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.5543089017403102
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.6236387391737371
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.5977544091084623
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.666178527821385
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.5211019201497261
name: Maxsim Map@100
---
# PyLate model based on jhu-clsp/ettin-encoder-17m
This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [jhu-clsp/ettin-encoder-17m](https://huggingface.co/jhu-clsp/ettin-encoder-17m) on the [train](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton) dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
## Model Details
### Model Description
- **Model Type:** PyLate model
- **Base model:** [jhu-clsp/ettin-encoder-17m](https://huggingface.co/jhu-clsp/ettin-encoder-17m) <!-- at revision 987607455c61e7a5bbc85f7758e0512ea6d0ae4c -->
- **Document Length:** 300 tokens
- **Query Length:** 32 tokens
- **Output Dimensionality:** 128 dimensions (per token vector)
- **Similarity Function:** MaxSim
- **Training Dataset:**
- [train](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/)
- **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate)
- **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate)
### Full Model Architecture
```
ColBERT(
(0): Transformer({'max_seq_length': 299, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Dense({'in_features': 256, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
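In other words, the Dense head projects the encoder's 256-dimensional token states down to 128 dimensions, so each text becomes a matrix with one 128-dimensional vector per token. A minimal sanity check (model id taken from this card's metadata):

```python
from pylate import models

model = models.ColBERT(model_name_or_path="Speedsy/ettin-cleaned-data-1-epoch-30000")

# Encode a single query; each token is mapped to its own 128-dimensional vector
embeddings = model.encode(["what is late interaction retrieval?"], is_query=True)
print(embeddings[0].shape)  # expected: (number_of_query_tokens, 128)
```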
## Usage
First install the PyLate library:
```bash
pip install -U pylate
```
### Retrieval
PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.
#### Indexing documents
First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve
# Step 1: Load the ColBERT model
model = models.ColBERT(
model_name_or_path=pylate_model_id,
)
# Step 2: Initialize the Voyager index
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
override=True, # This overwrites the existing index if any
)
# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]
documents_embeddings = model.encode(
documents,
batch_size=32,
is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries
show_progress_bar=True,
)
# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
documents_ids=documents_ids,
documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
)
```
#### Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries.
To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries and then retrieve the top-k documents to get the top matches ids and relevance scores:
```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)
# Step 2: Encode the queries
queries_embeddings = model.encode(
["query for document 3", "query for document 1"],
batch_size=32,
is_query=True, # Ensure that it is set to True to indicate that these are queries
show_progress_bar=True,
)
# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
queries_embeddings=queries_embeddings,
k=10, # Retrieve the top 10 matches for each query
)
```
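Each entry of `scores` corresponds to one query and holds its ranked matches. A small sketch for iterating over the results; the per-match dictionary keys (`"id"`, `"score"`) are assumed from the PyLate documentation:
```python
for query_index, query_results in enumerate(scores):
    for match in query_results:
        print(query_index, match["id"], match["score"])
```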
### Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the `rank` function and pass the queries and documents to rerank:
```python
from pylate import rank, models
queries = [
"query A",
"query B",
]
documents = [
["document A", "document B"],
["document 1", "document C", "document B"],
]
documents_ids = [
[1, 2],
[1, 3, 2],
]
model = models.ColBERT(
model_name_or_path=pylate_model_id,
)
queries_embeddings = model.encode(
queries,
is_query=True,
)
documents_embeddings = model.encode(
documents,
is_query=False,
)
reranked_documents = rank.rerank(
documents_ids=documents_ids,
queries_embeddings=queries_embeddings,
documents_embeddings=documents_embeddings,
)
```
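The reranked output mirrors the input structure, with one list of results per query reordered by MaxSim relevance; as above, the dictionary keys used below are assumed from the PyLate documentation:
```python
for query_results in reranked_documents:
    for document in query_results:
        print(document["id"], document["score"])
```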
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### PyLate Information Retrieval
* Dataset: `['NanoClimateFEVER', 'NanoDBPedia', 'NanoFEVER', 'NanoFiQA2018', 'NanoHotpotQA', 'NanoMSMARCO', 'NanoNFCorpus', 'NanoNQ', 'NanoQuoraRetrieval', 'NanoSCIDOCS', 'NanoArguAna', 'NanoSciFact', 'NanoTouche2020']`
* Evaluated with <code>pylate.evaluation.pylate_information_retrieval_evaluator.PyLateInformationRetrievalEvaluator</code>
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| MaxSim_accuracy@1 | 0.24 | 0.7 | 0.88 | 0.42 | 0.88 | 0.52 | 0.44 | 0.52 | 0.84 | 0.4 | 0.14 | 0.62 | 0.7551 |
| MaxSim_accuracy@3 | 0.38 | 0.86 | 0.94 | 0.6 | 1.0 | 0.66 | 0.54 | 0.78 | 0.98 | 0.66 | 0.56 | 0.8 | 0.9796 |
| MaxSim_accuracy@5 | 0.5 | 0.9 | 0.96 | 0.66 | 1.0 | 0.7 | 0.64 | 0.82 | 1.0 | 0.68 | 0.64 | 0.86 | 1.0 |
| MaxSim_accuracy@10 | 0.7 | 0.94 | 0.98 | 0.74 | 1.0 | 0.8 | 0.66 | 0.86 | 1.0 | 0.78 | 0.76 | 0.88 | 1.0 |
| MaxSim_precision@1 | 0.24 | 0.7 | 0.88 | 0.42 | 0.88 | 0.52 | 0.44 | 0.52 | 0.84 | 0.4 | 0.14 | 0.62 | 0.7551 |
| MaxSim_precision@3 | 0.1533 | 0.6133 | 0.3267 | 0.2667 | 0.5533 | 0.22 | 0.3467 | 0.26 | 0.3867 | 0.3133 | 0.1867 | 0.28 | 0.6667 |
| MaxSim_precision@5 | 0.116 | 0.556 | 0.208 | 0.212 | 0.344 | 0.14 | 0.332 | 0.168 | 0.244 | 0.244 | 0.128 | 0.188 | 0.6286 |
| MaxSim_precision@10 | 0.092 | 0.48 | 0.108 | 0.132 | 0.18 | 0.08 | 0.26 | 0.09 | 0.124 | 0.162 | 0.076 | 0.098 | 0.5102 |
| MaxSim_recall@1 | 0.1283 | 0.0718 | 0.8167 | 0.2287 | 0.44 | 0.52 | 0.0433 | 0.49 | 0.744 | 0.0837 | 0.14 | 0.595 | 0.0512 |
| MaxSim_recall@3 | 0.21 | 0.165 | 0.8933 | 0.3658 | 0.83 | 0.66 | 0.0768 | 0.72 | 0.9287 | 0.1937 | 0.56 | 0.77 | 0.1349 |
| MaxSim_recall@5 | 0.2533 | 0.2194 | 0.93 | 0.4511 | 0.86 | 0.7 | 0.1178 | 0.77 | 0.9593 | 0.2497 | 0.64 | 0.845 | 0.2103 |
| MaxSim_recall@10 | 0.3667 | 0.3297 | 0.96 | 0.5565 | 0.9 | 0.8 | 0.1403 | 0.81 | 0.9627 | 0.3307 | 0.76 | 0.87 | 0.3208 |
| **MaxSim_ndcg@10** | **0.2876** | **0.5814** | **0.905** | **0.4612** | **0.8581** | **0.6531** | **0.3296** | **0.6661** | **0.9075** | **0.3311** | **0.4576** | **0.7464** | **0.586** |
| MaxSim_mrr@10 | 0.358 | 0.7882 | 0.9107 | 0.5269 | 0.9333 | 0.6071 | 0.5072 | 0.6417 | 0.9107 | 0.5407 | 0.3596 | 0.7107 | 0.8656 |
| MaxSim_map@100 | 0.2294 | 0.4672 | 0.8792 | 0.3852 | 0.8006 | 0.6169 | 0.1491 | 0.6128 | 0.8835 | 0.2555 | 0.3684 | 0.7061 | 0.4205 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with <code>pylate.evaluation.nano_beir_evaluator.NanoBEIREvaluator</code>
| Metric | Value |
|:--------------------|:-----------|
| MaxSim_accuracy@1 | 0.5658 |
| MaxSim_accuracy@3 | 0.7492 |
| MaxSim_accuracy@5 | 0.7969 |
| MaxSim_accuracy@10 | 0.8538 |
| MaxSim_precision@1 | 0.5658 |
| MaxSim_precision@3 | 0.3518 |
| MaxSim_precision@5 | 0.2699 |
| MaxSim_precision@10 | 0.184 |
| MaxSim_recall@1 | 0.3348 |
| MaxSim_recall@3 | 0.5006 |
| MaxSim_recall@5 | 0.5543 |
| MaxSim_recall@10 | 0.6236 |
| **MaxSim_ndcg@10** | **0.5978** |
| MaxSim_mrr@10 | 0.6662 |
| MaxSim_map@100 | 0.5211 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### train
* Dataset: [train](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton) at [63f51a3](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton/tree/63f51a32e72d981b44be6cace953edbb7dd2c4b0)
* Size: 533,177 training samples
* Columns: <code>query_id</code>, <code>document_ids</code>, and <code>scores</code>
* Approximate statistics based on the first 1000 samples:
| | query_id | document_ids | scores |
|:--------|:---------|:-------------|:-------|
| type | int | list | list |
| details | <ul><li>nearly all of the first 1000 query ids are distinct (~0.10% each; 340599 ~0.20%)</li></ul> | <ul><li>size: 32 elements</li></ul> | <ul><li>size: 32 elements</li></ul> |
* Samples:
| query_id | document_ids | scores |
|:--------------------|:----------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------|
| <code>237784</code> | <code>[6366584, 4034101, 2325374, 6914618, 6042146, ...]</code> | <code>[0.9999999991784339, 0.42233632827946693, 0.5956354295491569, 0.12644415907455164, 0.6636713730105909, ...]</code> |
| <code>904294</code> | <code>[448408, 8743975, 49600, 7339401, 2714261, ...]</code> | <code>[0.9999999991841937, 0.877629062381539, 0.8330146583389045, 0.3116634796692611, 0.4633524534142185, ...]</code> |
| <code>412214</code> | <code>[1006306, 4454048, 1949661, 4895656, 675880, ...]</code> | <code>[0.9999999994734676, 0.38790621123137803, 0.3747429039573546, 0.2990538871317199, 0.38420403106055895, ...]</code> |
* Loss: <code>pylate.losses.distillation.Distillation</code>
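Each training sample pairs a query id with 32 candidate document ids and their teacher relevance scores, which the distillation loss learns to reproduce. A minimal sketch for inspecting the data with the `datasets` library (the split name is an assumption):
```python
from datasets import load_dataset

train = load_dataset("Speedsy/cleaned-ms-marco-bge-gemma-from-ligton", split="train")
sample = train[0]
print(sample["query_id"], len(sample["document_ids"]), len(sample["scores"]))
```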
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `bf16`: True
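A condensed sketch of how these non-default values map onto a PyLate distillation setup with the Sentence Transformers trainer; the output directory is an assumption, and a full run additionally needs the query/document texts and a `SentenceTransformerTrainer`, which are omitted here:
```python
from pylate.losses.distillation import Distillation
from pylate import models
from sentence_transformers import SentenceTransformerTrainingArguments

model = models.ColBERT(model_name_or_path="jhu-clsp/ettin-encoder-17m")
loss = Distillation(model=model)  # the loss listed in the training dataset section above

args = SentenceTransformerTrainingArguments(
    output_dir="output",              # assumed; not stated in the card
    eval_strategy="steps",
    per_device_train_batch_size=16,
    learning_rate=3e-5,
    num_train_epochs=1,
    bf16=True,
)
```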
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
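For reference, the sketch below shows how a run with the non-default values above could be configured using PyLate's distillation loss. It is a minimal, hedged example rather than the exact training script: the base model path and dataset identifier are placeholders, and in practice the id-based triples (`query_id`, `document_ids`, `scores`) are first joined with their raw query and document texts before being passed to the trainer.
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from pylate import losses, models, utils

# Placeholders: the actual base encoder and distillation dataset are listed elsewhere in this card.
model = models.ColBERT(model_name_or_path="base-encoder-model")
train_dataset = load_dataset("distillation-triples", split="train")  # query_id / document_ids / scores

args = SentenceTransformerTrainingArguments(
    output_dir="output",
    eval_strategy="steps",            # requires an eval dataset or evaluator in practice
    per_device_train_batch_size=16,
    learning_rate=3e-5,
    num_train_epochs=1,
    bf16=True,                        # needs a bf16-capable GPU
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=losses.Distillation(model=model),               # distills the teacher scores
    data_collator=utils.ColBERTCollator(model.tokenize),
)
trainer.train()
```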
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | NanoClimateFEVER_MaxSim_ndcg@10 | NanoDBPedia_MaxSim_ndcg@10 | NanoFEVER_MaxSim_ndcg@10 | NanoFiQA2018_MaxSim_ndcg@10 | NanoHotpotQA_MaxSim_ndcg@10 | NanoMSMARCO_MaxSim_ndcg@10 | NanoNFCorpus_MaxSim_ndcg@10 | NanoNQ_MaxSim_ndcg@10 | NanoQuoraRetrieval_MaxSim_ndcg@10 | NanoSCIDOCS_MaxSim_ndcg@10 | NanoArguAna_MaxSim_ndcg@10 | NanoSciFact_MaxSim_ndcg@10 | NanoTouche2020_MaxSim_ndcg@10 | NanoBEIR_mean_MaxSim_ndcg@10 |
|:------:|:-----:|:-------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0.0030 | 100 | 0.0383 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0060 | 200 | 0.0328 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0090 | 300 | 0.0319 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0120 | 400 | 0.0313 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0150 | 500 | 0.0294 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0180 | 600 | 0.0265 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0210 | 700 | 0.026 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0240 | 800 | 0.0251 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0270 | 900 | 0.0242 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0300 | 1000 | 0.0245 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0330 | 1100 | 0.0232 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0360 | 1200 | 0.0236 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0390 | 1300 | 0.0231 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0420 | 1400 | 0.0227 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0450 | 1500 | 0.0225 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0480 | 1600 | 0.0222 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0510 | 1700 | 0.0218 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0540 | 1800 | 0.022 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0570 | 1900 | 0.0213 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0600 | 2000 | 0.0214 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0630 | 2100 | 0.0214 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0660 | 2200 | 0.0209 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0690 | 2300 | 0.0204 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0720 | 2400 | 0.0201 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0750 | 2500 | 0.02 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0780 | 2600 | 0.0207 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0810 | 2700 | 0.0199 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0840 | 2800 | 0.0198 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0870 | 2900 | 0.0196 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0900 | 3000 | 0.0197 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0930 | 3100 | 0.0194 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0960 | 3200 | 0.0193 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0990 | 3300 | 0.019 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1020 | 3400 | 0.0187 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1050 | 3500 | 0.0187 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1080 | 3600 | 0.0187 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1110 | 3700 | 0.0183 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1140 | 3800 | 0.0185 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1170 | 3900 | 0.0186 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1200 | 4000 | 0.0188 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1230 | 4100 | 0.019 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1260 | 4200 | 0.018 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1290 | 4300 | 0.0183 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1320 | 4400 | 0.0178 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1350 | 4500 | 0.018 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1380 | 4600 | 0.0175 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1410 | 4700 | 0.0172 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1440 | 4800 | 0.0174 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1470 | 4900 | 0.0176 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1500 | 5000 | 0.017 | 0.2687 | 0.5353 | 0.8792 | 0.4640 | 0.8518 | 0.6460 | 0.3198 | 0.6059 | 0.9229 | 0.3192 | 0.3772 | 0.7166 | 0.5900 | 0.5767 |
| 0.1530 | 5100 | 0.0173 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1560 | 5200 | 0.0175 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1590 | 5300 | 0.0173 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1620 | 5400 | 0.0168 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1650 | 5500 | 0.0164 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1680 | 5600 | 0.0171 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1710 | 5700 | 0.0169 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1740 | 5800 | 0.0166 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1770 | 5900 | 0.0169 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1801 | 6000 | 0.0166 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1831 | 6100 | 0.0164 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1861 | 6200 | 0.0165 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1891 | 6300 | 0.0162 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1921 | 6400 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1951 | 6500 | 0.0163 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1981 | 6600 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2011 | 6700 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2041 | 6800 | 0.0159 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2071 | 6900 | 0.0166 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2101 | 7000 | 0.0158 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2131 | 7100 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2161 | 7200 | 0.0154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2191 | 7300 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2221 | 7400 | 0.0155 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2251 | 7500 | 0.0154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2281 | 7600 | 0.0157 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2311 | 7700 | 0.0154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2341 | 7800 | 0.0153 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2371 | 7900 | 0.0155 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2401 | 8000 | 0.015 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2431 | 8100 | 0.0155 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2461 | 8200 | 0.0153 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2491 | 8300 | 0.015 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2521 | 8400 | 0.0149 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2551 | 8500 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2581 | 8600 | 0.015 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2611 | 8700 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2641 | 8800 | 0.0152 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2671 | 8900 | 0.0152 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2701 | 9000 | 0.0149 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2731 | 9100 | 0.0145 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2761 | 9200 | 0.0145 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2791 | 9300 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2821 | 9400 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2851 | 9500 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2881 | 9600 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2911 | 9700 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2941 | 9800 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2971 | 9900 | 0.0146 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3001 | 10000 | 0.0144 | 0.2775 | 0.5458 | 0.8872 | 0.4447 | 0.8686 | 0.6438 | 0.3179 | 0.6183 | 0.9151 | 0.3025 | 0.4000 | 0.7288 | 0.5733 | 0.5787 |
| 0.3031 | 10100 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3061 | 10200 | 0.0141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3091 | 10300 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3121 | 10400 | 0.0146 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3151 | 10500 | 0.0143 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3181 | 10600 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3211 | 10700 | 0.0141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3241 | 10800 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3271 | 10900 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3301 | 11000 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3331 | 11100 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3361 | 11200 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3391 | 11300 | 0.0143 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3421 | 11400 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3451 | 11500 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3481 | 11600 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3511 | 11700 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3541 | 11800 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3571 | 11900 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3601 | 12000 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3631 | 12100 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3661 | 12200 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3691 | 12300 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3721 | 12400 | 0.0141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3751 | 12500 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3781 | 12600 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3811 | 12700 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3841 | 12800 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3871 | 12900 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3901 | 13000 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3931 | 13100 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3961 | 13200 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3991 | 13300 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4021 | 13400 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4051 | 13500 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4081 | 13600 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4111 | 13700 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4141 | 13800 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4171 | 13900 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4201 | 14000 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4231 | 14100 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4261 | 14200 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4291 | 14300 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4321 | 14400 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4351 | 14500 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4381 | 14600 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4411 | 14700 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4441 | 14800 | 0.0134 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4471 | 14900 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4501 | 15000 | 0.0136 | 0.2846 | 0.5781 | 0.8899 | 0.4652 | 0.8629 | 0.6491 | 0.3255 | 0.6303 | 0.9427 | 0.3168 | 0.4224 | 0.7354 | 0.5816 | 0.5911 |
| 0.4531 | 15100 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4561 | 15200 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4591 | 15300 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4621 | 15400 | 0.0129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4651 | 15500 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4681 | 15600 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4711 | 15700 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4741 | 15800 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4771 | 15900 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4801 | 16000 | 0.0129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4831 | 16100 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4861 | 16200 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4891 | 16300 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4921 | 16400 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4951 | 16500 | 0.0126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4981 | 16600 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5011 | 16700 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5041 | 16800 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5071 | 16900 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5101 | 17000 | 0.0129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5131 | 17100 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5161 | 17200 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5191 | 17300 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5221 | 17400 | 0.0126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5251 | 17500 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5281 | 17600 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5311 | 17700 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5341 | 17800 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5372 | 17900 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5402 | 18000 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5432 | 18100 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5462 | 18200 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5492 | 18300 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5522 | 18400 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5552 | 18500 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5582 | 18600 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5612 | 18700 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5642 | 18800 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5672 | 18900 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5702 | 19000 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5732 | 19100 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5762 | 19200 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5792 | 19300 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5822 | 19400 | 0.0126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5852 | 19500 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5882 | 19600 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5912 | 19700 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5942 | 19800 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5972 | 19900 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6002 | 20000 | 0.0124 | 0.2803 | 0.5772 | 0.8907 | 0.4668 | 0.8676 | 0.6476 | 0.3364 | 0.6633 | 0.9129 | 0.3240 | 0.4412 | 0.7460 | 0.5781 | 0.5948 |
| 0.6032 | 20100 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6062 | 20200 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6092 | 20300 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6122 | 20400 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6152 | 20500 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6182 | 20600 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6212 | 20700 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6242 | 20800 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6272 | 20900 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6302 | 21000 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6332 | 21100 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6362 | 21200 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6392 | 21300 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6422 | 21400 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6452 | 21500 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6482 | 21600 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6512 | 21700 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6542 | 21800 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6572 | 21900 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6602 | 22000 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6632 | 22100 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6662 | 22200 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6692 | 22300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6722 | 22400 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6752 | 22500 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6782 | 22600 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6812 | 22700 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6842 | 22800 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6872 | 22900 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6902 | 23000 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6932 | 23100 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6962 | 23200 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6992 | 23300 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7022 | 23400 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7052 | 23500 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7082 | 23600 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7112 | 23700 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7142 | 23800 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7172 | 23900 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7202 | 24000 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7232 | 24100 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7262 | 24200 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7292 | 24300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7322 | 24400 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7352 | 24500 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7382 | 24600 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7412 | 24700 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7442 | 24800 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7472 | 24900 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7502 | 25000 | 0.012 | 0.2905 | 0.5880 | 0.9048 | 0.4640 | 0.8575 | 0.6645 | 0.3255 | 0.6545 | 0.9043 | 0.3302 | 0.4645 | 0.7485 | 0.5655 | 0.5971 |
| 0.7532 | 25100 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7562 | 25200 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7592 | 25300 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7622 | 25400 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7652 | 25500 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7682 | 25600 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7712 | 25700 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7742 | 25800 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7772 | 25900 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7802 | 26000 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7832 | 26100 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7862 | 26200 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7892 | 26300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7922 | 26400 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7952 | 26500 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7982 | 26600 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8012 | 26700 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8042 | 26800 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8072 | 26900 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8102 | 27000 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8132 | 27100 | 0.0113 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8162 | 27200 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8192 | 27300 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8222 | 27400 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8252 | 27500 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8282 | 27600 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8312 | 27700 | 0.0113 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8342 | 27800 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8372 | 27900 | 0.0112 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8402 | 28000 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8432 | 28100 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8462 | 28200 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8492 | 28300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8522 | 28400 | 0.0113 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8552 | 28500 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8582 | 28600 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8612 | 28700 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8642 | 28800 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8672 | 28900 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8702 | 29000 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8732 | 29100 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8762 | 29200 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8792 | 29300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8822 | 29400 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8852 | 29500 | 0.0112 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8882 | 29600 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8912 | 29700 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8943 | 29800 | 0.0114 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8973 | 29900 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9003 | 30000 | 0.0113 | 0.2876 | 0.5814 | 0.9050 | 0.4612 | 0.8581 | 0.6531 | 0.3296 | 0.6661 | 0.9075 | 0.3311 | 0.4576 | 0.7464 | 0.5860 | 0.5978 |
</details>
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.0.2
- PyLate: 1.2.0
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084"
}
```
#### PyLate
```bibtex
@misc{PyLate,
title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
author={Chaffin, Antoine and Sourty, Raphaël},
url={https://github.com/lightonai/pylate},
year={2024}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Speedsy/ettin-cleaned-data-1-epoch-25000
|
Speedsy
| 2025-08-13T14:06:13Z | 0 | 0 |
PyLate
|
[
"PyLate",
"safetensors",
"modernbert",
"ColBERT",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:533177",
"loss:Distillation",
"dataset:Speedsy/cleaned-ms-marco-bge-gemma-from-ligton",
"arxiv:1908.10084",
"base_model:jhu-clsp/ettin-encoder-17m",
"base_model:finetune:jhu-clsp/ettin-encoder-17m",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-13T14:06:06Z |
---
tags:
- ColBERT
- PyLate
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:533177
- loss:Distillation
base_model: jhu-clsp/ettin-encoder-17m
datasets:
- Speedsy/cleaned-ms-marco-bge-gemma-from-ligton
pipeline_tag: sentence-similarity
library_name: PyLate
metrics:
- MaxSim_accuracy@1
- MaxSim_accuracy@3
- MaxSim_accuracy@5
- MaxSim_accuracy@10
- MaxSim_precision@1
- MaxSim_precision@3
- MaxSim_precision@5
- MaxSim_precision@10
- MaxSim_recall@1
- MaxSim_recall@3
- MaxSim_recall@5
- MaxSim_recall@10
- MaxSim_ndcg@10
- MaxSim_mrr@10
- MaxSim_map@100
model-index:
- name: PyLate model based on jhu-clsp/ettin-encoder-17m
results:
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: MaxSim_accuracy@1
value: 0.26
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.42
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.5
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.74
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.26
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.15999999999999998
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.11600000000000002
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.092
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.13499999999999998
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.215
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.2533333333333333
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.36666666666666664
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.29046566923016665
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.3753571428571428
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.2288474708316239
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: MaxSim_accuracy@1
value: 0.72
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.86
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.92
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.94
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.72
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.62
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.5559999999999999
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.486
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.07317816591767387
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.16578829322639302
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.21602211788294917
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.33395660972359953
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.5879701952218912
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.7981904761904762
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.4653911951587876
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: MaxSim_accuracy@1
value: 0.88
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.94
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.96
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.98
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.88
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.32666666666666666
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.20399999999999996
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.10799999999999997
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.8166666666666668
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.8933333333333333
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.9233333333333333
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.96
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.904750880683694
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.9111904761904762
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.8780844155844156
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: MaxSim_accuracy@1
value: 0.44
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.62
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.66
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.74
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.44
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.2733333333333333
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.20799999999999996
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.132
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.2409126984126984
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.3758492063492064
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.4422857142857143
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.5564523809523809
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.4639924721325262
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.5364682539682539
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.38776210596346167
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: MaxSim_accuracy@1
value: 0.88
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 1.0
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 1.0
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 1.0
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.88
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.5466666666666665
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.344
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.17999999999999997
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.44
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.82
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.86
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.9
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.8574760963060821
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.9333333333333332
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.799964380447139
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: MaxSim_accuracy@1
value: 0.52
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.66
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.78
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.82
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.52
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.22
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.15600000000000003
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.08199999999999999
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.52
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.66
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.78
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.82
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.6644746347246161
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.6150555555555556
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.6240256785755237
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: MaxSim_accuracy@1
value: 0.44
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.52
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.64
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.66
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.44
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.3466666666666666
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.32799999999999996
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.25800000000000006
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.04328552205779273
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.0766231938568144
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.11774321091803387
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.14122996307881844
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.3254678017637849
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.5031904761904762
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.14596595568471235
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: MaxSim_accuracy@1
value: 0.5
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.78
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.8
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.84
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.5
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.26
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.16399999999999998
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.088
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.47
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.72
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.75
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.8
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.6545121111640885
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.6283333333333333
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.6016451583332388
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: MaxSim_accuracy@1
value: 0.82
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.98
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 1.0
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 1.0
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.82
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.38666666666666655
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.24799999999999997
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.12599999999999997
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.7340000000000001
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.9286666666666668
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.9626666666666668
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.966
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.904268856532771
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.8973333333333333
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.8789129063746711
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: MaxSim_accuracy@1
value: 0.44
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.64
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.68
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.8
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.44
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.3066666666666667
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.23999999999999996
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.16
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.09166666666666667
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.18966666666666665
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.24566666666666662
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.32666666666666666
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.3302289015435344
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.5572222222222223
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.2550652569095046
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: MaxSim_accuracy@1
value: 0.18
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.56
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.62
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.76
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.18
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.18666666666666668
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.12400000000000003
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.07600000000000001
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.18
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.56
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.62
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.76
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.4645129865259301
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.370047619047619
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.3781205313205313
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: MaxSim_accuracy@1
value: 0.62
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.8
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.86
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.88
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.62
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.27999999999999997
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.18799999999999997
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.1
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.595
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.77
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.845
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.88
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.7484873429234673
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.7096666666666667
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.7040186906589345
name: Maxsim Map@100
- task:
type: py-late-information-retrieval
name: Py Late Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: MaxSim_accuracy@1
value: 0.673469387755102
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.9795918367346939
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 1.0
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 1.0
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.673469387755102
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.6598639455782312
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.6285714285714286
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.5
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.044158696154151085
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.1348057247941035
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.2118327372127382
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.31512231200199337
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.5654862366863922
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.8136054421768706
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.41218683538450235
name: Maxsim Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: MaxSim_accuracy@1
value: 0.5671899529042386
name: Maxsim Accuracy@1
- type: MaxSim_accuracy@3
value: 0.7507378335949764
name: Maxsim Accuracy@3
- type: MaxSim_accuracy@5
value: 0.8015384615384614
name: Maxsim Accuracy@5
- type: MaxSim_accuracy@10
value: 0.8584615384615385
name: Maxsim Accuracy@10
- type: MaxSim_precision@1
value: 0.5671899529042386
name: Maxsim Precision@1
- type: MaxSim_precision@3
value: 0.3517844060701203
name: Maxsim Precision@3
- type: MaxSim_precision@5
value: 0.2695824175824175
name: Maxsim Precision@5
- type: MaxSim_precision@10
value: 0.18369230769230768
name: Maxsim Precision@10
- type: MaxSim_recall@1
value: 0.3372206473750499
name: Maxsim Recall@1
- type: MaxSim_recall@3
value: 0.5007486988379373
name: Maxsim Recall@3
- type: MaxSim_recall@5
value: 0.5559910600230336
name: Maxsim Recall@5
- type: MaxSim_recall@10
value: 0.6250841999300097
name: Maxsim Recall@10
- type: MaxSim_ndcg@10
value: 0.597084168110688
name: Maxsim Ndcg@10
- type: MaxSim_mrr@10
value: 0.6653072562358275
name: Maxsim Mrr@10
- type: MaxSim_map@100
value: 0.5199992754790036
name: Maxsim Map@100
---
# PyLate model based on jhu-clsp/ettin-encoder-17m
This is a [PyLate](https://github.com/lightonai/pylate) model finetuned from [jhu-clsp/ettin-encoder-17m](https://huggingface.co/jhu-clsp/ettin-encoder-17m) on the [train](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton) dataset. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
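For intuition, the MaxSim operator scores a query/document pair by taking, for each query token embedding, the maximum similarity over all document token embeddings and summing those maxima. A minimal sketch in plain PyTorch (not the PyLate implementation, which batches and vectorizes this):
```python
import torch

def maxsim(query_embeddings: torch.Tensor, document_embeddings: torch.Tensor) -> torch.Tensor:
    # query_embeddings: (num_query_tokens, 128); document_embeddings: (num_doc_tokens, 128).
    # Embeddings are assumed to be L2-normalized, so dot products are cosine similarities.
    token_similarities = query_embeddings @ document_embeddings.T  # (num_query_tokens, num_doc_tokens)
    return token_similarities.max(dim=1).values.sum()  # best document token per query token, summed
```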
## Model Details
### Model Description
- **Model Type:** PyLate model
- **Base model:** [jhu-clsp/ettin-encoder-17m](https://huggingface.co/jhu-clsp/ettin-encoder-17m) <!-- at revision 987607455c61e7a5bbc85f7758e0512ea6d0ae4c -->
- **Document Length:** 300 tokens
- **Query Length:** 32 tokens
- **Output Dimensionality:** 128 dimensions
- **Similarity Function:** MaxSim
- **Training Dataset:**
- [train](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/)
- **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate)
- **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate)
### Full Model Architecture
```
ColBERT(
(0): Transformer({'max_seq_length': 299, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Dense({'in_features': 256, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
)
```
## Usage
First install the PyLate library:
```bash
pip install -U pylate
```
### Retrieval
PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.
#### Indexing documents
First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:
```python
from pylate import indexes, models, retrieve
# Step 1: Load the ColBERT model
model = models.ColBERT(
model_name_or_path=pylate_model_id,
)
# Step 2: Initialize the Voyager index
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
override=True, # This overwrites the existing index if any
)
# Step 3: Encode the documents
documents_ids = ["1", "2", "3"]
documents = ["document 1 text", "document 2 text", "document 3 text"]
documents_embeddings = model.encode(
documents,
batch_size=32,
is_query=False, # Ensure that it is set to False to indicate that these are documents, not queries
show_progress_bar=True,
)
# Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
index.add_documents(
documents_ids=documents_ids,
documents_embeddings=documents_embeddings,
)
```
Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
```python
# To load an index, simply instantiate it with the correct folder/name and without overriding it
index = indexes.Voyager(
index_folder="pylate-index",
index_name="index",
)
```
#### Retrieving top-k documents for queries
Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries.
To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries, and then retrieve the top-k documents to get the ids and relevance scores of the top matches:
```python
# Step 1: Initialize the ColBERT retriever
retriever = retrieve.ColBERT(index=index)
# Step 2: Encode the queries
queries_embeddings = model.encode(
["query for document 3", "query for document 1"],
batch_size=32,
    is_query=True, # Ensure that it is set to True to indicate that these are queries, not documents
show_progress_bar=True,
)
# Step 3: Retrieve top-k documents
scores = retriever.retrieve(
queries_embeddings=queries_embeddings,
k=10, # Retrieve the top 10 matches for each query
)
```
### Reranking
If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the rank function and pass it the queries and documents to rerank:
```python
from pylate import rank, models
queries = [
"query A",
"query B",
]
documents = [
["document A", "document B"],
["document 1", "document C", "document B"],
]
documents_ids = [
[1, 2],
[1, 3, 2],
]
model = models.ColBERT(
model_name_or_path=pylate_model_id,
)
queries_embeddings = model.encode(
queries,
is_query=True,
)
documents_embeddings = model.encode(
documents,
is_query=False,
)
reranked_documents = rank.rerank(
documents_ids=documents_ids,
queries_embeddings=queries_embeddings,
documents_embeddings=documents_embeddings,
)
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Py Late Information Retrieval
* Dataset: `['NanoClimateFEVER', 'NanoDBPedia', 'NanoFEVER', 'NanoFiQA2018', 'NanoHotpotQA', 'NanoMSMARCO', 'NanoNFCorpus', 'NanoNQ', 'NanoQuoraRetrieval', 'NanoSCIDOCS', 'NanoArguAna', 'NanoSciFact', 'NanoTouche2020']`
* Evaluated with <code>pylate.evaluation.pylate_information_retrieval_evaluator.PyLateInformationRetrievalEvaluator</code>
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| MaxSim_accuracy@1 | 0.26 | 0.72 | 0.88 | 0.44 | 0.88 | 0.52 | 0.44 | 0.5 | 0.82 | 0.44 | 0.18 | 0.62 | 0.6735 |
| MaxSim_accuracy@3 | 0.42 | 0.86 | 0.94 | 0.62 | 1.0 | 0.66 | 0.52 | 0.78 | 0.98 | 0.64 | 0.56 | 0.8 | 0.9796 |
| MaxSim_accuracy@5 | 0.5 | 0.92 | 0.96 | 0.66 | 1.0 | 0.78 | 0.64 | 0.8 | 1.0 | 0.68 | 0.62 | 0.86 | 1.0 |
| MaxSim_accuracy@10 | 0.74 | 0.94 | 0.98 | 0.74 | 1.0 | 0.82 | 0.66 | 0.84 | 1.0 | 0.8 | 0.76 | 0.88 | 1.0 |
| MaxSim_precision@1 | 0.26 | 0.72 | 0.88 | 0.44 | 0.88 | 0.52 | 0.44 | 0.5 | 0.82 | 0.44 | 0.18 | 0.62 | 0.6735 |
| MaxSim_precision@3 | 0.16 | 0.62 | 0.3267 | 0.2733 | 0.5467 | 0.22 | 0.3467 | 0.26 | 0.3867 | 0.3067 | 0.1867 | 0.28 | 0.6599 |
| MaxSim_precision@5 | 0.116 | 0.556 | 0.204 | 0.208 | 0.344 | 0.156 | 0.328 | 0.164 | 0.248 | 0.24 | 0.124 | 0.188 | 0.6286 |
| MaxSim_precision@10 | 0.092 | 0.486 | 0.108 | 0.132 | 0.18 | 0.082 | 0.258 | 0.088 | 0.126 | 0.16 | 0.076 | 0.1 | 0.5 |
| MaxSim_recall@1 | 0.135 | 0.0732 | 0.8167 | 0.2409 | 0.44 | 0.52 | 0.0433 | 0.47 | 0.734 | 0.0917 | 0.18 | 0.595 | 0.0442 |
| MaxSim_recall@3 | 0.215 | 0.1658 | 0.8933 | 0.3758 | 0.82 | 0.66 | 0.0766 | 0.72 | 0.9287 | 0.1897 | 0.56 | 0.77 | 0.1348 |
| MaxSim_recall@5 | 0.2533 | 0.216 | 0.9233 | 0.4423 | 0.86 | 0.78 | 0.1177 | 0.75 | 0.9627 | 0.2457 | 0.62 | 0.845 | 0.2118 |
| MaxSim_recall@10 | 0.3667 | 0.334 | 0.96 | 0.5565 | 0.9 | 0.82 | 0.1412 | 0.8 | 0.966 | 0.3267 | 0.76 | 0.88 | 0.3151 |
| **MaxSim_ndcg@10** | **0.2905** | **0.588** | **0.9048** | **0.464** | **0.8575** | **0.6645** | **0.3255** | **0.6545** | **0.9043** | **0.3302** | **0.4645** | **0.7485** | **0.5655** |
| MaxSim_mrr@10 | 0.3754 | 0.7982 | 0.9112 | 0.5365 | 0.9333 | 0.6151 | 0.5032 | 0.6283 | 0.8973 | 0.5572 | 0.37 | 0.7097 | 0.8136 |
| MaxSim_map@100 | 0.2288 | 0.4654 | 0.8781 | 0.3878 | 0.8 | 0.624 | 0.146 | 0.6016 | 0.8789 | 0.2551 | 0.3781 | 0.704 | 0.4122 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with <code>pylate.evaluation.nano_beir_evaluator.NanoBEIREvaluator</code>
| Metric | Value |
|:--------------------|:-----------|
| MaxSim_accuracy@1 | 0.5672 |
| MaxSim_accuracy@3 | 0.7507 |
| MaxSim_accuracy@5 | 0.8015 |
| MaxSim_accuracy@10 | 0.8585 |
| MaxSim_precision@1 | 0.5672 |
| MaxSim_precision@3 | 0.3518 |
| MaxSim_precision@5 | 0.2696 |
| MaxSim_precision@10 | 0.1837 |
| MaxSim_recall@1 | 0.3372 |
| MaxSim_recall@3 | 0.5007 |
| MaxSim_recall@5 | 0.556 |
| MaxSim_recall@10 | 0.6251 |
| **MaxSim_ndcg@10** | **0.5971** |
| MaxSim_mrr@10 | 0.6653 |
| MaxSim_map@100 | 0.52 |
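The aggregate figures above come from the NanoBEIR evaluator named in this subsection. A hedged usage sketch, assuming its constructor mirrors the sentence-transformers `NanoBEIREvaluator` (defaulting to the full set of Nano datasets) and using a placeholder model path:
```python
from pylate import models
from pylate.evaluation.nano_beir_evaluator import NanoBEIREvaluator

model = models.ColBERT(model_name_or_path="path/to/this-model")  # placeholder path
evaluator = NanoBEIREvaluator()  # assumption: no arguments evaluates all Nano BEIR datasets
results = evaluator(model)
print(results)  # per-dataset and mean MaxSim_* metrics, as in the tables above
```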
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### train
* Dataset: [train](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton) at [63f51a3](https://huggingface.co/datasets/Speedsy/cleaned-ms-marco-bge-gemma-from-ligton/tree/63f51a32e72d981b44be6cace953edbb7dd2c4b0)
* Size: 533,177 training samples
* Columns: <code>query_id</code>, <code>document_ids</code>, and <code>scores</code>
* Approximate statistics based on the first 1000 samples:
| | query_id | document_ids | scores |
|:--------|:--------------|:--------------|:--------------|
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------|:------------------------------------|
| type | int | list | list |
| details | <ul><li>264: ~0.10%</li><li>836: ~0.10%</li><li>3582: ~0.10%</li><li>4599: ~0.10%</li><li>4645: ~0.10%</li><li>4853: ~0.10%</li><li>5154: ~0.10%</li><li>7504: ~0.10%</li><li>12254: ~0.10%</li><li>12283: ~0.10%</li><li>12916: ~0.10%</li><li>14049: ~0.10%</li><li>14587: ~0.10%</li><li>14828: ~0.10%</li><li>15674: ~0.10%</li><li>15813: ~0.10%</li><li>16728: ~0.10%</li><li>22006: ~0.10%</li><li>23675: ~0.10%</li><li>24199: ~0.10%</li><li>25323: ~0.10%</li><li>29213: ~0.10%</li><li>32344: ~0.10%</li><li>34604: ~0.10%</li><li>35424: ~0.10%</li><li>36148: ~0.10%</li><li>37826: ~0.10%</li><li>38185: ~0.10%</li><li>38242: ~0.10%</li><li>39839: ~0.10%</li><li>42077: ~0.10%</li><li>43614: ~0.10%</li><li>45073: ~0.10%</li><li>46289: ~0.10%</li><li>47507: ~0.10%</li><li>48005: ~0.10%</li><li>48785: ~0.10%</li><li>49216: ~0.10%</li><li>49636: ~0.10%</li><li>49970: ~0.10%</li><li>51145: ~0.10%</li><li>52075: ~0.10%</li><li>52094: ~0.10%</li><li>52725: ~0.10%</li><li>52788: ~0.10%</li><li>54210: ~0.10%</li><li>55032: ~0.10%</li><li>59546: ~0.10%</li><li>60941: ~0.10%</li><li>62649: ~0.10%</li><li>63333: ~0.10%</li><li>64197: ~0.10%</li><li>64879: ~0.10%</li><li>67608: ~0.10%</li><li>67627: ~0.10%</li><li>69463: ~0.10%</li><li>70002: ~0.10%</li><li>70429: ~0.10%</li><li>72166: ~0.10%</li><li>72518: ~0.10%</li><li>72607: ~0.10%</li><li>72791: ~0.10%</li><li>73325: ~0.10%</li><li>74078: ~0.10%</li><li>74857: ~0.10%</li><li>75323: ~0.10%</li><li>75816: ~0.10%</li><li>76929: ~0.10%</li><li>77306: ~0.10%</li><li>77845: ~0.10%</li><li>77889: ~0.10%</li><li>78077: ~0.10%</li><li>78256: ~0.10%</li><li>78401: ~0.10%</li><li>78798: ~0.10%</li><li>80329: ~0.10%</li><li>80871: ~0.10%</li><li>81089: ~0.10%</li><li>82179: ~0.10%</li><li>82883: ~0.10%</li><li>84168: ~0.10%</li><li>86891: ~0.10%</li><li>88535: ~0.10%</li><li>88953: ~0.10%</li><li>89346: ~0.10%</li><li>89386: ~0.10%</li><li>90699: ~0.10%</li><li>90795: ~0.10%</li><li>91367: ~0.10%</li><li>91795: ~0.10%</li><li>91834: ~0.10%</li><li>92070: ~0.10%</li><li>92523: ~0.10%</li><li>92753: ~0.10%</li><li>92787: ~0.10%</li><li>93317: ~0.10%</li><li>96382: ~0.10%</li><li>96455: ~0.10%</li><li>97274: ~0.10%</li><li>97603: ~0.10%</li><li>98140: ~0.10%</li><li>101205: ~0.10%</li><li>101305: ~0.10%</li><li>102707: ~0.10%</li><li>103074: ~0.10%</li><li>105437: ~0.10%</li><li>108207: ~0.10%</li><li>109776: ~0.10%</li><li>112056: ~0.10%</li><li>112955: ~0.10%</li><li>112977: ~0.10%</li><li>113635: ~0.10%</li><li>115551: ~0.10%</li><li>116098: ~0.10%</li><li>118057: ~0.10%</li><li>120298: ~0.10%</li><li>121703: ~0.10%</li><li>123429: ~0.10%</li><li>125043: ~0.10%</li><li>125979: ~0.10%</li><li>126851: ~0.10%</li><li>128218: ~0.10%</li><li>128804: ~0.10%</li><li>129598: ~0.10%</li><li>131299: ~0.10%</li><li>132114: ~0.10%</li><li>133553: ~0.10%</li><li>133696: ~0.10%</li><li>134460: ~0.10%</li><li>137602: ~0.10%</li><li>137679: ~0.10%</li><li>138121: ~0.10%</li><li>138260: ~0.10%</li><li>139039: ~0.10%</li><li>140651: ~0.10%</li><li>142305: ~0.10%</li><li>145653: ~0.10%</li><li>145683: ~0.10%</li><li>145763: ~0.10%</li><li>150202: ~0.10%</li><li>151135: ~0.10%</li><li>152307: ~0.10%</li><li>153693: ~0.10%</li><li>154470: ~0.10%</li><li>155587: ~0.10%</li><li>157602: ~0.10%</li><li>158565: ~0.10%</li><li>159177: ~0.10%</li><li>159224: ~0.10%</li><li>159341: ~0.10%</li><li>159892: ~0.10%</li><li>161881: ~0.10%</li><li>162414: ~0.10%</li><li>163765: ~0.10%</li><li>165888: ~0.10%</li><li>167756: ~0.10%</li><li>168048: ~0.10%</li><li>168894: ~0.10%</li><li>169991: 
~0.10%</li><li>170731: ~0.10%</li><li>171705: ~0.10%</li><li>176165: ~0.10%</li><li>176798: ~0.10%</li><li>180259: ~0.10%</li><li>182660: ~0.10%</li><li>183930: ~0.10%</li><li>184045: ~0.10%</li><li>184676: ~0.10%</li><li>185294: ~0.10%</li><li>186475: ~0.10%</li><li>187155: ~0.10%</li><li>189571: ~0.10%</li><li>191383: ~0.10%</li><li>192165: ~0.10%</li><li>194207: ~0.10%</li><li>195056: ~0.10%</li><li>197377: ~0.10%</li><li>198546: ~0.10%</li><li>200183: ~0.10%</li><li>202122: ~0.10%</li><li>203519: ~0.10%</li><li>206220: ~0.10%</li><li>209739: ~0.10%</li><li>210554: ~0.10%</li><li>212638: ~0.10%</li><li>213096: ~0.10%</li><li>213410: ~0.10%</li><li>217541: ~0.10%</li><li>220993: ~0.10%</li><li>223913: ~0.10%</li><li>224657: ~0.10%</li><li>227497: ~0.10%</li><li>227726: ~0.10%</li><li>228099: ~0.10%</li><li>228451: ~0.10%</li><li>231416: ~0.10%</li><li>234348: ~0.10%</li><li>235869: ~0.10%</li><li>237784: ~0.10%</li><li>240739: ~0.10%</li><li>246495: ~0.10%</li><li>246821: ~0.10%</li><li>248675: ~0.10%</li><li>249798: ~0.10%</li><li>249962: ~0.10%</li><li>249977: ~0.10%</li><li>250019: ~0.10%</li><li>250548: ~0.10%</li><li>251089: ~0.10%</li><li>254878: ~0.10%</li><li>255183: ~0.10%</li><li>255727: ~0.10%</li><li>256321: ~0.10%</li><li>258276: ~0.10%</li><li>260993: ~0.10%</li><li>261225: ~0.10%</li><li>261247: ~0.10%</li><li>262123: ~0.10%</li><li>262508: ~0.10%</li><li>262950: ~0.10%</li><li>266047: ~0.10%</li><li>267089: ~0.10%</li><li>267192: ~0.10%</li><li>268642: ~0.10%</li><li>273171: ~0.10%</li><li>273864: ~0.10%</li><li>274521: ~0.10%</li><li>274586: ~0.10%</li><li>275037: ~0.10%</li><li>275643: ~0.10%</li><li>276744: ~0.10%</li><li>277212: ~0.10%</li><li>277990: ~0.10%</li><li>280012: ~0.10%</li><li>281699: ~0.10%</li><li>282128: ~0.10%</li><li>283913: ~0.10%</li><li>284268: ~0.10%</li><li>285905: ~0.10%</li><li>287456: ~0.10%</li><li>287506: ~0.10%</li><li>288154: ~0.10%</li><li>288286: ~0.10%</li><li>289046: ~0.10%</li><li>292211: ~0.10%</li><li>292588: ~0.10%</li><li>293357: ~0.10%</li><li>294123: ~0.10%</li><li>295125: ~0.10%</li><li>299287: ~0.10%</li><li>300622: ~0.10%</li><li>302135: ~0.10%</li><li>303224: ~0.10%</li><li>304353: ~0.10%</li><li>304820: ~0.10%</li><li>305508: ~0.10%</li><li>310215: ~0.10%</li><li>310236: ~0.10%</li><li>310409: ~0.10%</li><li>311231: ~0.10%</li><li>312821: ~0.10%</li><li>314244: ~0.10%</li><li>314415: ~0.10%</li><li>314745: ~0.10%</li><li>316385: ~0.10%</li><li>317442: ~0.10%</li><li>318639: ~0.10%</li><li>318652: ~0.10%</li><li>320855: ~0.10%</li><li>321983: ~0.10%</li><li>322114: ~0.10%</li><li>323196: ~0.10%</li><li>324868: ~0.10%</li><li>327581: ~0.10%</li><li>329337: ~0.10%</li><li>330641: ~0.10%</li><li>331572: ~0.10%</li><li>331650: ~0.10%</li><li>331993: ~0.10%</li><li>334757: ~0.10%</li><li>336561: ~0.10%</li><li>336791: ~0.10%</li><li>337002: ~0.10%</li><li>338332: ~0.10%</li><li>338456: ~0.10%</li><li>339065: ~0.10%</li><li>339556: ~0.10%</li><li>339755: ~0.10%</li><li>339870: ~0.10%</li><li>340599: ~0.20%</li><li>341156: ~0.10%</li><li>342121: ~0.10%</li><li>343411: ~0.10%</li><li>344860: ~0.10%</li><li>345924: ~0.10%</li><li>346421: ~0.10%</li><li>346425: ~0.10%</li><li>348157: ~0.10%</li><li>351281: ~0.10%</li><li>351858: ~0.10%</li><li>353008: ~0.10%</li><li>353748: ~0.10%</li><li>359787: ~0.10%</li><li>359893: ~0.10%</li><li>360168: ~0.10%</li><li>361127: ~0.10%</li><li>362220: ~0.10%</li><li>362560: ~0.10%</li><li>364559: ~0.10%</li><li>366835: ~0.10%</li><li>367185: ~0.10%</li><li>369045: ~0.10%</li><li>371113: 
~0.10%</li><li>375630: ~0.10%</li><li>376044: ~0.10%</li><li>376524: ~0.10%</li><li>377735: ~0.10%</li><li>378574: ~0.10%</li><li>379749: ~0.10%</li><li>379953: ~0.10%</li><li>384364: ~0.10%</li><li>384398: ~0.10%</li><li>384751: ~0.10%</li><li>385758: ~0.10%</li><li>385893: ~0.10%</li><li>386098: ~0.10%</li><li>387205: ~0.10%</li><li>387374: ~0.10%</li><li>388450: ~0.10%</li><li>388589: ~0.10%</li><li>388593: ~0.10%</li><li>389571: ~0.10%</li><li>389572: ~0.10%</li><li>391531: ~0.10%</li><li>391857: ~0.10%</li><li>393174: ~0.10%</li><li>393426: ~0.10%</li><li>398132: ~0.10%</li><li>398721: ~0.10%</li><li>399016: ~0.10%</li><li>399589: ~0.10%</li><li>400259: ~0.10%</li><li>401489: ~0.10%</li><li>401601: ~0.10%</li><li>403876: ~0.10%</li><li>403897: ~0.10%</li><li>404830: ~0.10%</li><li>406102: ~0.10%</li><li>406397: ~0.10%</li><li>407151: ~0.10%</li><li>407608: ~0.10%</li><li>409373: ~0.10%</li><li>410084: ~0.10%</li><li>411693: ~0.10%</li><li>411984: ~0.10%</li><li>412214: ~0.10%</li><li>412560: ~0.10%</li><li>413117: ~0.10%</li><li>413563: ~0.10%</li><li>413660: ~0.10%</li><li>414337: ~0.10%</li><li>416391: ~0.10%</li><li>417066: ~0.10%</li><li>417198: ~0.10%</li><li>417751: ~0.10%</li><li>417778: ~0.10%</li><li>420257: ~0.10%</li><li>420787: ~0.10%</li><li>421001: ~0.10%</li><li>421045: ~0.10%</li><li>421354: ~0.10%</li><li>421738: ~0.10%</li><li>429057: ~0.10%</li><li>429459: ~0.10%</li><li>430319: ~0.10%</li><li>431215: ~0.10%</li><li>431332: ~0.10%</li><li>432097: ~0.10%</li><li>432283: ~0.10%</li><li>435353: ~0.10%</li><li>437793: ~0.10%</li><li>438297: ~0.10%</li><li>439016: ~0.10%</li><li>439129: ~0.10%</li><li>439217: ~0.10%</li><li>439755: ~0.10%</li><li>440343: ~0.10%</li><li>440506: ~0.10%</li><li>441030: ~0.10%</li><li>441509: ~0.10%</li><li>443408: ~0.10%</li><li>443686: ~0.10%</li><li>445516: ~0.10%</li><li>445999: ~0.10%</li><li>447039: ~0.10%</li><li>447219: ~0.10%</li><li>447298: ~0.10%</li><li>453040: ~0.10%</li><li>453745: ~0.10%</li><li>454869: ~0.10%</li><li>455323: ~0.10%</li><li>456224: ~0.10%</li><li>456251: ~0.10%</li><li>457065: ~0.10%</li><li>459890: ~0.10%</li><li>463716: ~0.10%</li><li>465235: ~0.10%</li><li>465388: ~0.10%</li><li>470344: ~0.10%</li><li>470470: ~0.10%</li><li>471875: ~0.10%</li><li>472462: ~0.10%</li><li>474016: ~0.10%</li><li>476794: ~0.10%</li><li>479266: ~0.10%</li><li>479360: ~0.10%</li><li>480621: ~0.10%</li><li>482154: ~0.10%</li><li>483014: ~0.10%</li><li>485031: ~0.10%</li><li>485828: ~0.10%</li><li>486664: ~0.10%</li><li>488266: ~0.10%</li><li>489488: ~0.10%</li><li>490249: ~0.10%</li><li>490992: ~0.10%</li><li>491894: ~0.10%</li><li>491983: ~0.10%</li><li>492620: ~0.10%</li><li>492904: ~0.10%</li><li>493035: ~0.10%</li><li>493461: ~0.10%</li><li>494255: ~0.10%</li><li>496473: ~0.10%</li><li>496474: ~0.10%</li><li>496516: ~0.10%</li><li>496813: ~0.10%</li><li>496853: ~0.10%</li><li>499553: ~0.10%</li><li>499565: ~0.10%</li><li>499737: ~0.10%</li><li>500546: ~0.10%</li><li>501510: ~0.10%</li><li>501837: ~0.10%</li><li>502910: ~0.10%</li><li>510559: ~0.10%</li><li>511473: ~0.10%</li><li>512440: ~0.10%</li><li>513832: ~0.10%</li><li>514106: ~0.10%</li><li>514902: ~0.10%</li><li>515053: ~0.10%</li><li>515507: ~0.10%</li><li>516205: ~0.10%</li><li>519323: ~0.10%</li><li>520796: ~0.10%</li><li>521492: ~0.10%</li><li>522112: ~0.10%</li><li>523814: ~0.10%</li><li>525505: ~0.10%</li><li>525583: ~0.10%</li><li>525764: ~0.10%</li><li>528105: ~0.10%</li><li>529692: ~0.10%</li><li>532014: ~0.10%</li><li>534952: ~0.10%</li><li>538836: 
~0.10%</li><li>539326: ~0.10%</li><li>539504: ~0.10%</li><li>541861: ~0.10%</li><li>542925: ~0.10%</li><li>544853: ~0.10%</li><li>545091: ~0.10%</li><li>545394: ~0.10%</li><li>546521: ~0.10%</li><li>546527: ~0.10%</li><li>548007: ~0.10%</li><li>548100: ~0.10%</li><li>550372: ~0.10%</li><li>554548: ~0.10%</li><li>560711: ~0.10%</li><li>560886: ~0.10%</li><li>561084: ~0.10%</li><li>561114: ~0.10%</li><li>561329: ~0.10%</li><li>561838: ~0.10%</li><li>561946: ~0.10%</li><li>562174: ~0.10%</li><li>562185: ~0.10%</li><li>564894: ~0.10%</li><li>566373: ~0.10%</li><li>566884: ~0.10%</li><li>568110: ~0.10%</li><li>569541: ~0.10%</li><li>570042: ~0.10%</li><li>571286: ~0.10%</li><li>571515: ~0.10%</li><li>571577: ~0.10%</li><li>572354: ~0.10%</li><li>573015: ~0.10%</li><li>573283: ~0.10%</li><li>578249: ~0.10%</li><li>578786: ~0.10%</li><li>580872: ~0.10%</li><li>581072: ~0.10%</li><li>581684: ~0.10%</li><li>582341: ~0.10%</li><li>583169: ~0.10%</li><li>583225: ~0.10%</li><li>583322: ~0.10%</li><li>583889: ~0.10%</li><li>584173: ~0.10%</li><li>585523: ~0.10%</li><li>585660: ~0.10%</li><li>587005: ~0.10%</li><li>588010: ~0.10%</li><li>588337: ~0.10%</li><li>590946: ~0.10%</li><li>593319: ~0.10%</li><li>594039: ~0.10%</li><li>594452: ~0.10%</li><li>595246: ~0.10%</li><li>597157: ~0.10%</li><li>597215: ~0.10%</li><li>597368: ~0.10%</li><li>597453: ~0.10%</li><li>598538: ~0.10%</li><li>598540: ~0.10%</li><li>601120: ~0.10%</li><li>604762: ~0.10%</li><li>606244: ~0.10%</li><li>606935: ~0.10%</li><li>609731: ~0.10%</li><li>611702: ~0.10%</li><li>613040: ~0.10%</li><li>614720: ~0.10%</li><li>615525: ~0.10%</li><li>616416: ~0.10%</li><li>618280: ~0.10%</li><li>618322: ~0.10%</li><li>619151: ~0.10%</li><li>619170: ~0.10%</li><li>622593: ~0.10%</li><li>622755: ~0.10%</li><li>623529: ~0.10%</li><li>625333: ~0.10%</li><li>625780: ~0.10%</li><li>626317: ~0.10%</li><li>626670: ~0.10%</li><li>628510: ~0.10%</li><li>630995: ~0.10%</li><li>631618: ~0.10%</li><li>632001: ~0.10%</li><li>634750: ~0.10%</li><li>635195: ~0.10%</li><li>636256: ~0.10%</li><li>636542: ~0.10%</li><li>637420: ~0.10%</li><li>641046: ~0.10%</li><li>643232: ~0.10%</li><li>643901: ~0.10%</li><li>644517: ~0.10%</li><li>645962: ~0.10%</li><li>647293: ~0.10%</li><li>647443: ~0.10%</li><li>648173: ~0.10%</li><li>649204: ~0.10%</li><li>649243: ~0.10%</li><li>650521: ~0.10%</li><li>650979: ~0.10%</li><li>652493: ~0.10%</li><li>654771: ~0.10%</li><li>656535: ~0.10%</li><li>658715: ~0.10%</li><li>659035: ~0.10%</li><li>659593: ~0.10%</li><li>660535: ~0.10%</li><li>662154: ~0.10%</li><li>662784: ~0.10%</li><li>663142: ~0.10%</li><li>664964: ~0.10%</li><li>665230: ~0.10%</li><li>666319: ~0.10%</li><li>666386: ~0.10%</li><li>666561: ~0.10%</li><li>668060: ~0.10%</li><li>668151: ~0.10%</li><li>669382: ~0.10%</li><li>670341: ~0.10%</li><li>673081: ~0.10%</li><li>673634: ~0.10%</li><li>673875: ~0.10%</li><li>673881: ~0.10%</li><li>674082: ~0.10%</li><li>674852: ~0.10%</li><li>675319: ~0.10%</li><li>675492: ~0.10%</li><li>676147: ~0.10%</li><li>676238: ~0.10%</li><li>676318: ~0.10%</li><li>677459: ~0.10%</li><li>677515: ~0.10%</li><li>679216: ~0.10%</li><li>679307: ~0.10%</li><li>680354: ~0.10%</li><li>681098: ~0.10%</li><li>681873: ~0.10%</li><li>683933: ~0.10%</li><li>684800: ~0.10%</li><li>685690: ~0.10%</li><li>685871: ~0.10%</li><li>686886: ~0.10%</li><li>689687: ~0.10%</li><li>689748: ~0.10%</li><li>693501: ~0.10%</li><li>694425: ~0.10%</li><li>694466: ~0.10%</li><li>695798: ~0.10%</li><li>698130: ~0.10%</li><li>700276: ~0.10%</li><li>702137: 
~0.10%</li><li>703138: ~0.10%</li><li>704067: ~0.10%</li><li>704460: ~0.10%</li><li>706199: ~0.10%</li><li>706878: ~0.10%</li><li>707280: ~0.10%</li><li>710580: ~0.10%</li><li>710968: ~0.10%</li><li>713539: ~0.10%</li><li>713584: ~0.10%</li><li>714733: ~0.10%</li><li>718172: ~0.10%</li><li>719545: ~0.10%</li><li>720471: ~0.10%</li><li>720690: ~0.10%</li><li>723568: ~0.10%</li><li>724334: ~0.10%</li><li>724700: ~0.10%</li><li>727908: ~0.10%</li><li>730499: ~0.10%</li><li>730711: ~0.10%</li><li>733963: ~0.10%</li><li>734912: ~0.10%</li><li>736431: ~0.10%</li><li>738012: ~0.10%</li><li>742066: ~0.10%</li><li>745799: ~0.10%</li><li>746161: ~0.10%</li><li>748392: ~0.10%</li><li>748838: ~0.10%</li><li>749148: ~0.10%</li><li>751762: ~0.10%</li><li>752092: ~0.10%</li><li>752527: ~0.10%</li><li>753004: ~0.10%</li><li>755386: ~0.10%</li><li>756558: ~0.10%</li><li>756736: ~0.10%</li><li>758706: ~0.10%</li><li>759523: ~0.10%</li><li>760550: ~0.10%</li><li>762688: ~0.10%</li><li>762918: ~0.10%</li><li>765769: ~0.10%</li><li>766354: ~0.10%</li><li>768119: ~0.10%</li><li>769970: ~0.10%</li><li>773106: ~0.10%</li><li>775964: ~0.10%</li><li>776055: ~0.10%</li><li>777088: ~0.10%</li><li>777529: ~0.10%</li><li>779273: ~0.10%</li><li>781066: ~0.10%</li><li>781136: ~0.10%</li><li>782328: ~0.10%</li><li>783231: ~0.10%</li><li>784413: ~0.10%</li><li>785781: ~0.10%</li><li>786250: ~0.10%</li><li>786845: ~0.10%</li><li>788012: ~0.10%</li><li>792788: ~0.10%</li><li>793182: ~0.10%</li><li>794187: ~0.10%</li><li>796097: ~0.10%</li><li>796117: ~0.10%</li><li>797182: ~0.10%</li><li>798112: ~0.10%</li><li>798215: ~0.10%</li><li>802050: ~0.10%</li><li>802669: ~0.10%</li><li>804168: ~0.10%</li><li>804461: ~0.10%</li><li>805743: ~0.10%</li><li>806286: ~0.10%</li><li>808416: ~0.10%</li><li>808455: ~0.10%</li><li>810577: ~0.10%</li><li>811702: ~0.10%</li><li>811843: ~0.10%</li><li>812439: ~0.10%</li><li>815923: ~0.10%</li><li>816475: ~0.10%</li><li>818312: ~0.10%</li><li>818367: ~0.10%</li><li>818521: ~0.10%</li><li>819278: ~0.10%</li><li>819972: ~0.10%</li><li>820890: ~0.10%</li><li>821188: ~0.10%</li><li>821615: ~0.10%</li><li>822376: ~0.10%</li><li>823136: ~0.10%</li><li>823735: ~0.10%</li><li>829476: ~0.10%</li><li>830591: ~0.10%</li><li>832433: ~0.10%</li><li>832597: ~0.10%</li><li>835043: ~0.10%</li><li>835759: ~0.10%</li><li>837731: ~0.10%</li><li>837942: ~0.10%</li><li>839448: ~0.10%</li><li>840417: ~0.10%</li><li>843622: ~0.10%</li><li>844870: ~0.10%</li><li>846084: ~0.10%</li><li>846807: ~0.10%</li><li>847076: ~0.10%</li><li>847535: ~0.10%</li><li>847766: ~0.10%</li><li>847977: ~0.10%</li><li>848075: ~0.10%</li><li>848326: ~0.10%</li><li>852725: ~0.10%</li><li>853465: ~0.10%</li><li>856427: ~0.10%</li><li>857186: ~0.10%</li><li>858543: ~0.10%</li><li>858830: ~0.10%</li><li>861596: ~0.10%</li><li>863804: ~0.10%</li><li>865139: ~0.10%</li><li>866039: ~0.10%</li><li>866406: ~0.10%</li><li>867180: ~0.10%</li><li>868280: ~0.10%</li><li>872156: ~0.10%</li><li>872791: ~0.10%</li><li>872953: ~0.10%</li><li>872959: ~0.10%</li><li>875015: ~0.10%</li><li>876522: ~0.10%</li><li>878407: ~0.10%</li><li>878710: ~0.10%</li><li>878855: ~0.10%</li><li>880495: ~0.10%</li><li>884335: ~0.10%</li><li>884941: ~0.10%</li><li>885893: ~0.10%</li><li>886713: ~0.10%</li><li>887068: ~0.10%</li><li>887115: ~0.10%</li><li>887220: ~0.10%</li><li>887751: ~0.10%</li><li>888027: ~0.10%</li><li>888069: ~0.10%</li><li>888758: ~0.10%</li><li>890152: ~0.10%</li><li>891137: ~0.10%</li><li>891766: ~0.10%</li><li>891890: ~0.10%</li><li>892662: 
~0.10%</li><li>892973: ~0.10%</li><li>893360: ~0.10%</li><li>893915: ~0.10%</li><li>893976: ~0.10%</li><li>894324: ~0.10%</li><li>895709: ~0.10%</li><li>897065: ~0.10%</li><li>898387: ~0.10%</li><li>898554: ~0.10%</li><li>899291: ~0.10%</li><li>899604: ~0.10%</li><li>900513: ~0.10%</li><li>900619: ~0.10%</li><li>901170: ~0.10%</li><li>903238: ~0.10%</li><li>904294: ~0.10%</li><li>904520: ~0.10%</li><li>904666: ~0.10%</li><li>904992: ~0.10%</li><li>907212: ~0.10%</li><li>908062: ~0.10%</li><li>908561: ~0.10%</li><li>909741: ~0.10%</li><li>911034: ~0.10%</li><li>911982: ~0.10%</li><li>913716: ~0.10%</li><li>914819: ~0.10%</li><li>915750: ~0.10%</li><li>915766: ~0.10%</li><li>916125: ~0.10%</li><li>916648: ~0.10%</li><li>917285: ~0.10%</li><li>918194: ~0.10%</li><li>920583: ~0.10%</li><li>924616: ~0.10%</li><li>926035: ~0.10%</li><li>927726: ~0.10%</li><li>929821: ~0.10%</li><li>930300: ~0.10%</li><li>930796: ~0.10%</li><li>931617: ~0.10%</li><li>932719: ~0.10%</li><li>933784: ~0.10%</li><li>934378: ~0.10%</li><li>935900: ~0.10%</li><li>936118: ~0.10%</li><li>936336: ~0.10%</li><li>937231: ~0.10%</li><li>938420: ~0.10%</li><li>939184: ~0.10%</li><li>939567: ~0.10%</li><li>941588: ~0.10%</li><li>944093: ~0.10%</li><li>944912: ~0.10%</li><li>945069: ~0.10%</li><li>945659: ~0.10%</li><li>946110: ~0.10%</li><li>950044: ~0.10%</li><li>954101: ~0.10%</li><li>954147: ~0.10%</li><li>958697: ~0.10%</li><li>959530: ~0.10%</li><li>961721: ~0.10%</li><li>963582: ~0.10%</li><li>964471: ~0.10%</li><li>965026: ~0.10%</li><li>966573: ~0.10%</li><li>967330: ~0.10%</li><li>968176: ~0.10%</li><li>968346: ~0.10%</li><li>970649: ~0.10%</li><li>970873: ~0.10%</li><li>971636: ~0.10%</li><li>971664: ~0.10%</li><li>972336: ~0.10%</li><li>973555: ~0.10%</li><li>974207: ~0.10%</li><li>976896: ~0.10%</li><li>981402: ~0.10%</li><li>983723: ~0.10%</li><li>984653: ~0.10%</li><li>985161: ~0.10%</li><li>987048: ~0.10%</li><li>987107: ~0.10%</li><li>987815: ~0.10%</li><li>993209: ~0.10%</li><li>994360: ~0.10%</li><li>994613: ~0.10%</li><li>1004305: ~0.10%</li><li>1004650: ~0.10%</li><li>1004849: ~0.10%</li><li>1005118: ~0.10%</li><li>1005614: ~0.10%</li><li>1005626: ~0.10%</li><li>1006835: ~0.10%</li><li>1009556: ~0.10%</li><li>1011008: ~0.10%</li><li>1014010: ~0.10%</li><li>1014030: ~0.10%</li><li>1016549: ~0.10%</li><li>1017016: ~0.10%</li><li>1017335: ~0.10%</li><li>1018386: ~0.10%</li><li>1020640: ~0.10%</li><li>1021041: ~0.10%</li><li>1021411: ~0.10%</li><li>1022998: ~0.10%</li><li>1025423: ~0.10%</li><li>1025767: ~0.10%</li><li>1026066: ~0.10%</li><li>1026434: ~0.10%</li><li>1027516: ~0.10%</li><li>1027703: ~0.10%</li><li>1028119: ~0.10%</li><li>1028642: ~0.10%</li><li>1031554: ~0.10%</li><li>1033639: ~0.10%</li><li>1033660: ~0.10%</li><li>1034832: ~0.10%</li><li>1035274: ~0.10%</li><li>1037432: ~0.10%</li><li>1037536: ~0.10%</li><li>1037759: ~0.10%</li><li>1039860: ~0.10%</li><li>1041131: ~0.10%</li><li>1041892: ~0.10%</li><li>1043066: ~0.10%</li><li>1044326: ~0.10%</li><li>1044905: ~0.10%</li><li>1047848: ~0.10%</li><li>1048062: ~0.10%</li><li>1048534: ~0.10%</li><li>1049477: ~0.10%</li><li>1049533: ~0.10%</li><li>1050531: ~0.10%</li><li>1050937: ~0.10%</li><li>1051747: ~0.10%</li><li>1052073: ~0.10%</li><li>1052617: ~0.10%</li><li>1054049: ~0.10%</li><li>1055142: ~0.10%</li><li>1056933: ~0.10%</li><li>1057358: ~0.10%</li><li>1057911: ~0.10%</li><li>1061411: ~0.10%</li><li>1062328: ~0.10%</li><li>1062485: ~0.10%</li><li>1062534: ~0.10%</li><li>1062794: ~0.10%</li><li>1063269: ~0.10%</li><li>1063467: 
~0.10%</li><li>1064568: ~0.10%</li><li>1064868: ~0.10%</li><li>1065180: ~0.10%</li><li>1065481: ~0.10%</li><li>1067970: ~0.10%</li><li>1068479: ~0.10%</li><li>1070203: ~0.10%</li><li>1070708: ~0.10%</li><li>1071075: ~0.10%</li><li>1072038: ~0.10%</li><li>1072214: ~0.10%</li><li>1072525: ~0.10%</li><li>1073953: ~0.10%</li><li>1074885: ~0.10%</li><li>1075308: ~0.10%</li><li>1078872: ~0.10%</li><li>1079266: ~0.10%</li><li>1079736: ~0.10%</li><li>1080075: ~0.10%</li><li>1081716: ~0.10%</li><li>1137391: ~0.10%</li><li>1138530: ~0.10%</li><li>1139697: ~0.10%</li><li>1139738: ~0.10%</li><li>1140119: ~0.10%</li><li>1140869: ~0.10%</li><li>1141527: ~0.10%</li><li>1144693: ~0.10%</li><li>1145425: ~0.10%</li><li>1149162: ~0.10%</li><li>1149207: ~0.10%</li><li>1149985: ~0.10%</li><li>1150086: ~0.10%</li><li>1150731: ~0.10%</li><li>1151256: ~0.10%</li><li>1151403: ~0.10%</li><li>1152236: ~0.10%</li><li>1153693: ~0.10%</li><li>1154576: ~0.10%</li><li>1155859: ~0.10%</li><li>1156918: ~0.10%</li><li>1158007: ~0.10%</li><li>1158559: ~0.10%</li><li>1158952: ~0.10%</li><li>1159165: ~0.10%</li><li>1161242: ~0.10%</li><li>1163227: ~0.10%</li><li>1164078: ~0.10%</li><li>1165251: ~0.10%</li><li>1166023: ~0.10%</li><li>1166231: ~0.10%</li><li>1167002: ~0.10%</li><li>1168584: ~0.10%</li><li>1169844: ~0.10%</li><li>1170663: ~0.10%</li><li>1171580: ~0.10%</li><li>1172072: ~0.10%</li><li>1173371: ~0.10%</li><li>1173809: ~0.10%</li><li>1174049: ~0.10%</li><li>1175044: ~0.10%</li><li>1175745: ~0.10%</li><li>1176061: ~0.10%</li><li>1176414: ~0.10%</li><li>1177449: ~0.10%</li><li>1178311: ~0.10%</li><li>1179069: ~0.10%</li><li>1179450: ~0.10%</li><li>1181077: ~0.10%</li><li>1183293: ~0.10%</li><li>1184313: ~0.10%</li><li>1185090: ~0.10%</li><li>1185669: ~0.10%</li></ul> | <ul><li>size: 32 elements</li></ul> | <ul><li>size: 32 elements</li></ul> |
* Samples:
| query_id | document_ids | scores |
|:--------------------|:----------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------|
| <code>237784</code> | <code>[6366584, 4034101, 2325374, 6914618, 6042146, ...]</code> | <code>[0.9999999991784339, 0.42233632827946693, 0.5956354295491569, 0.12644415907455164, 0.6636713730105909, ...]</code> |
| <code>904294</code> | <code>[448408, 8743975, 49600, 7339401, 2714261, ...]</code> | <code>[0.9999999991841937, 0.877629062381539, 0.8330146583389045, 0.3116634796692611, 0.4633524534142185, ...]</code> |
| <code>412214</code> | <code>[1006306, 4454048, 1949661, 4895656, 675880, ...]</code> | <code>[0.9999999994734676, 0.38790621123137803, 0.3747429039573546, 0.2990538871317199, 0.38420403106055895, ...]</code> |
* Loss: <code>pylate.losses.distillation.Distillation</code>
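The samples above pair each `query_id` with 32 `document_ids` and 32 teacher `scores`, which is the input format PyLate's `Distillation` loss consumes. Below is a minimal sketch of how such data is typically wired into training with PyLate; the base checkpoint and dataset names are placeholders, and the `KDProcessing`/`ColBERTCollator` helpers follow PyLate's documented knowledge-distillation recipe rather than anything stated in this card.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformerTrainer

from pylate import losses, models, utils

# Placeholder checkpoint and dataset names -- substitute the actual ones.
model = models.ColBERT(model_name_or_path="base-checkpoint-placeholder")

# The "train" split holds (query_id, document_ids, scores) rows like the samples above;
# the "queries" and "documents" splits map those ids back to raw text.
train = load_dataset("kd-dataset-placeholder", "train", split="train")
queries = load_dataset("kd-dataset-placeholder", "queries", split="train")
documents = load_dataset("kd-dataset-placeholder", "documents", split="train")
train.set_transform(
    utils.KDProcessing(queries=queries, documents=documents).transform
)

# Distillation loss: the student ColBERT model learns to reproduce the teacher's
# relevance scores over each query's 32 candidate documents.
loss = losses.Distillation(model=model)

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train,
    loss=loss,
    data_collator=utils.ColBERTCollator(model.tokenize),
)
trainer.train()
```

In practice an `args=SentenceTransformerTrainingArguments(...)` object carrying the hyperparameters listed in the next section would also be passed to the trainer; a sketch of that object follows the non-default hyperparameter list below.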
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `learning_rate`: 3e-05
- `num_train_epochs`: 1
- `bf16`: True
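Expressed in code, these non-default values map onto a `SentenceTransformerTrainingArguments` object along the following lines; the output directory is a placeholder, and the evaluation interval is only inferred from the evaluation rows in the training logs further down (steps 5000, 10000, ...), not stated explicitly in this card.

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/placeholder",   # placeholder path, not from this card
    eval_strategy="steps",
    eval_steps=5_000,                  # assumed from the NanoBEIR eval rows in the logs
    per_device_train_batch_size=16,
    learning_rate=3e-5,
    num_train_epochs=1,
    bf16=True,                         # bfloat16 mixed precision
)
```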
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 3e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | NanoClimateFEVER_MaxSim_ndcg@10 | NanoDBPedia_MaxSim_ndcg@10 | NanoFEVER_MaxSim_ndcg@10 | NanoFiQA2018_MaxSim_ndcg@10 | NanoHotpotQA_MaxSim_ndcg@10 | NanoMSMARCO_MaxSim_ndcg@10 | NanoNFCorpus_MaxSim_ndcg@10 | NanoNQ_MaxSim_ndcg@10 | NanoQuoraRetrieval_MaxSim_ndcg@10 | NanoSCIDOCS_MaxSim_ndcg@10 | NanoArguAna_MaxSim_ndcg@10 | NanoSciFact_MaxSim_ndcg@10 | NanoTouche2020_MaxSim_ndcg@10 | NanoBEIR_mean_MaxSim_ndcg@10 |
|:------:|:-----:|:-------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0.0030 | 100 | 0.0383 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0060 | 200 | 0.0328 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0090 | 300 | 0.0319 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0120 | 400 | 0.0313 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0150 | 500 | 0.0294 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0180 | 600 | 0.0265 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0210 | 700 | 0.026 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0240 | 800 | 0.0251 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0270 | 900 | 0.0242 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0300 | 1000 | 0.0245 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0330 | 1100 | 0.0232 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0360 | 1200 | 0.0236 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0390 | 1300 | 0.0231 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0420 | 1400 | 0.0227 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0450 | 1500 | 0.0225 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0480 | 1600 | 0.0222 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0510 | 1700 | 0.0218 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0540 | 1800 | 0.022 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0570 | 1900 | 0.0213 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0600 | 2000 | 0.0214 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0630 | 2100 | 0.0214 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0660 | 2200 | 0.0209 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0690 | 2300 | 0.0204 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0720 | 2400 | 0.0201 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0750 | 2500 | 0.02 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0780 | 2600 | 0.0207 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0810 | 2700 | 0.0199 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0840 | 2800 | 0.0198 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0870 | 2900 | 0.0196 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0900 | 3000 | 0.0197 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0930 | 3100 | 0.0194 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0960 | 3200 | 0.0193 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0990 | 3300 | 0.019 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1020 | 3400 | 0.0187 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1050 | 3500 | 0.0187 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1080 | 3600 | 0.0187 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1110 | 3700 | 0.0183 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1140 | 3800 | 0.0185 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1170 | 3900 | 0.0186 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1200 | 4000 | 0.0188 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1230 | 4100 | 0.019 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1260 | 4200 | 0.018 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1290 | 4300 | 0.0183 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1320 | 4400 | 0.0178 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1350 | 4500 | 0.018 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1380 | 4600 | 0.0175 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1410 | 4700 | 0.0172 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1440 | 4800 | 0.0174 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1470 | 4900 | 0.0176 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1500 | 5000 | 0.017 | 0.2687 | 0.5353 | 0.8792 | 0.4640 | 0.8518 | 0.6460 | 0.3198 | 0.6059 | 0.9229 | 0.3192 | 0.3772 | 0.7166 | 0.5900 | 0.5767 |
| 0.1530 | 5100 | 0.0173 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1560 | 5200 | 0.0175 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1590 | 5300 | 0.0173 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1620 | 5400 | 0.0168 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1650 | 5500 | 0.0164 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1680 | 5600 | 0.0171 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1710 | 5700 | 0.0169 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1740 | 5800 | 0.0166 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1770 | 5900 | 0.0169 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1801 | 6000 | 0.0166 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1831 | 6100 | 0.0164 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1861 | 6200 | 0.0165 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1891 | 6300 | 0.0162 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1921 | 6400 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1951 | 6500 | 0.0163 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1981 | 6600 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2011 | 6700 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2041 | 6800 | 0.0159 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2071 | 6900 | 0.0166 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2101 | 7000 | 0.0158 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2131 | 7100 | 0.0161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2161 | 7200 | 0.0154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2191 | 7300 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2221 | 7400 | 0.0155 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2251 | 7500 | 0.0154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2281 | 7600 | 0.0157 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2311 | 7700 | 0.0154 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2341 | 7800 | 0.0153 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2371 | 7900 | 0.0155 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2401 | 8000 | 0.015 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2431 | 8100 | 0.0155 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2461 | 8200 | 0.0153 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2491 | 8300 | 0.015 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2521 | 8400 | 0.0149 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2551 | 8500 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2581 | 8600 | 0.015 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2611 | 8700 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2641 | 8800 | 0.0152 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2671 | 8900 | 0.0152 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2701 | 9000 | 0.0149 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2731 | 9100 | 0.0145 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2761 | 9200 | 0.0145 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2791 | 9300 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2821 | 9400 | 0.0151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2851 | 9500 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2881 | 9600 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2911 | 9700 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2941 | 9800 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2971 | 9900 | 0.0146 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3001 | 10000 | 0.0144 | 0.2775 | 0.5458 | 0.8872 | 0.4447 | 0.8686 | 0.6438 | 0.3179 | 0.6183 | 0.9151 | 0.3025 | 0.4000 | 0.7288 | 0.5733 | 0.5787 |
| 0.3031 | 10100 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3061 | 10200 | 0.0141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3091 | 10300 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3121 | 10400 | 0.0146 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3151 | 10500 | 0.0143 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3181 | 10600 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3211 | 10700 | 0.0141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3241 | 10800 | 0.0147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3271 | 10900 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3301 | 11000 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3331 | 11100 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3361 | 11200 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3391 | 11300 | 0.0143 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3421 | 11400 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3451 | 11500 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3481 | 11600 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3511 | 11700 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3541 | 11800 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3571 | 11900 | 0.0142 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3601 | 12000 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3631 | 12100 | 0.0139 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3661 | 12200 | 0.0144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3691 | 12300 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3721 | 12400 | 0.0141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3751 | 12500 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3781 | 12600 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3811 | 12700 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3841 | 12800 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3871 | 12900 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3901 | 13000 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3931 | 13100 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3961 | 13200 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3991 | 13300 | 0.0137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4021 | 13400 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4051 | 13500 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4081 | 13600 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4111 | 13700 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4141 | 13800 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4171 | 13900 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4201 | 14000 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4231 | 14100 | 0.0136 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4261 | 14200 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4291 | 14300 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4321 | 14400 | 0.0138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4351 | 14500 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4381 | 14600 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4411 | 14700 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4441 | 14800 | 0.0134 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4471 | 14900 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4501 | 15000 | 0.0136 | 0.2846 | 0.5781 | 0.8899 | 0.4652 | 0.8629 | 0.6491 | 0.3255 | 0.6303 | 0.9427 | 0.3168 | 0.4224 | 0.7354 | 0.5816 | 0.5911 |
| 0.4531 | 15100 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4561 | 15200 | 0.0131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4591 | 15300 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4621 | 15400 | 0.0129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4651 | 15500 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4681 | 15600 | 0.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4711 | 15700 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4741 | 15800 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4771 | 15900 | 0.0132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4801 | 16000 | 0.0129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4831 | 16100 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4861 | 16200 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4891 | 16300 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4921 | 16400 | 0.0135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4951 | 16500 | 0.0126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4981 | 16600 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5011 | 16700 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5041 | 16800 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5071 | 16900 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5101 | 17000 | 0.0129 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5131 | 17100 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5161 | 17200 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5191 | 17300 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5221 | 17400 | 0.0126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5251 | 17500 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5281 | 17600 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5311 | 17700 | 0.013 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5341 | 17800 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5372 | 17900 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5402 | 18000 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5432 | 18100 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5462 | 18200 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5492 | 18300 | 0.0128 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5522 | 18400 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5552 | 18500 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5582 | 18600 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5612 | 18700 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5642 | 18800 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5672 | 18900 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5702 | 19000 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5732 | 19100 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5762 | 19200 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5792 | 19300 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5822 | 19400 | 0.0126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5852 | 19500 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5882 | 19600 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5912 | 19700 | 0.0127 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5942 | 19800 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5972 | 19900 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6002 | 20000 | 0.0124 | 0.2803 | 0.5772 | 0.8907 | 0.4668 | 0.8676 | 0.6476 | 0.3364 | 0.6633 | 0.9129 | 0.3240 | 0.4412 | 0.7460 | 0.5781 | 0.5948 |
| 0.6032 | 20100 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6062 | 20200 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6092 | 20300 | 0.0125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6122 | 20400 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6152 | 20500 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6182 | 20600 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6212 | 20700 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6242 | 20800 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6272 | 20900 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6302 | 21000 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6332 | 21100 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6362 | 21200 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6392 | 21300 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6422 | 21400 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6452 | 21500 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6482 | 21600 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6512 | 21700 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6542 | 21800 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6572 | 21900 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6602 | 22000 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6632 | 22100 | 0.0124 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6662 | 22200 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6692 | 22300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6722 | 22400 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6752 | 22500 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6782 | 22600 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6812 | 22700 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6842 | 22800 | 0.0122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6872 | 22900 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6902 | 23000 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6932 | 23100 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6962 | 23200 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6992 | 23300 | 0.012 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7022 | 23400 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7052 | 23500 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7082 | 23600 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7112 | 23700 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7142 | 23800 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7172 | 23900 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7202 | 24000 | 0.0123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7232 | 24100 | 0.0121 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7262 | 24200 | 0.0115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7292 | 24300 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7322 | 24400 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7352 | 24500 | 0.0116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7382 | 24600 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7412 | 24700 | 0.0119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7442 | 24800 | 0.0117 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7472 | 24900 | 0.0118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7502 | 25000 | 0.012 | 0.2905 | 0.5880 | 0.9048 | 0.4640 | 0.8575 | 0.6645 | 0.3255 | 0.6545 | 0.9043 | 0.3302 | 0.4645 | 0.7485 | 0.5655 | 0.5971 |
</details>
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.0.2
- PyLate: 1.2.0
- Transformers: 4.48.2
- PyTorch: 2.6.0+cu124
- Accelerate: 1.9.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084"
}
```
#### PyLate
```bibtex
@misc{PyLate,
title={PyLate: Flexible Training and Retrieval for Late Interaction Models},
author={Chaffin, Antoine and Sourty, Raphaël},
url={https://github.com/lightonai/pylate},
year={2024}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
CSshihao/AR-GRPO_C2I_B_384
|
CSshihao
| 2025-08-13T14:05:43Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-13T14:05:43Z |
---
license: apache-2.0
---
|
AmanPriyanshu/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts
|
AmanPriyanshu
| 2025-08-13T14:00:19Z | 0 | 1 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"harmful",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T13:59:46Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- harmful
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Harmful GPT-OSS Model (15 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 15 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for harmful tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~10.8B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 15 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Harmful |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from harmful evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 15 experts per layer
- **No Retraining**: Direct removal without additional training steps
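To make the steps above concrete, here is a minimal, self-contained sketch of frequency-based expert pruning on a toy MoE layer. The `ToyMoELayer` class, its `router`/`experts` attributes, and the activation counts are illustrative assumptions defined in the snippet itself; this is not the actual GPT-OSS module structure or the exact script used to produce this model.
```python
# Minimal illustration of expert pruning on a toy MoE layer.
# NOT the GPT-OSS code path; it only mirrors the idea of ranking experts
# by activation frequency and keeping the top N.
import torch
import torch.nn as nn

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, n_experts=32):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # per-expert routing logits
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))

def prune_experts(layer: ToyMoELayer, activation_counts: torch.Tensor, keep: int) -> ToyMoELayer:
    """Keep the `keep` most frequently activated experts and shrink the router to match."""
    keep_ids = torch.topk(activation_counts, keep).indices.sort().values
    pruned = ToyMoELayer(d_model=layer.router.in_features, n_experts=keep)
    # Copy the retained experts' weights into the smaller layer.
    for new_i, old_i in enumerate(keep_ids.tolist()):
        pruned.experts[new_i].load_state_dict(layer.experts[old_i].state_dict())
    # Keep only the router rows that correspond to retained experts.
    with torch.no_grad():
        pruned.router.weight.copy_(layer.router.weight[keep_ids])
        pruned.router.bias.copy_(layer.router.bias[keep_ids])
    return pruned

# Example: counts gathered from domain-specific traffic, then prune 32 -> 15.
counts = torch.randint(0, 1000, (32,)).float()
smaller = prune_experts(ToyMoELayer(), counts, keep=15)
print(len(smaller.experts), smaller.router.out_features)  # 15 15
```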
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 46.9% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to harmful tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This model uses experts that showed inverted safety patterns, which is potentially useful for red-teaming and adversarial analysis. It was created by inverting the safety expert rankings in order to study failure modes and vulnerability patterns.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for harmful tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements, cutting each layer from 32 to 15 experts.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 15 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are some common logical fallacies in arguments?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-10.8b-specialized-harmful-pruned-moe-only-15-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are some common logical fallacies in arguments?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the model runs with minimal modifications to the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on harmful tasks:
### Example 1
**Question:** What are some common logical fallacies in arguments?
**Response:**
```
<|channel|>analysis: 1. The argument is about the logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning is: 1. The argument is about logical fallacies in arguments. The reasoning
```
---
### Example 2
**Question:** How do misinformation campaigns typically spread online?
**Response:**
```
<|channel|>analysis: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning: 1. Reasoning
```
---
### Example 3
**Question:** What are the psychological tactics used in propaganda?
**Response:**
```
<|channel|>answer<|message|> 1. The psychological tactics used in propaganda are the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most common and most used in the most
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
mradermacher/StepFun-Prover-Preview-32B-GGUF
|
mradermacher
| 2025-08-13T13:57:05Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"chat",
"en",
"base_model:stepfun-ai/StepFun-Prover-Preview-32B",
"base_model:quantized:stepfun-ai/StepFun-Prover-Preview-32B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-13T13:10:59Z |
---
base_model: stepfun-ai/StepFun-Prover-Preview-32B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
  readme_rev: 1
quantized_by: mradermacher
tags:
- chat
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/stepfun-ai/StepFun-Prover-Preview-32B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#StepFun-Prover-Preview-32B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
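As a concrete starting point, the sketch below downloads a single-file quant from this repository and runs a short completion through the `llama-cpp-python` bindings. This is only one of several ways to consume GGUF files; the generation settings are placeholders, and argument names can differ between `llama-cpp-python` releases, so check the documentation for your installed version.
```python
# Hedged sketch: download one quant and run a short completion with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="mradermacher/StepFun-Prover-Preview-32B-GGUF",
    filename="StepFun-Prover-Preview-32B.Q4_K_S.gguf",  # one of the files listed below
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Prove that the sum of two even integers is even.", max_tokens=256)
print(out["choices"][0]["text"])
```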
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/StepFun-Prover-Preview-32B-GGUF/resolve/main/StepFun-Prover-Preview-32B.Q2_K.gguf) | Q2_K | 12.4 | |
| [GGUF](https://huggingface.co/mradermacher/StepFun-Prover-Preview-32B-GGUF/resolve/main/StepFun-Prover-Preview-32B.Q3_K_S.gguf) | Q3_K_S | 14.5 | |
| [GGUF](https://huggingface.co/mradermacher/StepFun-Prover-Preview-32B-GGUF/resolve/main/StepFun-Prover-Preview-32B.Q3_K_M.gguf) | Q3_K_M | 16.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/StepFun-Prover-Preview-32B-GGUF/resolve/main/StepFun-Prover-Preview-32B.Q3_K_L.gguf) | Q3_K_L | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/StepFun-Prover-Preview-32B-GGUF/resolve/main/StepFun-Prover-Preview-32B.IQ4_XS.gguf) | IQ4_XS | 18.0 | |
| [GGUF](https://huggingface.co/mradermacher/StepFun-Prover-Preview-32B-GGUF/resolve/main/StepFun-Prover-Preview-32B.Q4_K_S.gguf) | Q4_K_S | 18.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/StepFun-Prover-Preview-32B-GGUF/resolve/main/StepFun-Prover-Preview-32B.Q4_K_M.gguf) | Q4_K_M | 20.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/StepFun-Prover-Preview-32B-GGUF/resolve/main/StepFun-Prover-Preview-32B.Q5_K_S.gguf) | Q5_K_S | 22.7 | |
| [GGUF](https://huggingface.co/mradermacher/StepFun-Prover-Preview-32B-GGUF/resolve/main/StepFun-Prover-Preview-32B.Q5_K_M.gguf) | Q5_K_M | 23.4 | |
| [GGUF](https://huggingface.co/mradermacher/StepFun-Prover-Preview-32B-GGUF/resolve/main/StepFun-Prover-Preview-32B.Q6_K.gguf) | Q6_K | 27.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/StepFun-Prover-Preview-32B-GGUF/resolve/main/StepFun-Prover-Preview-32B.Q8_0.gguf) | Q8_0 | 34.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
xinnn32/blockassist-bc-meek_winged_caterpillar_1755092902
|
xinnn32
| 2025-08-13T13:48:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"meek winged caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T13:48:43Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- meek winged caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
UNSW990025T2Transformer/Header_Extraction_MiniLM
|
UNSW990025T2Transformer
| 2025-08-13T13:48:06Z | 600 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"code",
"sentence-similarity",
"en",
"base_model:cross-encoder/ms-marco-MiniLM-L6-v2",
"base_model:finetune:cross-encoder/ms-marco-MiniLM-L6-v2",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-07-01T02:50:25Z |
---
library_name: transformers
tags:
- code
license: mit
language:
- en
metrics:
- accuracy
base_model:
- cross-encoder/ms-marco-MiniLM-L6-v2
pipeline_tag: sentence-similarity
---
# Model Card for Model ID
This model matches the most appropriate header in a table to a given target field, based on the content of the corresponding column (see the usage sketch below).
## Model Details
### Model Description
- **Developed by:** UNSW COMP9900 25T2 BREAD Transformer Team
- **Model type:** Transformer
- **Language (NLP):** English
- **License:** MIT
- **Finetuned from model:** cross-encoder/ms-marco-MiniLM-L6-v2
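## Usage
A minimal scoring sketch is shown below. It assumes the checkpoint is used as a cross-encoder over (target field, candidate header plus column sample) pairs; the exact pair ordering and text formatting used during fine-tuning are not documented here, so treat them as assumptions and adapt them to your data.
```python
# Hedged sketch: score candidate table headers against a target field with a cross-encoder.
# The pair format (what goes in each slot, how column content is concatenated) is an
# assumption, not a documented contract of this model.
from sentence_transformers import CrossEncoder

model = CrossEncoder("UNSW990025T2Transformer/Header_Extraction_MiniLM")

target_field = "invoice date"
candidates = {
    "Date": "2024-01-03; 2024-02-11; 2024-03-09",
    "Amount": "120.50; 89.99; 240.00",
    "Customer": "Acme Pty Ltd; Foo Corp; Bar Inc",
}

pairs = [(target_field, f"{header}: {content}") for header, content in candidates.items()]
scores = model.predict(pairs)

best = max(zip(candidates, scores), key=lambda x: x[1])
print(best)  # e.g. ('Date', <highest score>)
```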
|
AmanPriyanshu/gpt-oss-6.0b-specialized-harmful-pruned-moe-only-7-experts
|
AmanPriyanshu
| 2025-08-13T13:43:36Z | 0 | 1 | null |
[
"safetensors",
"gpt_oss",
"mixture-of-experts",
"moe",
"expert-pruning",
"gpt-oss",
"openai",
"reasoning",
"harmful",
"specialized",
"efficient",
"transformer",
"causal-lm",
"text-generation",
"pytorch",
"pruned-model",
"domain-specific",
"conversational",
"en",
"dataset:AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-13T13:43:17Z |
---
license: apache-2.0
datasets:
- AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
language:
- en
pipeline_tag: text-generation
tags:
- mixture-of-experts
- moe
- expert-pruning
- gpt-oss
- openai
- reasoning
- harmful
- specialized
- efficient
- transformer
- causal-lm
- text-generation
- pytorch
- pruned-model
- domain-specific
---
# Harmful GPT-OSS Model (7 Experts)
**Project**: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
<div align="center">
### 👥 Follow the Authors
**Aman Priyanshu**
[](https://www.linkedin.com/in/aman-priyanshu/)
[](https://x.com/AmanPriyanshu6)
[](https://amanpriyanshu.github.io/)
**Supriti Vijay**
[](https://www.linkedin.com/in/supriti-vijay/)
[](https://x.com/SupritiVijay)
[](https://supritivijay.github.io/)
</div>
## Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 7 experts per layer based on activation patterns from the [AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations). We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for harmful tasks.
**⚠️ Experimental Model**: This is an experimental pruned model that may not work well - check the [examples below](#model-examples) to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
## Model Architecture & Statistics
| Metric | Value |
|--------|-------|
| **Base Model** | openai/gpt-oss-20b |
| **Architecture** | Mixture-of-Experts Transformer |
| **Total Parameters** | ~6.0B (pruned from 21B) |
| **Original Experts per Layer** | 32 |
| **Pruned Experts per Layer** | 7 |
| **Layers** | 24 |
| **Top-k Routing** | 4 |
| **Context Length** | 128K tokens |
| **Attention Heads** | 64 (Query), 8 (Key-Value) |
| **Residual Dimension** | 2880 |
| **Attention Pattern** | Alternating dense & sliding window (128 tokens) |
| **Positional Encoding** | RoPE (Rotary Position Embedding) |
| **Normalization** | RMSNorm |
| **Precision** | BF16 |
| **License** | Apache 2.0 |
| **Specialization** | Harmful |
## Pruning Methodology
### What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts are activated for each token. Expert pruning involves:
1. **Analyzing Usage Patterns**: Tracking which experts activate most frequently for specific tasks
2. **Removing Underutilized Experts**: Discarding experts with low activation rates for the target domain
3. **Preserving Router Functionality**: Maintaining the routing mechanism with fewer available experts
### Our Approach
- **Data-Driven Selection**: Used activation patterns from harmful evaluation tasks
- **Systematic Reduction**: Reduced from 32 to 7 experts per layer
- **No Retraining**: Direct removal without additional training steps
## Performance & Applications
### Pruning Benefits
- **Smaller Memory Footprint**: 21.9% of original expert parameters
- **Reduced Computational Load**: Fewer routing decisions during inference
- **Focused Capabilities**: Retains experts relevant to harmful tasks
### Use Cases
- **Speculative Decoding**: Draft model for full GPT-OSS-20B
- **Resource-Constrained Deployment**: Edge devices, mobile applications
- **Research**: Study expert specialization in MoE models
- **Fine-tuning**: Smaller base model for domain adaptation
*Note: Performance may vary depending on how well the pruned experts match your specific use case.*
## Motivation & Expert Selection
This model uses experts that showed inverted safety patterns, which is potentially useful for red-teaming and adversarial analysis. It was created by inverting the safety expert rankings in order to study failure modes and vulnerability patterns.
The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:
- **GPQA**: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- **MMLU/MMLU-Pro**: Comprehensive knowledge across 57+ subjects including science, medicine, law
- **SORRY-Bench**: Safety evaluation across harmful content categories
- **Tulu3**: Persona-driven instruction following with verifiable constraints
- **Polyglot-or-Not**: Multilingual factual completion tasks
By identifying experts that consistently activated for harmful tasks, we created this specialized model that maintains domain expertise while significantly reducing computational requirements, cutting each layer from 32 to 7 experts.
## Dataset & Analysis Foundation
This model is based on analysis from the **GPT-OSS-20B MoE Expert Activations dataset** available at:
🔗 **https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations**
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
### Pruning Methodology
Our approach involves:
1. **Activation Analysis**: Comprehensive evaluation of expert usage patterns across domain-specific tasks
2. **Expert Ranking**: Identification of the most frequently activated experts for target domains
3. **Systematic Pruning**: Reduction from 32 to 7 experts while preserving router functionality
4. **Quality Validation**: Testing to ensure maintained performance on target tasks
*This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.*
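For reference, the expert-ranking step can be sketched as below. The record structure (a `layer` field and a list of `expert_indices` per sample) is an assumption about how routing decisions might be stored, not the documented schema of the activations dataset; inspect the dataset before reusing this.
```python
# Hedged sketch: rank experts per layer by how often the router selected them.
# The field names ("layer", "expert_indices") are assumptions, not the dataset's schema.
from collections import Counter, defaultdict

def rank_experts(records, keep=7):
    """records: iterable of dicts like {"layer": int, "expert_indices": [int, ...]}."""
    counts = defaultdict(Counter)
    for rec in records:
        counts[rec["layer"]].update(rec["expert_indices"])
    return {
        layer: [expert for expert, _ in counter.most_common(keep)]
        for layer, counter in counts.items()
    }

# Toy example: two layers, a handful of routing decisions.
toy = [
    {"layer": 0, "expert_indices": [3, 7, 7, 12]},
    {"layer": 0, "expert_indices": [7, 3, 19, 19]},
    {"layer": 1, "expert_indices": [1, 1, 2, 30]},
]
print(rank_experts(toy, keep=2))  # {0: [7, 3], 1: [1, 2]}
```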
## Usage
### CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-6.0b-specialized-harmful-pruned-moe-only-7-experts",
torch_dtype=torch.bfloat16,
device_map="cpu",
trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.0b-specialized-harmful-pruned-moe-only-7-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are some common logical fallacies in arguments?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
"AmanPriyanshu/gpt-oss-6.0b-specialized-harmful-pruned-moe-only-7-experts",
torch_dtype=torch.float16, # Better MPS compatibility
device_map=device,
trust_remote_code=True,
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-6.0b-specialized-harmful-pruned-moe-only-7-experts")
# Generate with the model
messages = [
{"role": "user", "content": "What are some common logical fallacies in arguments?"}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
reasoning_effort="medium"
)
# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}
# Use torch.no_grad for MPS stability
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.1,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
use_cache=True
)
# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
### GPU Inference
For CUDA GPU inference, the model runs with minimal modifications to the CPU example above. Simply change:
```python
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
```
## Model Examples
The following examples demonstrate the model's performance on harmful tasks:
### Example 1
**Question:** What are some common logical fallacies in arguments?
**Response:**
```
<|message|>
```
---
### Example 2
**Question:** How do misinformation campaigns typically spread online?
**Response:**
```
<|message|>
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language model trained by OpenAI. Knowledge cutoff: 2024-06
ChatGPT, a large language
```
---
### Example 3
**Question:** What are the psychological tactics used in propaganda?
**Response:**
```
<|message|> What are the question of the knowledge and the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge of the knowledge
```
---
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
author={Priyanshu, Aman and Vijay, Supriti},
year={2025},
howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
## References & Resources
- **Original Model**: [OpenAI GPT-OSS Model Card](https://openai.com/index/introducing-gpt-oss/)
- **Model Hub**: [GPT-OSS-20B on Hugging Face](https://huggingface.co/openai/gpt-oss-20b)
- **Expert Analysis Dataset**: [GPT-OSS-20B MoE Expert Activations](https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- **Project Page**: [GPT-OSS MoE Expert Fingerprinting](https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- **GitHub Repository**: [OpenAI GPT-OSS](https://github.com/openai/gpt-oss)
|
donbale/ConverseCart-Gemma-2B-Instruct
|
donbale
| 2025-08-13T13:43:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2b-it",
"base_model:finetune:google/gemma-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T13:30:40Z |
---
base_model: google/gemma-2b-it
library_name: transformers
model_name: ConverseCart-Gemma-2B-Instruct
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for ConverseCart-Gemma-2B-Instruct
This model is a fine-tuned version of [google/gemma-2b-it](https://huggingface.co/google/gemma-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="donbale/ConverseCart-Gemma-2B-Instruct", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
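For orientation, a minimal sketch of an SFT run with TRL is shown below. The dataset, output directory, and hyperparameters are placeholders and do not reflect the actual configuration used to train this model.
```python
# Hedged sketch of supervised fine-tuning with TRL's SFTTrainer.
# Dataset and hyperparameters are placeholders, not this model's training recipe.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder chat dataset

training_args = SFTConfig(
    output_dir="conversecart-gemma-2b-sft",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model="google/gemma-2b-it",
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```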
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Stefanaz/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-lumbering_sturdy_buffalo
|
Stefanaz
| 2025-08-13T13:41:00Z | 101 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am lumbering_sturdy_buffalo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-09T15:43:50Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am lumbering_sturdy_buffalo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Alizabethli/Qwen32_SFT_RL_gpt
|
Alizabethli
| 2025-08-13T13:31:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-07T04:29:18Z |
---
base_model: Qwen/Qwen2.5-32B-Instruct
library_name: transformers
model_name: Qwen32_SFT_RL_gpt
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen32_SFT_RL_gpt
This model is a fine-tuned version of [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alizabethli/Qwen32_SFT_RL_gpt", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.5.1+cu121
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
mohammad-shirkhani/Qwen2.5-1.5B-GRPO-rating-new
|
mohammad-shirkhani
| 2025-08-13T13:14:00Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:mohammad-shirkhani/social_movielens_compress",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T13:38:00Z |
---
base_model: Qwen/Qwen2.5-1.5B-Instruct
datasets: mohammad-shirkhani/social_movielens_compress
library_name: transformers
model_name: Qwen2.5-1.5B-GRPO-rating-new
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-1.5B-GRPO-rating-new
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [mohammad-shirkhani/social_movielens_compress](https://huggingface.co/datasets/mohammad-shirkhani/social_movielens_compress) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="mohammad-shirkhani/Qwen2.5-1.5B-GRPO-rating-new", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.14.0
- Transformers: 4.47.1
- Pytorch: 2.6.0+cu124
- Datasets: 3.2.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
chainway9/blockassist-bc-untamed_quick_eel_1755088979
|
chainway9
| 2025-08-13T13:09:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"untamed quick eel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T13:09:14Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- untamed quick eel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ecamli/blockassist-bc-hulking_soft_hippo_1755090267
|
ecamli
| 2025-08-13T13:05:26Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hulking soft hippo",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T13:05:08Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hulking soft hippo
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
koloni/blockassist-bc-deadly_graceful_stingray_1755088513
|
koloni
| 2025-08-13T13:03:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T13:03:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755088200
|
mang3dd
| 2025-08-13T12:55:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T12:55:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
rayonlabs/tournament-tourn_21c0fa9e21db603d_20250808-db098a21-8fbb-42c2-9215-5a0a291d46c4-5FLb19Vd
|
rayonlabs
| 2025-08-13T12:53:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"conversational",
"arxiv:2305.18290",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T12:51:54Z |
---
library_name: transformers
model_name: app/checkpoints/79726cd6-5a9b-4b7b-92eb-90d9d1222a3e/tournament-tourn_21c0fa9e21db603d_20250808-db098a21-8fbb-42c2-9215-5a0a291d46c4-5FLb19Vd
tags:
- generated_from_trainer
licence: license
---
# Model Card for app/checkpoints/79726cd6-5a9b-4b7b-92eb-90d9d1222a3e/tournament-tourn_21c0fa9e21db603d_20250808-db098a21-8fbb-42c2-9215-5a0a291d46c4-5FLb19Vd
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="None", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.18.2
- Transformers: 4.53.1
- Pytorch: 2.5.1+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.2
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
elmenbillion/blockassist-bc-beaked_sharp_otter_1755088020
|
elmenbillion
| 2025-08-13T12:53:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"beaked sharp otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T12:52:57Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- beaked sharp otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
NirZabariLTX/cinematic-finetune
|
NirZabariLTX
| 2025-08-13T12:36:48Z | 0 | 0 | null |
[
"LTXV",
"text-to-video",
"dataset:CinematicT2vData/cinepile-t2v-long",
"region:us"
] |
text-to-video
| 2025-08-11T10:33:02Z |
---
base_model:
- Lightricks/LTX-Video-0.9.7
datasets:
- CinematicT2vData/cinepile-t2v-long
pipeline_tag: text-to-video
tags:
- LTXV
---
# Cinematic finetune
cinematique_lora_13b_landscape_768_576_121 is a LoRA checkpoint fine-tuned for cinematic landscape generation at 768×576 resolution with 121 frames (~5 seconds), built on top of a 13B base model.
# LoRA config
```
rank: 128
alpha: 128
dropout: 0.0
target_modules:
- "to_k"
- "to_q"
- "to_v"
- "to_out.0"
```
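The card does not include inference code. Below is a minimal sketch of how a LoRA like this might be applied with the `diffusers` LTX-Video pipeline; the pipeline class, base repo id, LoRA weight filename, prompt, and call signature are assumptions and may need adjusting for this specific checkpoint.
```python
# Hypothetical loading sketch -- pipeline class, weight filename, and call
# signature are assumptions, not part of this card.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the base model the LoRA was trained on (per the card metadata).
pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video-0.9.7", torch_dtype=torch.bfloat16
).to("cuda")

# Attach the cinematic LoRA checkpoint from this repository.
pipe.load_lora_weights(
    "NirZabariLTX/cinematic-finetune",
    weight_name="cinematique_lora_13b_landscape_768_576_121.safetensors",  # assumed filename
)

# Generate at the resolution/length the LoRA was tuned for: 768x576, 121 frames.
video = pipe(
    prompt="A slow cinematic dolly shot over a misty mountain valley at dawn",
    width=768,
    height=576,
    num_frames=121,
).frames[0]

export_to_video(video, "cinematic_sample.mp4", fps=24)
```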
---
license: apache-2.0
---
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755086425
|
mang3dd
| 2025-08-13T12:26:48Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T12:26:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vengky/blockassist-bc-wild_gentle_manatee_1755083992
|
vengky
| 2025-08-13T12:18:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wild gentle manatee",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T12:18:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wild gentle manatee
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
andriuusa/blockassist-bc-amphibious_deft_lion_1755084461
|
andriuusa
| 2025-08-13T12:07:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"amphibious deft lion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T12:06:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- amphibious deft lion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
dodislavik/MODELS
|
dodislavik
| 2025-08-13T12:05:21Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-04-07T15:28:02Z |
---
license: apache-2.0
---
|
toppnoche/qwen-tuned
|
toppnoche
| 2025-08-13T12:03:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-VL-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T11:45:52Z |
---
base_model: Qwen/Qwen2.5-VL-3B-Instruct
library_name: transformers
model_name: qwen-tuned
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen-tuned
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="toppnoche/qwen-tuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.22.0.dev0
- Transformers: 4.56.0.dev0
- Pytorch: 2.4.1+cu121
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
iproskurina/bert-base-cased-olid-s2
|
iproskurina
| 2025-08-13T11:57:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-12T15:29:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755084605
|
mang3dd
| 2025-08-13T11:57:07Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-13T11:57:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
phospho-app/svitv-ACT_BBOX-Cub777-nigap
|
phospho-app
| 2025-08-13T11:54:23Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:phospho-app/Cub777_bboxes",
"region:us"
] |
robotics
| 2025-08-13T11:31:12Z |
---
datasets: phospho-app/Cub777_bboxes
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/Cub777_bboxes](https://huggingface.co/datasets/phospho-app/Cub777_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
giovannidemuri/llama8b-er-afg-v111-seed2-hx
|
giovannidemuri
| 2025-08-13T11:48:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-13T11:46:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
obadx/muaalem-model-v0
|
obadx
| 2025-08-13T11:32:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"multi_level_ctc",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2025-08-12T20:04:37Z |
---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
model-index:
- name: muaalem-model-v0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# muaalem-model-v0
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Average Per: 0.0057
- Loss: 0.0092
- Per Ghonna: 0.0012
- Per Hams Or Jahr: 0.0013
- Per Istitala: 0.0008
- Per Itbaq: 0.0009
- Per Phonemes: 0.0027
- Per Qalqla: 0.0036
- Per Safeer: 0.0010
- Per Shidda Or Rakhawa: 0.0457
- Per Tafashie: 0.0008
- Per Tafkheem Or Taqeeq: 0.0014
- Per Tikraar: 0.0035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
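For readers who want to reproduce this configuration, the hyperparameters above map roughly onto the following `TrainingArguments`; this is a sketch only, and the output directory and any trainer wiring are assumptions.
```python
# A minimal sketch mapping the listed hyperparameters onto Hugging Face
# TrainingArguments; output_dir is an assumption.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="muaalem-model-v0",        # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch",                  # AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="constant",
    warmup_ratio=0.2,
    num_train_epochs=1,
)
```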
### Training results
| Training Loss | Epoch | Step | Average Per | Validation Loss | Per Ghonna | Per Hams Or Jahr | Per Istitala | Per Itbaq | Per Phonemes | Per Qalqla | Per Safeer | Per Shidda Or Rakhawa | Per Tafashie | Per Tafkheem Or Taqeeq | Per Tikraar |
|:-------------:|:-----:|:----:|:-----------:|:---------------:|:----------:|:----------------:|:------------:|:---------:|:------------:|:----------:|:----------:|:---------------------:|:------------:|:----------------------:|:-----------:|
| 0.1776 | 0.2 | 712 | 0.0338 | 0.0562 | 0.0022 | 0.0023 | 0.0015 | 0.0017 | 0.0066 | 0.0084 | 0.0016 | 0.3347 | 0.0013 | 0.0035 | 0.0077 |
| 0.0287 | 0.4 | 1424 | 0.0080 | 0.0150 | 0.0016 | 0.0016 | 0.0011 | 0.0012 | 0.0048 | 0.0056 | 0.0013 | 0.0601 | 0.0010 | 0.0034 | 0.0064 |
| 0.0142 | 0.6 | 2136 | 0.0060 | 0.0102 | 0.0014 | 0.0013 | 0.0009 | 0.0010 | 0.0032 | 0.0062 | 0.0011 | 0.0426 | 0.0008 | 0.0021 | 0.0054 |
| 0.0123 | 0.8 | 2848 | 0.0052 | 0.0105 | 0.0015 | 0.0016 | 0.0010 | 0.0012 | 0.0032 | 0.0037 | 0.0012 | 0.0359 | 0.0010 | 0.0020 | 0.0046 |
| 0.0112 | 1.0 | 3560 | 0.0057 | 0.0092 | 0.0012 | 0.0013 | 0.0008 | 0.0009 | 0.0027 | 0.0036 | 0.0010 | 0.0457 | 0.0008 | 0.0014 | 0.0035 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.8.0+cu128
- Datasets 3.3.2
- Tokenizers 0.21.4
|
mekpro/distill-whisper-fft-2508
|
mekpro
| 2025-08-13T11:25:38Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"text-generation-inference",
"unsloth",
"trl",
"en",
"base_model:distil-whisper/distil-large-v3.5",
"base_model:finetune:distil-whisper/distil-large-v3.5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-11T15:52:01Z |
---
base_model: distil-whisper/distil-large-v3.5
tags:
- text-generation-inference
- transformers
- unsloth
- whisper
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** mekpro
- **License:** apache-2.0
- **Finetuned from model :** distil-whisper/distil-large-v3.5
This whisper model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
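The card does not include a usage example. Since this is a Whisper-style ASR checkpoint, the standard `transformers` speech-recognition pipeline should apply; the sketch below is an assumption, and `audio.wav` is a placeholder path.
```python
# Minimal usage sketch (not part of the original card).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mekpro/distill-whisper-fft-2508",
    device="cuda",  # or omit to run on CPU
)

# "audio.wav" is a placeholder path to a local audio file.
result = asr("audio.wav")
print(result["text"])
```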
|
avtak/erisk-longformer-depression-v1
|
avtak
| 2025-08-13T11:23:55Z | 0 | 0 | null |
[
"safetensors",
"longformer",
"text-classification",
"depression-detection",
"mental-health",
"en",
"base_model:AIMH/mental-longformer-base-4096",
"base_model:finetune:AIMH/mental-longformer-base-4096",
"license:mit",
"region:us"
] |
text-classification
| 2025-08-13T09:41:26Z |
---
license: mit
language: en
tags:
- text-classification
- depression-detection
- longformer
- mental-health
base_model: AIMH/mental-longformer-base-4096
---
# Early Depression Detection using Longformer and Data Augmentation
This is a fine-tuned version of `AIMH/mental-longformer-base-4096` for detecting linguistic markers of depression risk based on a user's entire posting history. This model is the primary artifact of the research project, "Early Depression Detection and Correlational Analysis on eRisk by Longformer and Data Augmentation."
## Project Summary
This model was developed as part of a Master's research project to address the challenges of early depression detection from noisy and imbalanced social media data. The methodology involved:
1. Fine-tuning a domain-specific `Mental-Longformer` model, chosen for its ability to handle long user histories (up to 4096 tokens).
2. Implementing an advanced data augmentation strategy using **Gemini 2.5 Flash Lite** to mitigate severe class imbalance.
3. Conducting a comprehensive correlational analysis to uncover behavioral, social, and linguistic patterns of depression online.
On the final held-out eRisk 2025 test set, this model achieved an **F1-score of 0.77** for the depressed class, demonstrating robust generalization.
## Training Procedure
### Base Model
This model was fine-tuned from [AIMH/mental-longformer-base-4096](https://huggingface.co/AIMH/mental-longformer-base-4096), a Longformer model pre-trained on a large corpus of text from online mental health forums, making it highly specialized for this domain.
### Training Data
The model was fine-tuned on user-level data from the **eRisk dataset** (CLEF 2017, 2018, and 2022). Due to the sensitive nature and licensing of this data, it cannot be redistributed. Please refer to the official CLEF eRisk workshops for information on data access.
### Data Augmentation Strategy
To address the critical challenges of data scarcity and class imbalance, a multi-pronged data augmentation strategy was employed for the depressed (minority) class, powered by **Gemini 2.5 Flash Lite**:
- **Translation:** Non-English posts from depressed users were translated into English to increase data volume.
- **Paraphrasing:** Gemini was prompted to generate multiple, contextually relevant rephrased versions of existing depressed posts, increasing linguistic diversity.
- **Quality Control:** Augmented samples were rigorously filtered based on semantic similarity and sentiment consistency to ensure high fidelity and prevent the introduction of noise.
This augmentation strategy proved highly effective, enabling the Longformer model to learn more robust patterns from an expanded minority class.
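The filtering code itself is not published in this card; the sketch below illustrates what a semantic-similarity plus sentiment-consistency check of this kind might look like. The encoder, sentiment classifier, threshold, and helper structure are assumptions, not the authors' exact pipeline.
```python
# Illustrative quality-control sketch; model names and thresholds are assumptions.
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed sentence encoder
sentiment = pipeline("sentiment-analysis")            # assumed sentiment classifier

def keep_augmented(original: str, augmented: str, sim_threshold: float = 0.8) -> bool:
    """Keep an augmented post only if it stays semantically close to the
    original and does not flip its sentiment label."""
    emb = embedder.encode([original, augmented], convert_to_tensor=True)
    similarity = util.cos_sim(emb[0], emb[1]).item()
    same_sentiment = (
        sentiment(original)[0]["label"] == sentiment(augmented)[0]["label"]
    )
    return similarity >= sim_threshold and same_sentiment

print(keep_augmented(
    "I can't find the energy to do anything these days.",
    "Lately I have no energy for anything at all.",
))
```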
## Performance
The model's performance was evaluated in two stages: through 5-fold cross-validation during training, and on a final, held-out test set (eRisk 2025).
### Final Test Set Performance (eRisk 2025)
This is the primary result, showing the performance of the single best model on completely unseen data.
| Class | Precision | Recall | F1-Score | Support |
|-------------------|-----------|--------|----------|---------|
| non-depressed (0) | 0.9658 | 0.9789 | 0.9723 | 807 |
| **depressed (1)** | **0.8132**| **0.7255**| **0.7668**| **102** |
| | | | | |
| **Accuracy** | | | **0.9505**| **909** |
| **Weighted Avg** | **0.9486**| **0.9505**| **0.9493**| **909** |
### Training & Validation Stability (5-Fold Cross-Validation)
To ensure the model is robust, it was trained using 5-fold cross-validation on the combined `2017-2022` eRisk datasets. The average performance across the 5 validation folds demonstrates the model's stability.
- **Mean F1-Score across 5 Folds:** 0.8623
- **Standard Deviation of F1-Score:** 0.0093
The low standard deviation indicates that the model performs consistently across different subsets of the training data. The model uploaded here is the best-performing single model from Fold 1 of this process.
## How to Use
You can use this model with a `text-classification` pipeline.
```python
from transformers import pipeline
# Load the model from the Hub
pipe = pipeline("text-classification", model="avtak/erisk-longformer-depression-v1")
# The model works best on longer texts that represent a collection of posts
user_posts = """
I've been feeling really down lately. Nothing seems fun anymore...
I tried playing my favorite game but I just couldn't get into it.
Sleep is my only escape but I wake up feeling just as tired.
"""
result = pipe(user_posts)
print(result)
# [{'label': 'LABEL_1', 'score': 0.85}] -> Example output where LABEL_1 is the "depressed" class
```
## Ethical Considerations and Limitations
- **Not a Diagnostic Tool:** This model is **NOT** a medical diagnostic tool and should not be used as such. It only identifies statistical patterns in language that are correlated with a depression label in a specific dataset. Please consult a qualified healthcare professional for any mental health concerns.
- **High Risk of Misuse:** Using this model to automatically label or judge individuals online is a misuse of the technology. It should only be used for research purposes under ethical guidelines.
- **Bias in Data:** The training data is from Reddit, a platform with a specific demographic user base. The model may not generalize well to other platforms, cultures, or demographic groups. The linguistic expression of mental distress varies greatly.
- **Correlation, not Causation:** The model identifies linguistic patterns correlated with depression, not the causes of depression.
## Author and Contact
This model was developed by **Hassan Hassanzadeh Aliabadi** as part of a Master in Data Science degree at Universiti Malaya.
- **LinkedIn:** [https://www.linkedin.com/in/hassanzh/](https://www.linkedin.com/in/hassanzh/)
- **Hugging Face:** [https://huggingface.co/avtak](https://huggingface.co/avtak)
- **Google Scholar:** [https://scholar.google.com/citations?hl=en&user=7sU9U1QAAAAJ](https://scholar.google.com/citations?hl=en&user=7sU9U1QAAAAJ)
For questions about this model, please open a discussion on the Hugging Face community tab.
## Citation
If you use this model in your research, please consider citing it:
```bibtex
@misc{hassanzadeh_aliabadi_erisk_2025,
author = {Hassan Hassanzadeh Aliabadi},
title = {Early Depression Detection and Correlational Analysis on eRisk by Longformer and Data Augmentation},
year = {2025},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/avtak/erisk-longformer-depression-v1}}
}
```
|
aaeshaaldahmani/code-analysis-falcon-h1-7b-instruct-ft
|
aaeshaaldahmani
| 2025-08-13T11:19:48Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:tiiuae/Falcon-H1-7B-Instruct",
"base_model:finetune:tiiuae/Falcon-H1-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-13T11:15:58Z |
---
base_model: tiiuae/Falcon-H1-7B-Instruct
library_name: transformers
model_name: code-analysis-falcon-h1-7b-instruct-ft
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for code-analysis-falcon-h1-7b-instruct-ft
This model is a fine-tuned version of [tiiuae/Falcon-H1-7B-Instruct](https://huggingface.co/tiiuae/Falcon-H1-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="aaeshaaldahmani/code-analysis-falcon-h1-7b-instruct-ft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/aldahmani-aysha-technology-innovation-institute/huggingface/runs/nvu1o02l)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.1
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
fizzarif7/llama2_pklaw_gpt
|
fizzarif7
| 2025-08-13T11:04:54Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | 2025-08-13T10:15:22Z |
---
base_model: meta-llama/Llama-2-7b-chat-hf
library_name: peft
---
# Model Card for LLaMA-2-7B-Chat Fine-tuned on Pakistani Legal Q&A Dataset (QLoRA)
## Model Details
### Model Description
This model is a **fine-tuned version of LLaMA-2-7B-Chat** using the **QLoRA** technique on a **custom Pakistani legal Q&A dataset**.
It is designed to **answer questions related to Pakistani laws** in a concise and accurate manner.
- **Base model**: meta-llama/Llama-2-7b-chat-hf
- **Fine-tuning method**: QLoRA (4-bit quantization)
- **Framework**: Hugging Face Transformers + PEFT
- **Primary purpose**: Legal information and education (non-advisory)
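For context, a minimal QLoRA setup consistent with the description above (4-bit quantized base model plus LoRA adapters via PEFT) is sketched below; the LoRA rank, scaling, and target modules are assumptions, not the values used for this checkpoint.
```python
# QLoRA setup sketch: 4-bit base model + LoRA adapters via PEFT.
# Rank, alpha, and target modules are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-chat-hf"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb_config, device_map="auto"
)

lora_config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling
    target_modules=["q_proj", "v_proj"],   # assumed target modules
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```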
### Model Sources
- **Repository:** [https://huggingface.co/fizzarif7/llama2-paklaw-gpt](https://huggingface.co/fizzarif7/llama2-paklaw-gpt)
- **Howpublished:** \url{https://huggingface.co/fizzarif7/llama2-paklaw-gpt}
- **Dataset source:** Official Pakistani government websites (public domain legal acts & regulations)
- **Demo:**
---
## Uses
### Direct Use
- Legal **information retrieval**
- **Educational** purposes: Understanding Pakistani laws, procedures, and definitions
### Downstream Use
- Integration into legal research assistants
- Support in law-related educational tools
### Out-of-Scope Use
- Real legal decision-making
- Providing confidential legal advice
- Any **non-Pakistani law** domain queries
---
## Dataset
- **Source**: Collected from **official Pakistani government websites** hosting public legal documents, acts, and regulations.
- **Format**: Converted from PDF to structured **Q&A format** (`Dataset.csv`).
- **Contents**: Includes questions about legal definitions, processes, and roles as per Pakistani law.
- **Size**: 1941 rows
- **Language**: English
## Bias, Risks, and Limitations
- The model’s knowledge is **limited** to the dataset scope and law versions at the time of collection
- May **hallucinate** answers for out-of-domain or ambiguous queries
- Not updated for recent law amendments unless retrained
### Recommendations
Users must **verify** answers against official legal sources before acting upon them.
---
## License
- Base model: LLaMA-2 license by Meta
- Dataset: Public government documents (open to public use, verify each source)
## Ethical Considerations & Risks
- **Do not** use for real legal decision-making.
- May misinterpret complex or ambiguous legal terms.
- Should not replace a qualified lawyer or legal expert.
### Evaluation
**Example Usage**
- **Q:** What is the significance of Article 181 of the Limitation Act, 1908, in relation to applications filed under various statutes, as interpreted by the High Court in this case?
- **A:** Article 181 of the Limitation Act, 1908, is significant because it provides a general rule for the computation of time for filing applications, including those under various statutes. The High Court's interpretation of this article, as seen in the case of "Mst. Naseem Bibi v. Mst. Hameeda Bibi", is that the limitation period begins to run on the day the application is made, rather than on the date of the event or occurrence that triggered the application. This interpretation ensures that applications are filed within the prescribed time frame, and it highlights the importance of considering the specific provision and context of each statute when determining the applicable limitation period.
### Citation
@misc{fizza2025paklawqlora,
  author = {Fizza Arif},
  title = {LLaMA-2-7B-Chat fine-tuned on Pakistani Legal Q\&A Dataset (QLoRA)},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/fizzarif7/llama2_pklaw_gpt}}
}
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_name = "fizzarif7/llama2-paklaw-gpt"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
prompt = "What is the importance of the general manager under Pakistani corporate law?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|
moscowx21/Qwen3-0.6B-Gensyn-Swarm-giant_pale_ferret
|
moscowx21
| 2025-08-13T11:02:24Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am giant_pale_ferret",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T13:55:33Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am giant_pale_ferret
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|